The government contracting process provides for consideration of various aspects of contractor performance at multiple points:

Past performance as source selection factor: Only relatively recently have federal agencies been required to consider past performance in selecting their contractors. In 1997, the Federal Acquisition Regulation (FAR) was modified to require that agencies consider past performance information as an evaluation factor in source selection. Past performance is now required to be an evaluation factor in selecting contractors, along with factors such as price, management capability, and technical approach to the work.

Responsibility determinations: Once a contractor is selected for award, the contracting officer must make an affirmative determination that the prospective awardee is capable and ethical. This is known as a responsibility determination, and includes, for example, whether a prospective awardee has adequate financial resources and technical capabilities to perform the work, has a satisfactory record of integrity and business ethics, and is eligible to receive a contract under applicable laws and regulations. As part of the responsibility determination, the contracting officer also must determine that the prospective awardee has a “satisfactory performance record” on prior contracts. This determination of the prospective awardee’s responsibility is separate from the comparison of the past performance of the competing offerors conducted for purposes of source selection.

Surveillance of performance under the current contract: Once a contract is awarded, the government should monitor a contractor’s performance throughout the performance period. Surveillance includes oversight of a contractor’s work to provide assurance that the contractor is providing timely and quality goods or services and to help mitigate any contractor performance problems.
An agency’s monitoring of a contractor’s performance may serve as a basis for past performance evaluations in future source selections. GAO reported in March 2005 on shortfalls at DOD in assigning and training contract surveillance personnel, and recommended improvements in this area.

Suspension and debarment: Contractor performance also comes into play in suspensions and debarments. A suspension is a temporary exclusion of a contractor pending the completion of an investigation or legal proceedings, while a debarment is a fixed-term exclusion lasting no longer than 3 years. To protect the government’s interests, agencies can debar contractors from future contracts for various reasons, including serious failure to perform to the terms of a contract. Suspensions and debarments raise a whole set of procedural and policy issues beyond past performance, not the least of which is whether these are useful tools in an environment in which recent consolidations have resulted in dependence on fewer and larger government contractors. Questions have also been raised about whether delinquent taxes or an unresolved tax lien should result in suspension or debarment. A proposed revision to the FAR would list these tax issues as grounds for suspension or debarment. In July 2005, GAO reported on the suspension and debarment process at several federal agencies and recommended ways to improve the process.

In the Federal Acquisition Streamlining Act (FASA) of 1994, Congress stated that in the award of contracts, agencies should consider the past performance of contractors to assess the likelihood of successful performance of the contract. FASA required the adoption of regulations to reflect this principle, and the FAR now requires the consideration of past performance in award determinations.
The Office of Federal Procurement Policy (OFPP) has issued guidance on best practices for using past performance information in source selection, and individual agencies have issued their own guidance on implementing the FAR requirements. For agencies under the FAR, a solicitation for a contract must disclose to potential offerors all evaluation factors that will be used in selecting a contractor. Agencies are required to consider past performance in all negotiated procurements above the simplified acquisition threshold of $100,000 and in all procurements for commercial goods or services. Although past performance must be a significant evaluation factor in the award process, agencies have broad discretion to set the precise weight to be afforded past performance relative to other factors in the evaluation scheme. Whatever they decide about weights, agencies must evaluate proposals in accordance with the evaluation factors set forth in the solicitation, and in a manner consistent with applicable statutes and regulations. Agencies must allow offerors to identify past performance references in their proposals, but also may consider information obtained from any other source. In evaluating an offeror’s past performance, the agency must consider the recency and relevance of the information to the current solicitation, the source and context of the information, and the general trends in the offeror’s past performance. Offerors who do not have any past performance may not be evaluated favorably or unfavorably. That is, they must receive a neutral rating. In addition, the OFPP has issued guidance on best practices for considering past performance data. Consistent with the FAR, OFPP guidance states that agencies are required to assess contractor performance after a contract is completed and must maintain and share performance records with other agencies. 
The guidance encourages agencies to make contractor performance records an essential consideration in the award of negotiated acquisitions, and gives guidelines for evaluation. It also encourages agencies to establish automated mechanisms to record and disseminate performance information. If agencies use manual systems, the data should be readily available to source selection teams. Performance records should specifically address performance in the areas of (1) cost, (2) schedule, (3) technical performance (quality of product or service), and (4) business relations, including customer satisfaction, using a five-point rating scale. Agencies may also issue their own supplemental regulations or guidance related to past performance information. All three of the largest departments in federal procurement spending - the Department of Defense, the Department of Energy, and the Department of Homeland Security - provide at least some additional guidance on the use of past performance data, addressing aspects such as the process to be followed for considering past performance during contract award and the systems to be used to store and retrieve past performance data. Below are some examples that illustrate the types of guidance available. DOD offers instruction on using past performance in source selection and contractor responsibility determinations through the Defense Federal Acquisition Regulation Supplement and related Procedures, Guidance, and Information. DOD’s Office of Defense Procurement and Acquisition Policy also has made available a guide that provides more detailed standards for the collection and use of past performance information, including criteria applicable to various types of contracts.
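The four assessment areas and five-point scale described above can be pictured as a simple record structure. This is only an illustrative sketch: the field names and scale labels below are assumptions for the example (the labels follow common federal practice), not an official schema from the OFPP guidance.

```python
from dataclasses import dataclass

# Illustrative five-point scale labels (assumed, following common
# federal practice); the OFPP guidance specifies a five-point scale
# but this testimony does not name the labels.
RATING_SCALE = {1: "Unsatisfactory", 2: "Marginal", 3: "Satisfactory",
                4: "Very Good", 5: "Exceptional"}

@dataclass
class PerformanceRecord:
    """One contractor performance record covering the four areas
    named in the OFPP guidance; field names are hypothetical."""
    contract_id: str
    cost: int                # cost control, rated 1-5
    schedule: int            # timeliness, rated 1-5
    technical: int           # quality of product or service, rated 1-5
    business_relations: int  # includes customer satisfaction, rated 1-5
```

A source selection team consulting such records would read each area's rating against the scale, e.g. a `technical` rating of 5 corresponds to the top of the five-point scale.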
DOE also provides additional guidance to contracting officers in the form of an acquisition guide that discusses current and past performance as a tool to predict future performance, including guidelines for assessing a contractor’s past performance for the purpose of making contract award decisions as well as for making decisions regarding the exercise of contract options on existing contracts. At DHS, the department’s supplemental regulations outline which systems contracting officers must use to input and retrieve past performance data. Specifically, contracting officers and contracting officer representatives are required to input contractor performance data into the Contractor Performance System, managed by the National Institutes of Health, and use the Past Performance Information Retrieval System (PPIRS) - which contains contractor performance ratings from multiple government systems - to obtain information on contractor past performance to assist with source selection. Although a seemingly simple concept, using past performance information in source selection can be complicated in practice. GAO has not evaluated the practices that agencies use regarding contractor past performance information in source selection or whether those practices promote better contract outcomes. Our bid protest decisions, however, illustrate some of the complexities of using past performance information as a predictor of future contractor success. Some of these issues are listed below. In all of these cases, the key consideration is whether the performance evaluated can reasonably be considered predictive of the offeror’s performance under the contract being considered for award. Who: One issue is whose performance agencies should consider. 
Source selection officials are permitted to rate the past performance of the prime contractor that submits the offer, the key personnel the prime contractor plans to employ, the major teaming partners or subcontractors, or a combination of any or all of these. For example, in one case, GAO found that the agency could consider the past performance of a predecessor company because the offeror had assumed all of the predecessor’s accounts and key personnel, technical staff, and other employees. In another case, GAO held that an agency could provide in a solicitation for the evaluation of the past performance of a corporation rather than its key personnel. What: Also at issue is what information agencies are required or permitted to consider in conducting evaluations of past performance. The issue is one of relevancy. Agencies must determine which of the contractor’s past contracts are similar to the current contract in terms of size, scope, complexity, or contract type. For example, is past performance building single-family homes relevant to a proposal to build a hospital? Agencies do not have to consider all available past performance information. However, they should consider all information that is so relevant that it cannot be overlooked, such as an incumbent contractor’s past performance. In one case, GAO found that an agency reasonably determined that the protester’s past performance on small projects was not relevant to a contract to build a berthing wharf for an aircraft carrier. When: Agencies also have to determine the period of time for which they will evaluate the past performance of contractors. Agencies are required to maintain performance data for 3 years after the conclusion of a contract, although agencies have discretion as to the actual length of time they consider in their evaluation of past performance and could, for example, choose a period longer than 3 years.
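The time-window judgment described above - a 3-year retention requirement, with agency discretion to look further back - can be sketched as a simple date check. This is a hedged illustration, not an implementation of any agency's actual evaluation procedure; the function name and default are assumptions.

```python
from datetime import date

# Sketch: does a past contract fall inside the evaluation window?
# The 3-year default reflects the retention period described above;
# an agency may choose a longer window, passed via `years`.
def within_window(contract_end: date, as_of: date, years: int = 3) -> bool:
    try:
        cutoff = as_of.replace(year=as_of.year - years)
    except ValueError:
        # as_of is Feb 29 and the cutoff year is not a leap year
        cutoff = as_of.replace(month=2, day=28, year=as_of.year - years)
    return contract_end >= cutoff
```

An evaluator choosing a longer look-back would simply widen `years`, which mirrors the discretion the FAR leaves to agencies.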
In one case, GAO held that although the solicitation required the company to list contracts within a 3-year time frame, the agency could consider contract performance beyond this time frame because the solicitation provided that the government may “consider information concerning the offeror’s past performance that was not contained in the proposal.” Where: Once agencies determine whom they will evaluate, what information they will consider, and the relevant time frame, they still may have difficulties obtaining past performance information. Agencies can obtain past performance information from multiple sources, including databases such as PPIRS - a centralized, online database that contains federal contractor past performance information. However, in 2006, the General Services Administration noted that PPIRS contains incomplete information for some contractors. Agencies may also obtain information from references submitted with proposals and reference surveys. One case illustrates how an agency evaluated a company based on limited past performance information. The agency assigned the company a neutral rating because the agency did not receive completed questionnaires from the company’s references listing relevant work, and the solicitation provided that it was the company’s obligation to ensure that the past performance questionnaires were completed and returned. These are just some of the many issues that have been the subject of protests involving the use of past performance. Our cases are not necessarily representative of what may be occurring throughout the procurement system, but they do provide a window into how the issue is handled across a number of agencies. At a minimum, however, our cases suggest that the relatively straightforward concept of considering past performance in awarding new contracts has given rise to a number of questions that continue to surface as that concept is implemented. Mr.
Chairman, this concludes my statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information regarding this testimony, please contact William T. Woods at (202) 512-4841 or woodsw@gao.gov. Individuals making key contributions to this testimony included Carol Dawn Petersen, E. Brandon Booth, James Kim, Ann Marie Udale, Anne McDonough-Hughes, Kelly A. Richburg, Marcus Lloyd Oliver, Michael Golden, Jonathan L. Kang, Kenneth Patton, and Robert Swierczek. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government is the largest single buyer in the world, obligating over $400 billion in fiscal year 2006 for a wide variety of goods and services. Because contracting is so important to how many agencies accomplish their missions, it is critical that agencies focus on buying the right things the right way. This includes ensuring that contracts are awarded only to responsible contractors, and that contractors are held accountable for their performance. Use of contractor performance information is a key factor in doing so. This testimony covers three main areas concerning the use of contractor performance information: (1) the various ways in which a contractor's performance may be considered in the contracting process; (2) how information on past performance is to be used in selecting contractors, as well as the various mechanisms for how that occurs; and (3) some of the key issues that have arisen in considering past performance in source selection, as seen through the prism of GAO's bid protest decisions. GAO has previously made recommendations for improving the use of contractor performance information, but is not making any new recommendations in this testimony. The government contracting process provides for consideration of various aspects of contractor performance at multiple points: (1) Source selection: Past performance is required to be an evaluation factor in selecting contractors, along with factors such as price, management capability, and technical approach to the work. (2) Responsibility determinations: Once a contractor is selected for award, the contracting officer must make a responsibility determination that the prospective awardee is capable and ethical. This includes, for example, whether the prospective awardee has a satisfactory performance record on prior contracts. 
(3) Surveillance under the current contract: Once a contract is awarded, the government monitors a contractor's performance throughout the performance period, which may serve as a basis for performance evaluations in future source selections. (4) Debarment: To protect the government's interests, agencies can debar, that is preclude, contractors from receiving future contracts for various reasons, including serious failure to perform to the terms of a contract. Agencies are required to consider past performance in all negotiated procurements above the simplified acquisition threshold of $100,000 and in all procurements for commercial goods or services. Although past performance must be a significant evaluation factor in the award process, agencies have broad discretion to set the precise weight to be afforded to past performance relative to other factors in the evaluation scheme. Whatever they decide about weights, agencies must evaluate proposals in accordance with the evaluation factors set forth in the solicitation, and in a manner consistent with applicable statutes and regulations. In evaluating an offeror's past performance, the agency must consider the recency and relevance of the information to the current solicitation, the source and context of the information, and general trends in the offeror's past performance. The key consideration is whether the performance evaluated can reasonably be considered predictive of the offeror's performance under the contract being considered for award. Although a seemingly simple concept, using past performance information in source selections can be complicated in practice. GAO bid protest decisions illustrate some of the complexities of using past performance information as a predictor of future contractor success. Some of the questions raised in these cases are: (1) Who: Whose performance should the agencies consider? 
(2) What: What information are agencies required or permitted to consider in conducting evaluations of past performance? (3) When: What is the period of time for which agencies will evaluate the past performance of contractors? (4) Where: Where do agencies obtain contractor performance information?
When TSA began offering expedited screening at airports in the summer of 2011, transportation security officers (TSOs) initially provided such screenings in standard lanes to passengers aged 12 and younger, and subsequently extended expedited screening to certain flight crew members and then to passengers aged 75 and older. In October 2011, TSA began to expand the concept of expedited airport screening to more of the flying public by piloting the TSA Pre™ program. This pilot program allowed certain frequent fliers of two air carriers to experience expedited screening at four airports. These frequent fliers became eligible for screening in dedicated expedited screening lanes, called TSA Pre™ lanes, because they had opted into the TSA Pre™ program through the air carrier with which they had attained frequent flier status. TSA also allowed certain members of U.S. Customs and Border Protection’s (CBP) Trusted Traveler programs to experience expedited screening as part of the TSA Pre™ pilot. TSA provided expedited screening in dedicated screening lanes to these frequent fliers and eligible CBP Trusted Travelers during the TSA Pre™ pilot program because TSA used information available to it to determine that eligible passengers in these groups were lower risk. When traveling on one of the air carriers and departing from one of the airports participating in the pilot, these passengers were eligible to be screened in dedicated TSA Pre™ screening lanes where the passengers were not required to remove their shoes; divest light outerwear, jackets, and belts; or remove liquids, gels, and laptops from carry-on baggage. Since October 2011, TSA has further expanded the known traveler populations eligible for expedited screening.
After TSA piloted TSA Pre™ with certain passengers who are frequent fliers and members of CBP’s Trusted Traveler programs, TSA established separate TSA Pre™ lists for additional low-risk passenger populations, including members of the U.S. armed forces, Congressional Medal of Honor Society members, members of the Homeland Security Advisory Council, and Members of Congress, among others. In addition to TSA Pre™ lists sponsored by other agencies or entities, TSA created its own TSA Pre™ list composed of individuals who apply to be preapproved as low-risk travelers through the TSA Pre™ Application Program, an initiative launched in December 2013. To apply, individuals must visit an enrollment center where they provide biographic information (i.e., name, date of birth, and address), valid identity and citizenship documentation, and fingerprints to undergo a TSA Security Threat Assessment. TSA leveraged existing federal capabilities to both enroll and conduct threat assessments for program applicants, using enrollment centers previously established for the Transportation Worker Identification Credential Program and existing transportation vetting systems to conduct applicant threat assessments. Applicants must be U.S. citizens, U.S. nationals, or lawful permanent residents, and cannot have been convicted of certain crimes. As of March 2015, about 7.2 million individuals were eligible, through TSA Pre™ lists, for expedited screening. Figure 1 shows the populations for each TSA Pre™ list. In addition to passengers who are included on one of the TSA Pre™ lists, in October 2013, TSA continued to expand the opportunities for expedited screening to broader groups of passengers through the TSA Pre™ Risk Assessment program and the Managed Inclusion process, both of which are described in greater detail below.
Figure 2 shows a snapshot from February 25, 2015, through March 3, 2015, of the percentage of weekly passengers receiving non-expedited screening and expedited screening, and further shows whether known crew members experienced expedited screening, and whether expedited screening occurred in TSA Pre™ lanes (for passengers designated as known travelers or through the TSA Pre™ Risk Assessment program, or passengers chosen for expedited screening using Managed Inclusion) or in standard lanes. As noted in figure 2, during the week ending March 3, 2015, 28 percent of passengers nationwide received expedited screening. Some of these passengers were issued TSA Pre™ boarding passes but were provided expedited screening in a standard screening lane, meaning that they did not have to remove their shoes, belts, and light outerwear, but they had to divest their liquids, gels, and laptops. TSA provides expedited screening to TSA Pre™-eligible passengers in standard lanes when airports do not have dedicated TSA Pre™ screening lanes because of airport space constraints and limited TSA Pre™ throughput. As we found in 2014, TSA determines a passenger’s eligibility for or opportunity to experience expedited screening at the airport using one of three risk assessment methods: (1) inclusion on a TSA Pre™ list of known travelers, (2) identification of passengers as low risk by TSA’s Risk Assessment algorithm, or (3) a real-time threat assessment at the airport using the Managed Inclusion process. TSA has determined that the individuals included on the TSA Pre™ lists of known travelers are low risk by virtue of their membership in a specific group or based on group vetting requirements. For example, TSA determined that members of the Congressional Medal of Honor Society, a group whose members have been awarded the highest U.S.
award for valor in action against enemy forces, present a low risk to transportation security and are appropriate candidates to receive expedited screening. In other cases, TSA determined that members of groups whose members have undergone a security threat assessment by the federal government, such as individuals working for agencies in the intelligence community who hold active Top Secret/Sensitive Compartmented Information clearances, are low risk and can be provided expedited screening. Similarly, TSA designated all active and reserve service members of the United States armed forces, whose combined members total about 2 million people, as a low-risk group whose members were eligible for expedited screening. TSA determined that active duty military members were low risk and appropriate candidates to receive expedited screening because the Department of Defense administers common background checks of its members. Members of the list-based, low-risk populations who requested, or were otherwise deemed eligible, to participate in TSA Pre™ were provided a unique known traveler number. Their personal identifying information (name and date of birth), along with the known traveler number, is included on lists used by Secure Flight for screening. To be recognized as low risk by the Secure Flight system, individuals on TSA Pre™ lists with known traveler numbers must submit these numbers when making a flight reservation. The lists are maintained by ensuring that individuals continue to meet the criteria for inclusion and are updated as needed. TSA also continues to provide expedited screening on a per-flight basis to the almost 1.5 million frequent fliers who opted to participate in the TSA Pre™ program pilot. According to TSA, this group of eligible frequent fliers met the standards set for the pilot based on their frequent flier status as of October 1, 2011.
According to TSA officials, TSA determined that these frequent fliers were an appropriate population to include in the program for several reasons, including the fact that frequent fliers are vetted against various watchlists, such as the No-Fly list, each time they travel to ensure that they are not listed as known or suspected terrorists, and are screened appropriately at the checkpoint. As we found in December 2014, the TSA Pre™ Risk Assessment program evaluates passenger risk based on certain information available for the passenger’s specific flight and determines the likelihood that passengers will be designated as eligible to receive expedited screening through TSA Pre™. Beginning in 2011, TSA piloted the process of using the Secure Flight system to obtain Secure Flight Passenger Data from air carriers and other data to assess whether a passenger is low risk on a per-flight basis and thus eligible to receive a TSA Pre™ designation on his or her boarding pass to undergo expedited screening. In September 2013, after completing this pilot, TSA decided to explore expanding this risk assessment approach to every traveler. In order to develop the set of low-risk rules used to determine passengers’ relative risk, TSA formed an Integrated Project Team consisting of officials from the Offices of Security Operations, Intelligence and Analysis, Security Capabilities, and Risk-Based Security. The team used data from multiple sources, including passenger data from the Secure Flight system from calendar year 2012, to derive a baseline level of relative risk for the entire passenger population. Our review of TSA’s documentation in our 2014 report showed that TSA considered the three elements of risk assessment - Threat, Vulnerability, and Consequence - in its development of the risk assessment. These three elements constitute the framework for assessing risk as called for in the Department of Homeland Security’s National Infrastructure Protection Plan.
We found that TSA worked with a contractor to evaluate the data elements taken from information available for passengers’ specific flights and the proposed risk model rules used to determine the baseline level of relative risk. In its assessment of the algorithm used for the analysis, the contractor agreed with TSA’s analysis of the relationship between the data elements and relative risk assigned to the data elements. TSA officials stated that as of March 2015, the agency is continuing to refine the algorithm to include additional variables to help determine passenger risk. As we found in December 2014, although TSA determined that certain combinations of data elements in its risk-based algorithm are less likely to include unknown potential terrorists, it also noted that designating passengers as low risk based solely on the algorithm carries some risk. To mitigate these risks, TSA uses a random exclusion factor that places passengers, even those who are otherwise eligible for expedited screening, into standard screening a certain percentage of the time. TSA adjusts the level of random exclusion based on the relative risk posed by the combinations of various data elements used in the algorithm. The result is that passengers associated with some data combinations that carry more risk are randomly excluded from expedited screening more often than passengers associated with other data combinations. For example, TSA’s assessment indicated that combinations of certain data elements are considered relatively more risky than other data groups and passengers who fit this profile for a given flight should seldom be eligible for expedited screening, while combinations of other data on a given flight pose relatively less risk and therefore passengers who fit these combinations could be made eligible for expedited screening a majority of the time. 
TSA developed a risk algorithm that scores each passenger on each flight, and passengers with a high enough score receive a TSA Pre™ boarding pass designation making them eligible for expedited screening for that trip. As we found in December 2014, Managed Inclusion is designed to provide expedited screening to passengers not deemed low risk prior to arriving at the airport. TSA uses Managed Inclusion as a tool to direct passengers who are not on a TSA Pre™ list, or designated as eligible for expedited screening via the TSA Pre™ Risk Assessments, into the expedited screening lanes to increase passenger throughput in these lanes when the volume of TSA Pre™-eligible passengers is low. In addition, TSA developed Managed Inclusion to improve the efficiency of dedicated TSA Pre™ screening lanes as well as to help TSA reach its internal goal of providing expedited screening to at least 25 percent of passengers by the end of calendar year 2013. To operate Managed Inclusion, TSA randomly directs a certain percentage of passengers not previously designated that day as eligible for expedited screening to the TSA Pre™ expedited screening lane. To screen passengers who have been randomly directed into the expedited screening lane, TSA uses real-time threat assessments, including combinations of Behavior Detection Officers (BDOs), canine teams, and Explosives Trace Detection (ETD) devices, to ensure that passengers do not exhibit high-risk behaviors or otherwise present a risk at the airport. According to TSA, it designed the Managed Inclusion process using a layered approach to provide security when providing expedited screening to passengers via Managed Inclusion.
Specifically, these layers include (1) the Secure Flight vetting TSA performs to identify high-risk passengers required to undergo enhanced screening at the checkpoint and to ensure these passengers are not directed to TSA Pre™ expedited screening lanes, (2) a randomization process that TSA uses to include passengers in TSA Pre™ screening lanes who otherwise were not eligible for expedited screening, (3) BDOs who observe passengers and look for certain high-risk behaviors, (4) canine teams and ETD devices that help ensure that passengers have not handled explosive materials prior to travel, and (5) an unpredictable screening process involving walk-through metal detectors in expedited screening lanes that randomly select a percentage of passengers for additional screening. When passengers approach a security checkpoint that is operating Managed Inclusion, they approach a TSO holding a randomizer device, typically an iPad, that directs the passenger to the expedited or standard screening lane. TSA officials stated that the randomization layer of security is intended to ensure that passengers cannot count on being screened in the expedited screening lane even if they use a security checkpoint that is operating Managed Inclusion. Federal Security Directors (FSDs) can adjust the percentage of passengers randomly sent into the Managed Inclusion lane depending on specific risk factors. Figure 3 illustrates how these layers of security operate when FSDs use Managed Inclusion lanes. According to TSA, it designed the Managed Inclusion process to use BDOs stationed in the expedited screening lane as one of its layers of security when Managed Inclusion is operational to observe passengers’ behavior as they move through the security checkpoint queue. When BDOs observe certain behaviors that indicate a passenger may be higher risk, the BDOs are to refer the passenger to a standard screening lane so that the passenger can be screened using standard or enhanced screening procedures.
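Two of the layers above - the randomizer device and the BDO referral - can be sketched together. This is a minimal illustration under stated assumptions: the function, rate parameter, and flagging input are hypothetical simplifications, not TSA's actual procedure or software.

```python
import random

# Minimal sketch of the Managed Inclusion decision for one passenger:
# the FSD-set inclusion rate drives the randomizer, and a BDO referral
# (modeled here as a boolean) sends the passenger to standard screening.
def direct_passenger(inclusion_rate: float, bdo_flagged: bool,
                     rng: random.Random) -> str:
    if bdo_flagged:
        return "standard"      # BDO referral overrides randomization
    if rng.random() < inclusion_rate:
        return "expedited"     # randomly included this time
    return "standard"
```

Because the rate is a parameter, the sketch also captures the point that FSDs can dial the share of randomly included passengers up or down depending on local risk factors.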
In our November 2013 report on TSA's behavior detection and analysis program, we concluded that although TSA had taken several positive steps to validate the scientific basis of, and strengthen program management for, its behavior detection and analysis program, TSA had not demonstrated that BDOs can reliably and effectively identify high-risk passengers who may pose a threat to the U.S. aviation system. Further, we recommended that the Secretary of Homeland Security direct the TSA Administrator to limit future funding support for the agency's behavior detection activities until TSA can provide scientifically validated evidence demonstrating that behavioral indicators can be used to identify passengers who may pose a threat to aviation security. The Department of Homeland Security did not concur with this recommendation, in part because it disagreed with GAO's analysis of TSA's behavioral indicators. In February 2015, TSA officials told us that they had revised the behavioral indicators, were conducting pilot tests of new BDO protocols, and anticipated concluding the testing at 5 airports in late 2015. At that time, TSA plans to make a determination about whether the new protocols are ready for further testing, including an operational test in 10 airports to determine the protocols' effectiveness, which has an estimated completion date in the latter half of 2016. According to a TSA decision memorandum and its accompanying analysis, TSA uses canine teams and ETD devices at airports as an additional layer of security when Managed Inclusion is operational to determine whether passengers may have interacted with explosives prior to arriving at the airport. In airports with canine teams, passengers must walk past a canine and its handler in an environment where the canine is trained to detect explosive odors and to alert the handler when a passenger has any trace of explosives on his or her person. 
For example, passengers in the Managed Inclusion lane may be directed to walk from the travel document checker through the passageway and past the canine teams to reach the X-ray belt and the walk-through metal detector. According to TSA documents, the canines, when combined with the other layers of security in the Managed Inclusion process, provide effective security. According to TSA, it made this determination by considering the probability of canines detecting explosives on passengers, and then designed the Managed Inclusion process to ensure that passengers would encounter a canine a certain percentage of the time. Our prior work examined the data TSA had on its canine program, what these data showed, and the extent to which TSA analyzed these data to identify program trends. Further, we analyzed the extent to which TSA deployed canine teams using a risk-based approach and determined their effectiveness prior to deployment. As a result of this work, we recommended in January 2013, among other things, that TSA take actions to comprehensively assess the effectiveness of canine teams. The Department of Homeland Security concurred with this recommendation and has taken steps to address it. Specifically, according to TSA canine test results, TSA has assessed canine teams against the security effectiveness thresholds it established for working in the Managed Inclusion lane, and the canines met these thresholds, a requirement for screening passengers in Managed Inclusion lanes. In those airports where canines are unavailable or not working, TSA uses ETD devices as a layer of security when operating Managed Inclusion. TSOs stationed at the ETD device are to select passengers to have their hands swabbed as they move through the expedited screening lane. 
TSOs are to wait for a passenger to proceed through the Managed Inclusion queue and approach the device, where the TSO is to swab the passenger's hands with an ETD pad and place the pad in the ETD device to determine whether any explosive residue is detected on the pad. Once the passenger who was swabbed is cleared, the passenger then proceeds through the lane to the X-ray belt and walk-through metal detector for screening. TSA procedures require FSDs to meet certain performance requirements when ETD devices are operating, such as swabbing passengers at a designated rate, and TSA data from January 1, 2014, through April 1, 2014, show that these requirements were not always met. Beginning in May 2014, TSA's Office of Security Operations began tracking compliance with the ETD swab requirements and developed and implemented a process to ensure that the requirements are met. In March 2015, TSA officials confirmed that this process was still in place. According to TSA, it uses unpredictable screening procedures as an additional layer of security after passengers who are using expedited screening pass through the walk-through metal detector. This random selection of passengers for enhanced screening occurs after they have passed all security layers TSA uses for Managed Inclusion, and it provides one more chance for TSA to detect explosives on a passenger. As we reported in December 2014, according to TSA, it designed the Managed Inclusion process using a layered approach to security when providing expedited screening to passengers via Managed Inclusion. Specifically, the Office of Security Capabilities' proof of concept design noted that the Managed Inclusion process was designed to provide a more rigorous real-time threat assessment layer of security when compared to standard screening or TSA Pre✓™ screening. 
According to the design concept, this real-time threat assessment, utilizing both BDOs and explosives detection, allows TSA to provide expedited screening to passengers who have not been designated as low risk without decreasing overall security effectiveness. While TSA has tested the security effectiveness of each of these layers of security, TSA has not yet tested the security effectiveness of the overall Managed Inclusion process as it functions as a whole. We made recommendations related to several of these layers in prior reports (GAO-14-159, GAO-10-763, GAO-13-239, GAO-14-695T, and GAO-11-740); TSA has been addressing those recommendations, but they have not yet been fully implemented. TSA officials stated that they have not yet tested the security effectiveness of the overall Managed Inclusion process as it functions as a whole, as TSA has been planning for such testing over the course of the last year. TSA documentation shows that the Office of Security Capabilities recommended in January 2013 that TSA test the security effectiveness of Managed Inclusion as a system. We reported in 2014 that, according to officials, TSA anticipated that testing would begin in October 2014 and estimated that testing could take 12 to 18 months to complete. In March 2015, TSA officials provided us a schedule for the development and completion of BDO and canine testing supporting the Managed Inclusion process. TSA scheduled a pilot for testing BDOs, which was set to begin in October 2014 and run through May 2015. Further, the schedule TSA provided indicates that a proof of concept for canine covert testing was scheduled for November 2014 and that operational testing of canines was scheduled to begin in June 2015 and be completed in March 2016. Testing the security effectiveness of the Managed Inclusion process is consistent with federal policy, which states that agencies should assess program effectiveness and make improvements as needed. 
We have previously reported on challenges TSA has faced in designing studies and protocols to test the effectiveness of security systems and programs in accordance with established methodological practices. For example, in our March 2014 assessment of TSA's acquisition of Advanced Imaging Technology (AIT), we found that TSA conducted operational and laboratory tests but did not evaluate the performance of the entire system, which is necessary to ensure that mission needs are met (GAO-14-357). We recommended that TSA better measure the effectiveness of its entire AIT system, and TSA concurred with the recommendation. We found similar design limitations in TSA's validation study of its behavior detection and analysis program. For example, we found that TSA did not randomly select airports to participate in the study, so the results were not generalizable across airports. In addition, we found that TSA collected the validation study data unevenly and experienced challenges in collecting an adequate sample size for the randomly selected passengers, factors that might have further affected the representativeness of the findings. According to established evaluation design practices, a key element of evaluation design is to define its purpose and scope, establishing what questions the evaluation will and will not address, and data collection should be sufficiently free of bias or other significant errors that could lead to inaccurate conclusions. In our December 2014 report, we concluded that ensuring the planned effectiveness testing of the Managed Inclusion process adheres to established evaluation design practices would help TSA provide reasonable assurance that the effectiveness testing will yield reliable results. The specific design limitations we identified in TSA's previous studies of Advanced Imaging Technology and the behavior detection and analysis program may or may not be relevant design issues for an assessment of the effectiveness of the Managed Inclusion process, as evaluation design necessarily differs based on the scope and nature of the question being addressed. 
In general, evaluations are most likely to be successful when key steps are addressed during design, including defining research questions appropriate to the scope of the evaluation and selecting appropriate measures and study approaches that will permit valid conclusions. As a result, we recommended that, to ensure that TSA's planned testing yields reliable results, the TSA Administrator take steps to ensure that TSA's planned effectiveness testing of the Managed Inclusion process adheres to established evaluation design practices. DHS concurred with our recommendation and began taking steps to ensure that its planned effectiveness testing of the Managed Inclusion process adheres to established evaluation practices. Specifically, DHS stated that TSA plans to use a test and evaluation process—which calls for the preparation of test and evaluation framework documents including plans, analyses, and a final report describing the test results—for its planned effectiveness testing of Managed Inclusion. Chairman Katko, Ranking Member Rice, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For questions about this statement, please contact Jennifer Grover at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Glenn Davis (Assistant Director), Brendan Kretzschmar, Ellen Wolfe, David Alexander, Thomas Lombardi, Susan Hsu, Caroline Neidhold, and Serena Epstein. Key contributors for the previous work on which this testimony is based are listed in each product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2011, TSA began providing expedited screening to selected passengers and has expanded the availability of such screening to increasing numbers of passengers as part of its overall emphasis on risk-based security. Passengers who qualify for expedited screening enjoy varying levels of benefits, including not having to remove their shoes, light outerwear, jackets, belts, liquids, gels, and laptops for X-ray screening at airport security checkpoints. By determining passenger risk prior to travel, TSA intended to focus its screening resources on higher-risk passengers while expediting screening for lower-risk passengers. Further, TSA developed the Managed Inclusion process, designed to provide expedited screening to passengers not deemed low risk prior to arriving at the airport. This testimony addresses (1) how TSA assesses the risk of passengers to determine their eligibility to receive expedited screening and (2) the extent to which TSA determined the effectiveness of its Managed Inclusion process. This statement is based on a report GAO issued in December 2014 and selected updates from March 2015. Among other things, GAO analyzed TSA policies and procedures and interviewed TSA security officials. The Transportation Security Administration (TSA) implemented its expedited screening program—known as TSA Pre✓™—in 2011. TSA uses the following methods to assess whether a passenger is low risk and therefore eligible for expedited screening. (1) Approved TSA Pre✓™ lists of known travelers—These lists are composed of individuals whom TSA has determined to be low risk by virtue of their membership in a specific group, such as active duty military members, or based on group vetting requirements. 
(2) Automated TSA Pre✓™ risk assessments of all passengers—Using these assessments, TSA assigns passengers scores based upon information available to TSA to identify low-risk passengers eligible for expedited screening for a specific flight prior to the passengers' arrival at the airport. (3) Real-time threat assessments through Managed Inclusion—These assessments use several layers of security, including procedures that randomly select passengers for expedited screening, behavior detection officers who observe passengers to identify high-risk behaviors, and either passenger screening canine teams or explosives trace detection devices to help ensure that passengers selected for expedited screening have not handled explosive material. TSA developed Managed Inclusion as a tool to improve the efficiency of dedicated TSA Pre✓™ screening lanes as well as to help TSA reach its internal goal of providing expedited screening to at least 25 percent of passengers by the end of calendar year 2013. TSA has tested the effectiveness of individual Managed Inclusion security layers and determined that each layer provides effective security. However, GAO has previously identified challenges in several of the layers used in the Managed Inclusion process, raising concerns regarding their effectiveness. For example, in November 2013, GAO found that TSA had not demonstrated that behavioral indicators can be used to reliably and effectively identify passengers who may pose a threat to aviation security. TSA is taking steps to revise and test the behavior detection program, but the issue remains open. In December 2014, GAO reported that TSA planned to begin testing Managed Inclusion as an overall system in October 2014 and that TSA estimated the testing could take 12 to 18 months to complete. 
GAO has previously reported on challenges TSA has faced in designing studies to test the security effectiveness of other programs in accordance with established methodological practices, such as ensuring an adequate sample size or randomly selecting items in a study so that the results are generalizable, both key features of established evaluation design practices. In March 2015, TSA officials noted that a pilot for testing behavior detection officers was scheduled to run from October 2014 through May 2015, and that testing of canines was scheduled to begin in June 2015 and be completed in March 2016. Ensuring that its planned testing of the Managed Inclusion process adheres to established evaluation design practices will help TSA provide reasonable assurance that the testing will yield reliable results. In its December 2014 report, GAO recommended that TSA take steps to ensure and document that its planned testing of the Managed Inclusion process adheres to established evaluation design practices. DHS concurred with GAO's recommendation and is taking action to address it.
In 1996, GSA began a program called “Can’t Beat GSA Leasing” that offered federal agencies the choice of (1) using GSA as their leasing agent, (2) assuming responsibility for their own leasing, or (3) using a combination of both options. The program was an outgrowth of GSA’s commitment to streamline its leasing operations, respond to the government’s changing needs, and address recommendations from client agencies. GSA delegated leasing authority to HHS in September 1996, and HHS redelegated this authority to NIH, one of its four lease-holding agencies, in December 1996. GSA’s original delegation consisted of six conditions, which included the requirements that federal agencies acquire and utilize leased space in accordance with all applicable laws and regulations and—prior to finalizing lease contracts and alterations to leased buildings that exceed a legislatively established threshold—work through GSA to secure an approved prospectus from the appropriate congressional committees. Since 1997, NIH has elected to rely on GSA for some of its leasing needs, but it has issued a majority of its leases on its own. In response to past problems identified in a 2003 internal review of its leases, NIH developed a formal leasing process in 2003 that includes decision points for budget scoring. The process was updated in 2005, and if implemented effectively, it should ensure that NIH leasing complies with OMB’s scorekeeping guidelines for classifying leases. This new process should also ensure that no violations of the Antideficiency Act occur due to improper scorekeeping. The executive and legislative branches formulated the budget scorekeeping rules in connection with the Budget Enforcement Act of 1990. The purpose of these rules is to ensure that scorekeepers adhere to scorekeeping conventions and specific legal requirements when they measure the effects of legislation. 
They are also used by OMB for determining amounts to be recognized in the budget when an agency signs a contract or enters into a lease. The rules are reviewed annually by the scorekeepers and revised, as necessary, to achieve their purpose. According to scorekeeping guidelines, a lease is classified as either operating or capital, based on six criteria. If a lease meets all six criteria, it qualifies as an operating lease; otherwise, it must be treated as a capital lease for purposes of budget scoring. For operating leases for agencies other than GSA, budget authority is required for the estimated total payment that is expected to arise under the full term of the contract or, if the contract includes a cancellation clause, for an amount sufficient to cover the lease payments for the first fiscal year plus an amount sufficient to cover the costs associated with cancellation. For GSA operating leases, only the budget authority needed to cover the annual lease payment is required. For a capital lease, budget authority is required for the net present value of the total cost of the lease and property taxes (but not for imputed interest costs and identifiable annual operating expenses). In 2003, NIH’s Assistant Director for the Chief Financial Officer and Central Services asked the Acting Director of the Office of Research Facilities Development and Operations (ORFDO) to certify that all leases were operating leases for the purposes of the annual NIH financial statements, according to an NIH official. To address this request, the ORFDO Acting Director conducted an internal risk assessment. This resulted in a review of all NIH’s leases, which identified problems with implementing budget scorekeeping guidelines and OMB requirements, as well as with identifying prospectus-level leases. 
More specifically, this review identified eight leases that had been improperly classified as operating leases instead of capital leases, as well as potential unrecorded obligations from 50 active multiyear operating leases that totaled $565 million as of September 30, 2005. According to the Director of the Office of Acquisitions, NIH’s lease scoring process had been inconsistent and informal from 1996 to 2003, which may explain the improper lease classifications and unrecorded obligations identified in the 2003 review. Due to staff changes at the agency in past years, we were not able to determine why the budget scoring process was inconsistent and informal from 1996 to 2003. In 2003, NIH attempted to address its problems in complying with scorekeeping guidelines by developing the Leasing Management and Oversight Program (LeMOP), a new multistep leasing process that includes budget scoring. LeMOP is a means for NIH to exercise stronger oversight in the leasing process than it had done previously. The process consists of a business case that goes through the following five critical decision points:
1. Initial approval that there is a justifiable need,
2. Approval of a general strategy on how the need will be met,
3. Approval of a detailed strategy for meeting the need,
4. Signing of the lease and obligation of the funds, and
5. Documentation that the agency has reviewed the action and the space requirement is being met.
As part of the third decision point, the Office of Business Systems and Finance (OBSF) uses the budget scoring process for leases that it developed during NIH’s 2003 review to conduct an independent test of the planned lease contract against OMB’s budget scorekeeping requirements. The test is based on estimates of the lease rate, the lease term, and other factors. As part of the fourth decision point, OBSF tests the final lease price against OMB scoring requirements to determine if the proposal conforms to applicable rules for budget scoring. 
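A scoring test of the kind OBSF applies can be sketched against one of OMB's six operating-lease criteria: that the present value of the minimum lease payments over the life of the lease not exceed 90 percent of the asset's fair market value at the beginning of the lease term. This is a minimal illustration, not NIH's actual tool; the 5 percent discount rate and all dollar amounts are hypothetical (OMB prescribes the discount rates actually used).

```python
# Sketch of the 90-percent operating-lease criterion: discount each year's
# minimum lease payment to present value and compare the total against 90%
# of the asset's fair market value at lease inception. The discount rate and
# dollar amounts are illustrative assumptions.

def present_value(annual_payment: float, term_years: int, rate: float) -> float:
    """Present value of a level stream of end-of-year lease payments."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, term_years + 1))

def meets_operating_criterion(annual_payment, term_years, rate, fair_market_value):
    return present_value(annual_payment, term_years, rate) <= 0.9 * fair_market_value

# Shortening a lease term lowers the present value of the payments and can
# change the classification outcome.
fmv, rate, payment = 25_000_000, 0.05, 2_000_000
print(meets_operating_criterion(payment, 20, rate, fmv))  # False
print(meets_operating_criterion(payment, 10, rate, fmv))  # True
```

This arithmetic is why, as described below in connection with the 2003 review, reducing a lease's term can move it from capital to operating treatment.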
In effect, the process establishes a specific requirement that ensures that NIH leases undergo budget scoring. In addition to implementing LeMOP, NIH corrected its improper classification of the eight operating leases identified in the 2003 review. Two of these leases were reclassified as capital leases, and the remaining six were renegotiated by eliminating the option-to-renew clauses from the leases so that they could properly qualify as operating leases. By deleting the option-to-renew clauses, NIH reduced the terms of the leases, which affected budget scoring. This allowed the leases to meet the scoring criterion for an operating lease that the present value of the minimum lease payments over the life of the lease not exceed 90 percent of the fair market value of the asset at the beginning of the lease term. Leases exceeding this 90 percent level are to be identified as capital leases. As a final measure in response to the 2003 review, NIH sought HHS’s advice on whether it had $565 million in unrecorded obligations that violated the Antideficiency Act. The unrecorded obligations occurred because NIH scored its operating leases without cancellation clauses in the same manner as GSA does—that is, it recorded only the budget authority needed to cover the annual lease payment. It did so instead of following the scorekeeping guidance for operating leases of agencies other than GSA—that is, recording budget authority for the estimated total payment expected to arise under the full term of the contract or, if the contract includes a cancellation clause, an amount sufficient to cover the lease payments for the first year plus an amount sufficient to cover the costs associated with the cancellation clause. NIH asked HHS whether it thought that the $565 million in unrecorded obligations was an Antideficiency Act violation. 
This act states that an officer or employee of the United States is prohibited from making expenditures or incurring obligations in advance of available appropriations unless otherwise authorized by law. HHS stated that it did not believe the potential $565 million in unrecorded obligations from scoring operating leases to be Antideficiency Act violations. We concluded that no Antideficiency Act violation exists. Under the Federal Property and Administrative Services Act of 1949, as amended (Property Act), GSA is authorized to enter into a lease agreement for a term of up to 20 years to accommodate the federal government. In addition, GSA is authorized under the Property Act to delegate to the head of another federal agency most of its authorities, which includes leasing authority. When GSA delegated its leasing authority to the Secretary of HHS, who then redelegated this authority to NIH, the GSA leasing delegation signed by the Administrator specifically stated, “I hereby delegate authority to the heads of all federal agencies to perform all functions related to the leasing of general purpose space for a term of up to 20 years regardless of geographic location.” GSA has specific statutory authority to obligate funds in advance of available appropriations. This authority provides that, when entering into multiyear leases, “the obligation of the amount for a lease is limited to the current fiscal year for which payments are due without regard to the Antideficiency Act.” Accordingly, GSA is directed by law to obligate funds for multiyear leases one year at a time, and it is exempt from the general prohibition in the Antideficiency Act against obligating the government in advance of appropriations for GSA leases. Since GSA delegated all of its leasing authorities through HHS to NIH, the provision in the Property Act relating to the obligation of multiyear leases also applies to NIH. 
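A hypothetical example makes the scoring difference at issue concrete: under GSA-style scoring, which the delegation permits, only one year's payment is obligated at a time, while the general rule for other agencies' operating leases without cancellation clauses requires budget authority for the full-term total. The figures below are illustrative only and are not NIH's.

```python
# Hypothetical 10-year, $2 million/year operating lease, scored two ways.
# All amounts are illustrative assumptions, not NIH figures.

ANNUAL_PAYMENT = 2_000_000
TERM_YEARS = 10
CANCELLATION_COST = 500_000  # assumed cost if the lease had a cancellation clause

# GSA-style scoring (permitted under the delegation): obligate one year at a time.
gsa_style = ANNUAL_PAYMENT

# General rule for other agencies, no cancellation clause: full-term total.
full_term = ANNUAL_PAYMENT * TERM_YEARS

# General rule with a cancellation clause: first year plus cancellation cost.
with_cancellation = ANNUAL_PAYMENT + CANCELLATION_COST

print(gsa_style, full_term, with_cancellation)  # 2000000 20000000 2500000

# The gap between annual and full-term scoring is what would otherwise
# accumulate as unrecorded obligations across a portfolio of such leases.
print(full_term - gsa_style)  # 18000000
```

Because the delegated Property Act authority lets NIH obligate multiyear leases one fiscal year at a time, scoring the smaller annual amount does not, as concluded above, violate the Antideficiency Act.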
This delegation authorizes NIH to enter into multiyear leases without recording the entire amount of the lease in the first year. Therefore, the fact that NIH entered into multiyear leases without having an appropriation for the entire amount of each lease did not constitute a violation of the Antideficiency Act. GSA is drafting a modification to its guidance for delegated leasing authority that, according to a GSA official, will clarify that agencies with delegated leasing authority can score operating leases in the same manner as GSA does. GSA and agencies with delegated leasing authority are expected to continue to score capital leases according to OMB’s requirements. As part of its leasing process, NIH has established decision points for identifying leases whose costs exceed a legislatively established threshold and for which a prospectus should be submitted for congressional review and approval prior to finalizing contracts. In addition, all alterations to leased buildings are reviewed by an NIH leasing official. These changes should ensure that NIH identifies leases that need to be submitted for review. However, five prior leases that should have been submitted for review remain unreported. The Public Buildings Act of 1959, as amended, provides for GSA to submit a prospectus for review by the appropriate Senate and House authorizing committees when the cost of a proposed construction, lease, or alteration project exceeds the legislatively established dollar threshold for leases or alterations to leased buildings, which is indexed and revised each year. For an agency with delegated leasing authority, GSA, working in consultation with the agency, prepares a prospectus for any lease involving a net annual rental—excluding services and utilities—in excess of the prospectus threshold. For alterations to leased buildings, the prospectus threshold is one-half of the lease prospectus threshold. 
After the prospectus is prepared, GSA submits it for approval to the appropriate congressional committees. To address previous problems with inconsistent and informal implementation of prospectus guidance, NIH has incorporated into its leasing process several decision points for identifying any leases for which a congressionally approved prospectus should be submitted. As part of the third decision point of LeMOP—approval of a detailed strategy for meeting the need—OBSF will use the prospectus analysis formula, which it developed during NIH’s 2003 review of all leases, to conduct an independent test of the planned lease contract against the annual prospectus threshold. NIH plans to have GSA issue any lease that exceeds the prospectus limit, with NIH providing the appropriate information and support. As part of the fourth decision point—signing of the lease and obligation of the funds—OBSF will test the final lease price against prospectus thresholds to determine if the proposal conforms to applicable rules for the prospectus process. This prevents the agency from issuing a prospectus-level lease without a congressionally approved prospectus, even if the lease was initially identified as nonprospectus as part of the third decision point. Furthermore, according to NIH’s Office of Acquisitions, ORF is now responsible for ensuring that the contracting for alterations to leased buildings does not exceed the prospectus threshold. To prevent any delay in the process, reviews of an alteration project in a leased building begin as early as the concept stage to determine whether a prospectus is required. The Office of Acquisitions also takes into account all other approved alterations to the leased building for the given year to ensure that the total cost of all alterations to that leased building does not exceed the prospectus threshold. 
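The two threshold tests just described can be sketched as follows. The dollar figure is a placeholder, since the statutory prospectus threshold is indexed and revised each year; the half-threshold rule for alterations and the net-annual-rental basis (excluding services and utilities) come from the text above.

```python
# Sketch of the prospectus tests described above. A lease needs a
# congressionally approved prospectus when its net annual rental (excluding
# services and utilities) exceeds the annual threshold; alterations to a
# leased building use a threshold half that size. The dollar figure is a
# hypothetical placeholder for the indexed statutory amount.

LEASE_THRESHOLD = 2_850_000  # hypothetical annual lease prospectus threshold

def lease_needs_prospectus(net_annual_rental: float) -> bool:
    return net_annual_rental > LEASE_THRESHOLD

def alterations_need_prospectus(total_alteration_cost_for_year: float) -> bool:
    # All approved alterations to the building in the year count toward the cap.
    return total_alteration_cost_for_year > LEASE_THRESHOLD / 2

print(lease_needs_prospectus(3_000_000))        # True
print(alterations_need_prospectus(1_000_000))   # False
```

Aggregating all of a building's approved alterations for the year, as the Office of Acquisitions does, prevents a prospectus-level project from being split into smaller, individually sub-threshold contracts.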
As part of the 2003 review of all its leases, NIH identified five leases that had not been sent to the appropriate congressional committees for approval under the Public Buildings Act of 1959, as amended. GSA is to provide a lease prospectus to the appropriate congressional committees for approval prior to signing a lease. This process involves agencies with delegated leasing authority, which must identify prospectus-level leases to GSA for submission. According to the Director of the Office of Acquisitions, NIH had not established a formal prospectus analysis for leases or alterations to leased buildings from 1996 to 2003. As a result, NIH did not notify GSA of the five prior prospectus-level leases that should have been submitted to the committees. While there is no legal penalty for not following the congressional prospectus process, failure to do so hinders the ability of the appropriate congressional committees to fulfill their oversight responsibilities for all prospectus-level leases. Although these five leases have been in effect for several years, GSA officials told us that it would still be appropriate for NIH to work with GSA to notify the committees of their existence. The officials noted, for example, a past instance where GSA reported, after the fact, a prospectus-level lease that was issued without approval of the appropriate committees. NIH officials stated that they want to clear up any unresolved issues concerning their prospectus and budget scoring problems. NIH has taken actions to formalize its processes of lease scoring and prospectus analysis by developing and implementing LeMOP, its new leasing process. The specific decision points in this process should address the problems NIH had with consistently complying with OMB’s scorekeeping guidelines and the congressional prospectus process. Because only one lease has been issued under the new process, it is too early to assess the effectiveness of NIH’s implementation of LeMOP. 
An issue remains with five prior leases that were not submitted to the appropriate congressional committees for review under the Public Buildings Act of 1959, as amended. Although these leases have been in effect for several years, it is nonetheless important that information on them be submitted to the appropriate committees in order to maintain NIH’s accountability to Congress in this area and allow the committees to exercise their oversight responsibilities. NIH has expressed its desire to resolve any remaining issues concerning its prospectus and budget scoring processes. We are recommending that the Director of NIH, using GSA as the proper channel, report to the appropriate congressional committees the five previous NIH prospectus-level leases that did not follow the congressional prospectus process. We provided a draft of this report to NIH, HHS, and GSA for comment. In response, HHS provided written comments for itself and NIH. HHS concurred with our findings and recommendation and offered some technical comments that we have incorporated in this report. HHS stated that, as a matter of policy, it does not object to voluntarily complying with the GSA prospectus requirements for the five leases dealt with in the draft report. A letter from HHS commenting on our report is included as appendix I. GSA informed us orally that it had no comments. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees, the Director of the National Institutes of Health, the Secretary of Health and Human Services, the Administrator of the General Services Administration, and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, this report will be available at no cost on the GAO Web site at http://www.gao.gov. 
If you have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to those named above, John Finedore, Tom Keightley, Susan Michal-Smith, Chris Bonham, and Tamera Dorland made key contributions to this report.
The National Institutes of Health (NIH) is the nation's primary medical and behavioral research agency. NIH's need for leased space has more than doubled since 1996, to about 3.9 million square feet in 2005. In 1996, the General Services Administration (GSA) delegated leasing authority to NIH that includes performing budget scoring and prospectus analysis. In light of NIH's increased use of leased space, GAO was asked to address two issues: (1) Is NIH complying with budget scorekeeping guidelines and the Office of Management and Budget's (OMB) requirements for implementing the guidelines to determine whether a lease should be classified as operating or capital and to ensure that no violations of the Antideficiency Act occur because of improper budget scorekeeping? and (2) Is NIH complying with the congressional prospectus process for both leases and alterations to leased buildings? To address these issues, we interviewed leasing and financial officials, reviewed laws, and reviewed the budget scoring and prospectus analysis of 59 leases.

NIH has implemented a formal leasing process that, if carried out effectively, should comply with budget scorekeeping guidelines and OMB's requirements for classifying operating and capital leases. This process should ensure that no Antideficiency Act violations occur due to leasing. The agency's new leasing process addresses previous problems with inconsistent and informal implementation of the guidelines and requirements by properly identifying operating and capital leases and properly recording lease obligations for budget scoring purposes. In October 2005, the U.S. Department of Health and Human Services expressed the belief that the potential $565 million in unrecorded obligations from 50 active multiyear NIH leases were not Antideficiency Act violations.
We agree that no Antideficiency Act violations exist because the GSA delegation of leasing authority included specific authority that directed NIH to obligate funds for multiyear leases one year at a time and provided that such actions were exempt from the Antideficiency Act. GSA is also modifying its guidance for delegated leasing authority to make clear that agencies with delegated leasing authority can score operating leases in the same manner as GSA. The scoring process for capital leases would remain unchanged. As part of its leasing process, NIH has also established decision points for identifying any leases for which a prospectus should be submitted through GSA for congressional approval under the Public Buildings Act of 1959, as amended. This process involves submitting for approval those leases and alterations to leased buildings whose costs exceed a legislatively established threshold. In addition, NIH has designated the Office of Acquisitions, Office of Resource Facilities, to review prospectus-level alterations to leased buildings to ensure that contracting for such alterations does not exceed the prospectus threshold. However, NIH has taken no action to address five prospectus-level leases that were not submitted to the appropriate congressional committees in past years. While there is no penalty provided in law for not submitting a prospectus, failure to do so hinders the ability of the appropriate congressional committees to fulfill their oversight responsibilities for all prospectus-level leases.
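One of the scorekeeping tests distinguishing operating from capital leases compares the net present value (NPV) of the minimum lease payments with 90 percent of the asset's fair market value; a lease at or above that threshold scores as capital. The sketch below illustrates that single test only (the full guidelines apply several criteria), and every figure and the discount rate in it are hypothetical, not drawn from the NIH leases discussed in this report.

```python
# Illustrative sketch of one lease-scoring test: a lease scores as
# capital if the NPV of minimum lease payments equals or exceeds
# 90 percent of the asset's fair market value. All figures below,
# including the discount rate, are hypothetical examples.

def npv(payments, rate):
    """Discount a stream of end-of-year payments to present value."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payments, start=1))

def classify_lease(annual_payment, years, fair_market_value, discount_rate):
    """Return 'capital' or 'operating' under the 90-percent NPV test."""
    pv = npv([annual_payment] * years, discount_rate)
    return "capital" if pv >= 0.90 * fair_market_value else "operating"

# A hypothetical 10-year, $1.2 million/year lease on a $15 million
# building, discounted at 4 percent:
print(classify_lease(1_200_000, 10, 15_000_000, 0.04))  # prints "operating"
```

Under these assumed numbers the payments discount to roughly $9.7 million, below the $13.5 million (90 percent) threshold, so the lease would score as operating; a higher payment stream relative to the building's value would tip the same test to capital.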
Having the right people with the right skills to successfully manage acquisitions is critical for DOD. The department spends about $100 billion annually to research, develop, and acquire weapon systems and tens of billions of dollars more for services and information technology. Moreover, this investment is expected to grow substantially. At the same time, DOD, like other agencies, is facing growing public demands for better and more economical delivery of products and services. In addition, the ongoing technological revolution requires a workforce with new knowledge, skills, and abilities. Between 1989 and 1999, DOD downsized its civilian acquisition workforce by almost 50 percent, to about 124,000 personnel as of September 30, 1999. These reductions resulted from many DOD actions, including the implementation of acquisition reforms, base realignments and closures, and congressional direction. DOD estimates that as many as half of the remaining acquisition personnel could be eligible to retire by 2005. DOD believed that these actual and projected reductions could be exacerbated by increased competition for technical talent due to a full-employment economy and a shrinking labor pool. As a result of the years of personnel reductions and the increasing competition for replacement talent, DOD concluded that its acquisition workforce was on the verge of a crisis—a retirement-driven talent drain. In responding to this concern, the Under Secretary of Defense for Acquisition, Technology and Logistics (USD/AT&L) created the Acquisition 2005 Task Force in April 2000 to examine how the acquisition workforce could be reshaped. The Task Force consisted of representatives from the military services, the defense agencies, and the Office of the Secretary of Defense, supported by contractor teams that helped collect information, and it sought input from the acquisition community as well as outside experts.
The Task Force identified new initiatives as well as existing DOD programs that were considered innovative approaches to recruiting, developing, and retaining its future acquisition workforce. Specifically, the Task Force recommended 31 new initiatives, 8 ongoing initiatives that it believed should continue to be fully supported, and 7 innovative programs that it identified as best practices to be implemented throughout DOD's acquisition organizations.

Reshaping a workforce is challenging for any agency. As we have previously reported, because mission requirements, client demands, technologies, and other environmental influences change rapidly, a performance-based agency must continually monitor its talent needs. It must be alert to the changing characteristics of the labor market. It must identify the best strategies for filling its talent needs through recruiting and hiring and follow up with the appropriate investments to develop and retain the best possible workforce. This includes continuously developing talent through education, training, and opportunities for growth. In addition, agencies must match the right people to the right jobs and, in the face of finite resources, be prepared to employ matrix management principles, maintaining the flexibility to redeploy their human capital and realigning structures and work processes to maximize economy, efficiency, and effectiveness. A key to overcoming these challenges is to develop and sustain commitment to a strategic, results-oriented approach to human capital planning—one that incorporates financial management, information technology management, and results-oriented goal-setting and performance measurement. Within high-performing organizations, this begins by establishing a clear set of organizational intents—mission, vision, core values, goals, objectives, and strategies—and then integrating human capital strategies to support these strategic and programmatic goals.
Taking a strategic approach to human capital planning can be challenging in itself. First, it requires a shift in how the human resource function is perceived, from strictly a support function to one integral to an agency's mission. Second, agencies may also find that they need some of the basic tools and information to develop strategic plans, such as accurate and complete information on workforce characteristics and strategic planning expertise.

DOD's report to the armed services committees shows progress in its efforts to revitalize the workforce. Specifically, as discussed below, DOD is working to remove barriers to its strategic planning initiatives; continuing with an effort to test various human capital innovations; and beginning to make significant changes to its acquisition workforce training program. We did not assess the effectiveness of DOD's initiatives, but they do target some of the root problems hampering the acquisition workforce, and they recognize the substantial challenges involved in adopting a strategic approach to reshaping the workforce.

The Task Force's first recommendation was to develop and implement human capital strategic planning for the acquisition workforce. DOD recognizes that human capital strategic planning is fundamental to effective overall management. DOD has worked to identify and address problems that have been hampering this effort, which include the lack of accurate, accessible, and current workforce data; of mature models to forecast future workforce requirements; of a link between DOD's planning and budgeting processes; and of specific planning guidance. As shown in figure 1, DOD recognizes that it will take a considerable amount of time just to lay a good foundation for strategic planning. Part of this long-term effort will involve making a cultural shift—from viewing human capital as a support function to viewing it as a mission function—as well as developing better data on the work and models to project needs and potential shortfalls.
DOD reports that it is establishing a workforce data management strategy to improve the collection and storage of personnel data. The intent is to identify new data requirements and information needs for strategic planning. DOD is also working to develop more sophisticated modeling tools. Such tools are intended to help DOD components identify gaps between future workforce requirements and the expected workforce inventory—a critical part of the process needed for addressing acquisition workforce size and structure issues such as recruiting, training, and career development. DOD also has taken steps to link its planning effort to its budget process. For example, DOD's report states that it is developing a new budget exhibit that will identify workforce requirements during the budget process and improve DOD's ability to fund those requirements. Such actions are intended to enable DOD to identify and obtain the necessary funding to implement programs needed to close the gaps. Though DOD is taking good steps toward developing a strategic plan, it may well find that additional effort is needed to provide planners with effective tools. For example, DOD reports that it has provided more planning guidance, such as the updated Defense Planning Guidance and Quadrennial Defense Review, to help planners identify future workforce requirements during its second strategic planning cycle. However, DOD recognizes that this guidance may not be specific enough at the operational business unit level to help planners identify future acquisition workforce requirements.

DOD's report states that the Acquisition Workforce Personnel Demonstration Project is an ongoing initiative that addresses various acquisition workforce size and structure issues. The demonstration project started in February 1999 to experiment with various concepts in workforce management, such as those pertaining to recruiting, hiring, and retention.
For example, the demonstration project is testing simplified job announcements by combining information such as the job description, availability, and workforce characteristic requirements into a single document. The demonstration project is also testing broadbanding concepts that are intended to allow managers to set pay and facilitate pay progression. Broadbanding would allow managers to recruit candidates at differing pay rates and to assign employees within broad job descriptions consistent with the needs of the organization and the skills and abilities of the employee. However, participation in this project is fairly limited. As of September 2001, 5,300 acquisition personnel across DOD were participating in the demonstration project, out of a maximum of 95,000 personnel allowed by statute.

DOD reports that it is aggressively transforming its acquisition training approach to reshape the acquisition workforce and address human capital resource challenges. Specifically, DOD's Defense Acquisition University (DAU) plans to change its course content and training methods to provide more relevant training to an expected influx of new acquisition personnel hired to replace retiring workers. For example, DAU is working with teams in each career field to revise course content to reflect recent acquisition reforms and eliminate duplicate material. Also, DAU is increasing the number of web-based courses available and opening additional training centers near large acquisition workforce populations to meet training needs and to reduce the cost of providing this training. In addition, in conjunction with USD/AT&L officials, DAU is establishing partnerships with colleges and professional organizations to provide reciprocal certification course credits for DOD's employees as well as for employees from other agencies and potential employees from the private sector.
DAU has recently experienced budget cuts, and while DAU officials did not anticipate that the cuts would significantly affect these ongoing initiatives, student throughput may be reduced.

The National Defense Authorization Act for Fiscal Year 2002 required DOD to summarize its actions and plans to implement the Task Force's recommendations. For each of the Task Force's recommendations, DOD was to specifically include a summary of actions taken and specific milestones and dates for completion. DOD was also to provide reasons for not implementing recommendations, any planned alternate initiatives to the recommendations, and any additional planned initiatives. DOD's report generally provides this information—not just for the 31 recommendations but also for two additional DOD initiatives. The report provides detailed information about some of DOD's actions, such as human capital strategic planning and the demonstration project, as described in more detail in appendix I. However, information on other actions was unclear or incomplete. Specifically:

DOD's report does not clearly present the overall status of the 31 new recommendations. DOD's report states that 14 recommendations are "in implementation," 14 recommendations have been "merged" into follow-on strategies, and 3 recommendations will not be pursued. Although our review did show that DOD did not plan to pursue these 3 recommendations, we concluded, as shown in appendix I, that actions addressing 24 recommendations are in the process of being implemented and 4 are actually completed.

DOD's report does not consistently provide enough information about actions taken on some recommendations.
For example, the report cites two actions as addressing the recommendation to "Provide More Career-Broadening Opportunities": the "DCMA Professional Enhance Program" and the "DISA Executive Development Leadership Program." The report provided no information about the objectives or scope of these programs, and as a result, it is unclear how these programs relate to providing career-broadening opportunities.

The report does not address at all the best practices identified by the Task Force and does not always identify when actions for ongoing initiatives are to be accomplished or reasons for not implementing them. For example, the report identified actions taken for the ongoing initiative to establish career development plans but provided no schedule for when these plans are to be completed. In another example, the report stated that DOD is not pursuing the development of legislation for a phased retirement program that was identified as an ongoing initiative by the Task Force. The report offered no explanation for DOD's not pursuing the legislation.

DOD's report also does not consistently provide the dates for actions taken or scheduled milestones for completing implementation of the recommendations. For example, the report provides milestones and schedules for each of the overarching strategies, but these milestones and schedules are difficult to correlate with the individual Task Force recommendations that were grouped into the strategies. Further, the report does not identify the future milestones required for completing implementation of the recommendations it is pursuing, as required by the congressional mandate.

In addition to addressing specific recommendations, DOD also concluded that it needed to group the recommendations into broader strategies, or functional areas, so that it would have a framework for coordinating component efforts and targeting future initiatives.
The areas include (1) career development, (2) certification, (3) hiring, and (4) marketing, recruiting, and retention. DOD has established metrics to measure the impact of actions being taken to address acquisition workforce challenges. DOD also has designated its office of acquisition education, training, and career development to collect data on the actions taken and assess progress in addressing the challenges identified by the Task Force and the USD/AT&L goal to revitalize the quality and morale of its workforce.

We received comments via e-mail from DOD. DOD generally agreed with our findings and provided technical comments and an update on the status of DOD's establishing and collecting metrics on the initiatives. We incorporated these comments where appropriate.

To assess the extent to which DOD's report to the committees addressed the Acquisition 2005 Task Force's concerns about the size and structure of the workforce, we reviewed the Task Force's report, "Shaping the Civilian Acquisition Workforce of the Future." We interviewed officials from the offices of the under secretaries of defense for acquisition, technology and logistics and for personnel and readiness; headquarters offices of the military services; and other officials representing the defense agencies. We interviewed these officials to (1) obtain their views about acquisition workforce size and structure issues identified by the Task Force and (2) determine the processes that the services and agencies used to identify actions being taken within their organizations to address those issues. In addition, we obtained relevant documents and interviewed DOD and contractor officials involved in DOD's strategic planning efforts and DOD's Acquisition Workforce Personnel Demonstration Project. Finally, we reviewed the DOD report to determine whether it addressed the Task Force's concerns and contained information consistent with what we obtained during discussions with DOD and contractor officials.
To assess the extent to which DOD's report summarizes DOD's actions and plans to implement the Task Force's recommendations, we reviewed the DOD report to ascertain whether the report clearly (1) summarized DOD's actions taken to address the Task Force recommendations, (2) identified milestones to be achieved and the schedule for achieving them, and (3) described how DOD would manage, oversee, and evaluate its efforts to address the Task Force's concerns. We also compared the report's information with information that we obtained during discussions with DOD and contractor officials responsible for the acquisition workforce strategic planning effort and the Acquisition Workforce Personnel Demonstration Project. We did not independently identify or verify all actions that DOD reported it has taken to address the Task Force recommendations. We conducted our review between December 2001 and April 2002 in accordance with generally accepted government auditing standards.

We are sending copies of this report to other interested congressional committees and the Secretary of Defense; the Director, Office of Management and Budget; and the Director, Office of Personnel Management. We will also make copies available to others upon request. The report will also be available on GAO's home page at http://www.gao.gov. Please contact me at (202) 512-4125 or Hilary C. Sullivan at (214) 777-5652 if you have any questions. Major contributors to this report were Frederick G. Day, Michael L. Gorin, Rosa M. Johnson, and Suzanne Sterling.

Status: In implementation; grouped into follow-on strategy "Hiring." First cycle of draft strategic plans submitted from the DOD Components. Office of the Under Secretary of Defense (AT&L) memorandum initiated second cycle of Human Capital Strategic Plans. Detailed workforce guidance provided to DOD Components. Second drafts of strategic plans are scheduled for completion.
The DOD Components are continually involved in improving the hiring process through expanded use of existing authorities and reengineered processes. A web-based Acquisition Manager's Recruiting, Hiring, and Retention Handbook was developed and is widely available as a quick reference tool (see #14). The Office of the Deputy Assistant Secretary of Defense, Civilian Personnel Policy, issued pay-setting guidance through an updated manual and interim policy memoranda. Compensation flexibilities and opportunities available to managers are also included in the published Functional Manager's Recruiting, Hiring, and Retention Handbook (see #14). The DOD Components are restructuring job announcements and marketing positions more broadly. OUSD (AT&L) is contracting with a commercial firm to benchmark marketing and recruiting programs that will be used to structure a DOD program to attract and hire top-quality people into the AT&L workforce. A pilot program will focus on a specific career field or portion of the workforce and expand the successes into other areas of the workforce. The effort will produce an assessment report and target a pilot program. The military services are already using these programs and expanding to new locations. Many defense agencies also use student employment programs to recruit and hire college students. The DOD Components have budgeted for increased use of SEEP.

Status: In implementation; grouped into follow-on strategy "Marketing, Recruiting, & Retention." OUSD (AT&L) established a new oversight mechanism to help ensure more timely training. Increase distance learning (DL)/web-based learning opportunities. Expand course equivalencies. Functional advisors chartered to advise DAU on training management. Full implementation of online registration capability. All the Directors of Acquisition Career Management (DACMs) are actively involved in training resources and training quota management as a primary function of their offices.
DOD reported actions taken or future milestones: The services have well-established career-broadening programs. An Army/OUSD (AT&L) rotational program will be established. The Defense Contract Management Agency (DCMA) Professional Enhance Program and the Defense Information Systems Agency (DISA) Executive Development Leadership Program. A handbook was published (see #14) to encourage use of existing authorities for employee incentive programs. DOD telework policy published that requires DOD Components to offer telework to 25 percent of the eligible civilian workforce in the first year of implementation and to increase that by 25 percent in each of the subsequent 3 years. DOD also encourages use of other flexibilities such as leave sharing, sick leave to care for family members, and job sharing. DOD's Components provide health and wellness programs, employee assistance and family advocacy programs, alternative dispute resolution, and other programs designed to increase the morale and productivity of the civilian workforce. Expand distance learning opportunities. Conduct workforce survey to determine workforce enhancements. OUSD (AT&L) has contracted with a leading firm to benchmark and evaluate recruiting programs and develop marketing, recruiting, and hiring strategies to attract and hire top-quality people. A pilot program is planned that will focus on a specific career field or portion of the workforce and expand into other areas of the workforce. The DOD Components have a number of ongoing outreach programs, especially to colleges and universities. OUSD (AT&L) has contracted to benchmark and develop marketing, recruiting, and hiring strategies (see action under #10). OUSD (AT&L) is developing an initiative to partner AT&L functional communities with colleges and universities in order to facilitate more long-term relationships. Reason not to implement: The DOD Components consider intern programs more cost-effective than a scholarship program.
DOD reported actions taken or future milestones: The DOD Director of Acquisition Career Management (DACM) maintains a public domain website to invite interest in DOD OUSD (AT&L) workforce policies, programs, and job opportunities. The DOD DACM is currently revising the Central Referral System (CRS), including moving to a totally web-based system and expanding the types of job announcements. DOD anticipates that results from the Marketing and Recruiting follow-on strategy will provide guidance on how best to market DOD OUSD (AT&L) vacancies to the private sector. Based on both of these efforts, DOD will either expand the web-based CRS to include the remaining segments or pursue a combined solution that also addresses the private sector. The handbook was published and is available through the OUSD (AT&L) website. GAO assessment of current status: In implementation; grouped into follow-on strategy "Marketing, Recruiting, & Retention." The DOD Components embrace continuous improvement, have metrics to measure the time required to fill vacancies, and are actively engaged in reengineering, such as the Army's Staffing Processes Reengineering and Innovations Group, the Air Force's PALACE Compass Reengineering and Development Division, and the Navy's Human Resource (HR) Re-engineering Functional Assessment. OUSD (AT&L) is planning to conduct a job competitiveness survey with the private sector. It will develop an action plan to address the problem areas identified in the survey. The plan may include requests to the Office of Personnel Management (OPM) for special salary rate considerations. Status: In implementation; grouped into follow-on strategy "Hiring." DOD has researched Computer Adaptive Testing technology to determine if there is a valid methodology by which personnel can become certified by test(s).
OUSD (AT&L) is working with the career field Functional Advisor Executive Secretaries to establish partnerships with private sector entities that grant professional certifications and will compare competencies. This may lead to equivalency agreements. DAU is also moving forward with DL/web-based training and the formation of partnerships with professional associations and universities, which can be leveraged to address this issue. Status: In implementation; grouped into follow-on strategy "Marketing, Recruiting, & Retention." Status: In implementation; grouped into follow-on strategy "Certification."

DOD reported actions taken or future milestones: OPM delegated to federal agencies the authority to rehire federal retirees without financial offset. However, this authority remains in effect only for the period of national emergency and pertains only to temporary requirements directly related to or affected by the attacks of September 11, 2001. Further action will be required for DOD to obtain permanent authority to rehire federal retirees without financial offset. OSD runs the Defense Leadership and Management Program (DLAMP), which provides multi-functional career opportunities for highly qualified future leaders throughout DOD. Career-broadening programs within the DOD Components also all have multi-functional attributes (see #7). GAO assessment of current status: In implementation; grouped into follow-on strategy "Hiring." The services have studies, programs, projects, or processes underway to provide recommendations for the development and training of their future leaders. The services also conduct head-to-head military/civilian competition for leadership positions. The services have leadership training opportunities. RAND published a report, under contract with OUSD (P&R), identifying the effect of the Federal Employees Retirement System (FERS) on the recruitment and retention of federal employees compared with the Civil Service Retirement System (CSRS).
A memorandum from OUSD (P&R) discontinued high-grade controls, so no action is required by the DOD Components. Status: In implementation; grouped into follow-on strategy "Career Development." The DOD Components have constructed models that capture metrics on lapse rates. The DOD Components are using metrics to reduce the time to hire employees and, in turn, reduce the lapse rates. The Army and Air Force already have their own survey programs, so OSD opted to postpone any decisions on how to proceed until the services had ample time to test their own programs. OSD will then determine the best course of action and develop additional surveys as needed. The OUSD (AT&L) Knowledge System Communities of Practice will include a program for sharing best practices in the following areas: Program Management, Contract Finance, and Acquisition Logistics.

DOD reported actions taken or future milestones: Legislation is under consideration. OUSD (AT&L) hosted a forum with private industry representatives designed to flush out industry concerns. Using information from the forum, OUSD (AT&L) chaired a working group with representation from the services and DOD agencies to develop a draft program directive and draft instruction. A legislative proposal is being staffed to enable this program. The DOD Components encourage mobility through rotational assignments, long-term planning, payment of Permanent Change of Station (PCS) expenses, and civilian spouse placement programs. Legislation is under development to facilitate greater job mobility. A process is in place to assess DOD acquisition personnel management authorities. OUSD (P&R) is developing a strategic personnel management plan that addresses the need for additional personnel management flexibilities. The DOD Components are also preparing annual human resource performance plans that will identify areas of need within personnel policy and practices.
DOD is expecting to conduct a thorough review of personnel systems in order to prepare for transition to an alternative personnel system. Legislation is pending to increase personnel management flexibilities (FY03). Reason not to implement: Analysis showed that many employees would not find it in their best interest to use this authority. DOD also determined that employees wishing to gain extra retirement income can do so through annuities available in the marketplace. Reason not to implement: Since the DOD Components already encourage return home visits, roundtrip travel for a spouse is more costly. DOD will not pursue legislative authority. Much of the hiring process within DOD has already been automated. The DOD Components are currently reviewing their individual hiring processes to streamline them and find additional automation opportunities where appropriate. As of September 30, 2001, approximately 5,300 employees across DOD were participating in the project to improve the quality and morale of the AT&L workforce as well as the management of it. The Principal Deputy Under Secretary of Defense (AT&L) chartered SES-level experts to serve as Functional Advisors (FAs) for each AT&L career field to act as subject matter experts on the qualifications and career development requirements for their assigned career fields. For #4, establish special pay rates for information technology specialists: OPM established higher pay rates for new and currently employed computer specialists, computer engineers, and computer scientists at grades GS-5, 7, 9, 11, and 12. For #5, increase bonus ceilings: The Managerial Flexibility Act of 2001 provided authority to pay larger recruitment and relocation bonuses based on the length of an agreed-upon period. Authority was granted. The FY 2002 National Defense Authorization Act includes VERA and VSIP authority for workforce restructuring in FY 2002 and FY 2003. The Administration has introduced legislation to make the authority permanently available throughout the federal workforce.
DOD is not pursuing legislation allowing this program.
The Department of Defense (DOD) downsized its acquisition workforce by half in the past decade. It now faces serious imbalances in the skills and experience of its remaining workforce and the potential loss of highly specialized knowledge if many of its acquisition specialists retire. DOD created the Acquisition 2005 Task Force to study its civilian acquisition workforce and develop a strategy to replenish personnel losses. As required by the National Defense Authorization Act for Fiscal Year 2002, DOD reported on its plans to implement the task force's recommendations. DOD's report shows that it has made progress in reshaping its acquisition workforce. For example, DOD is working to remove barriers to its strategic planning initiative, continuing to test various human capital innovations, and has begun making significant changes to its acquisition workforce training program. DOD's report provides information on the implementation of the task force's recommendations and their status. However, for many initiatives, DOD did not clearly describe the actions taken or when they occurred, nor did it identify all planned actions and schedules for completing the initiatives.
CBP’s SBI program is responsible for deploying SBInet (e.g., sensors, cameras, radars, communications systems, and mounted laptop computers for agent vehicles) and tactical infrastructure (e.g., pedestrian and vehicle fencing, roads, and lighting) that are intended to enable CBP agents and officers to gain effective control of U.S. borders. SBInet technology is intended to include the development and deployment of a common operating picture (COP) that provides data through a command center to Border Patrol agents in the field and potentially to all DHS agencies, and to be interoperable with stakeholders external to DHS, such as local law enforcement. The current focus of the SBI program is on the southwest border areas between the ports of entry that CBP has designated as having the highest need for enhanced border security because of serious vulnerabilities. The SBI program office and its offices of SBInet and tactical infrastructure are responsible for overall program implementation and oversight. In September 2006, CBP awarded a prime contract to the Boeing Company for 3 years, with three additional 1-year options. As the prime contractor, Boeing is responsible for acquiring, deploying, and sustaining selected SBI technology and tactical infrastructure projects. In this way, Boeing has extensive involvement in the SBI program: requirements development, design, production, integration, testing, and maintenance and support of SBI projects. Moreover, Boeing is responsible for selecting and managing a team of subcontractors that provide individual components for Boeing to integrate into the SBInet system. The SBInet contract is largely performance-based—that is, CBP has set requirements for the project and Boeing and CBP coordinate and collaborate to develop solutions to meet these requirements—and designed to maximize the use of commercial off-the-shelf technology. CBP’s SBI program office oversees and manages the Boeing-led SBI contractor team. 
CBP is executing SBI activities in part through a series of task orders to Boeing for individual projects. As of September 5, 2008, CBP had awarded 11 task orders to Boeing for a total amount of $933.3 million. Table 1 is a summary of the task orders awarded to Boeing for SBI projects. In addition to deploying technology across the southwest border, the SBI program office plans to deploy 370 miles of single-layer pedestrian fencing and 300 miles of vehicle fencing by December 31, 2008. Pedestrian fencing is designed to prevent people on foot from crossing the border, and vehicle fencing consists of physical barriers meant to stop the entry of vehicles. Figure 2 shows examples of SBI fencing styles along the southwest border. The SBI program office, through the tactical infrastructure program, is using USACE to contract for fencing and supporting infrastructure (such as lights and roads), complete required environmental assessments, and acquire necessary real estate. In June 2008, CBP awarded Boeing a supply and supply chain management task order for the purchase of construction items, such as steel. Since fiscal year 2006, Congress has appropriated more than $2.7 billion for SBI. Table 2 shows SBI obligations from fiscal years 2006 through 2008 for SBInet technology, tactical infrastructure, and program management. DHS has requested an additional $775 million for SBI for fiscal year 2009. SBInet technology deployments continue to experience delays and, as a result, Border Patrol agents have to rely upon existing limited technological capabilities to help achieve effective control of the border. In October 2007, we reported that SBI program office officials expected to complete all of the first planned deployment of technology projects in the Tucson, Yuma, and El Paso sectors by the end of 2008. 
In February 2008, we reported that the first planned deployment of technology would occur in two geographic areas within the Tucson sector—known as Tucson-1 and Ajo-1—by the end of calendar year 2008, with the remainder of deployments to the Tucson, Yuma, and El Paso sectors scheduled to be completed by the end of calendar year 2011. In July 2008, SBI program office officials reported that SBInet technology deployments to Tucson-1 and Ajo-1 would be completed sometime in 2009. These officials further noted that SBInet technology deployments in the Tucson, Yuma, and El Paso sectors had also been delayed. SBInet program uncertainties contribute to ongoing delays of SBInet technology deployments. These uncertainties include the following: SBInet technology will be deployed to fewer sites than originally planned by the end of 2008, is expected to have fewer capabilities than originally planned at that time, and, as discussed above, the SBInet program office does not have specific deployment dates; SBInet planning documents and mechanisms, such as the integrated master schedule, have not received executive approval and are constantly changing (for example, the current, unapproved schedule is out of date and under revision); and the SBInet program office has not effectively defined and managed program expectations, including specific project requirements. The need to obtain environmental permits is also contributing to the initial Tucson deployment delays. According to DOI officials, DHS officials initially stated that the DHS authority to waive all legal requirements as necessary to ensure expeditious construction covered both SBInet technology and tactical infrastructure projects. However, DHS officials later determined that the Secretary’s April 1, 2008, waiver did not extend to the Tucson-1 and Ajo-1 SBInet projects. 
Without waiver coverage for these projects, DHS must conform to the National Environmental Policy Act, which requires federal agencies to evaluate the likely environmental effects of projects they are proposing using an environmental assessment or, if the projects likely would significantly affect the environment, a more detailed environmental impact statement. According to DOI officials, SBI program office officials had planned to submit the permit application for the Tucson-1 project area in February 2008, requesting access and permission to build on environmentally sensitive lands. SBI officials said that they had been working with DOI local land managers; however, due to confusion over the DHS waiver authority, the complete application for the tower construction sites was not submitted until July 10, 2008, even though the SBI program office had planned to begin construction for Tucson-1 on July 15, 2008. According to DOI officials, the approval process normally takes 2 to 3 months, but they have expedited the DHS permit and plan to resolve the application in mid-September 2008. Given the delays with SBInet technology deployment, Border Patrol agents continue to rely upon existing technologies. The cameras and sensors in use predate SBInet technology and do not have the capabilities that SBInet technology is to provide. In addition, some of the equipment currently in use may be outdated. For example, in the Border Patrol’s El Paso sector, aging cameras and sensors do not work in inclement weather and do not always function at night. In the Tucson sector, Border Patrol agents are using capabilities provided by Project 28, the SBInet prototype that was accepted by the government in February 2008. We previously reported that Project 28 encountered performance shortfalls and delays. Despite these performance shortfalls, agents in the Tucson Sector continue to use Project 28 technology capabilities while waiting for the SBInet technology deployment. 
During our visit to the Tucson Sector in June 2008, Border Patrol agents told us that the system had improved their operational capabilities, but that they must work around ongoing problems, such as finding good signal strength for the wireless network, remotely controlling cameras, and modifying radar sensitivity. Moreover, during our visit we observed the agents’ difficulties in logging on to the wireless network and maintaining the connection from the vehicle-mounted mobile data terminal. Project 28 is the only available technology in the Tucson-1 project area of the Tucson sector; the Ajo-1 project area does not have any technology. Further delays of SBInet technology deployments may hinder the Border Patrol’s efforts to secure the border. The deployment of tactical infrastructure projects along the southwest border is ongoing, but costs are increasing, the life-cycle cost is not yet known, and land acquisition issues pose challenges to DHS in meeting the goal it set, as required by law, to complete 670 miles of fencing (370 miles of pedestrian fence and 300 miles of vehicle fence) by December 31, 2008. We previously reported that as of February 21, 2008, the SBI program office had constructed 168 miles of pedestrian fence and 135 miles of vehicle fence. See figure 3 for photographs of SBI tactical infrastructure projects in Arizona and New Mexico. Approximately 6 months later, the SBI program office reported that 19 additional miles of pedestrian fence and 19 additional miles of vehicle fence had been constructed as of August 22, 2008 (see table 3). Although SBI program office and USACE officials stated that they plan to meet the December deadline, factors such as a short supply of labor and materials and the compressed timeline affect costs. SBI program office officials said that beginning in July 2008, as they were in the process of finalizing construction contracts, cost estimates for pedestrian fencing in Texas began to increase. 
According to USACE officials, as of August 28, 2008, fencing costs averaged $7.5 million per mile for pedestrian fencing and $2.8 million per mile for vehicle fencing, up from estimates in February 2008 of $4 million and $2 million per mile, respectively. SBI program office officials attributed the cost increases to a short supply of both labor and materials, as well as the compressed timeline. For example, they said that as a result of a construction boom in Texas, labor is in short supply and contractors report that they must provide premium pay and overtime to attract workers. In terms of materials, USACE officials stated that the prices of cement and steel have increased and that in some areas within Texas obtaining cement near the construction site is difficult. For example, contractors are now procuring cement from Colorado, and aggregate, a cement mixing agent, from Houston, Texas. SBI program office officials also said that increasing fuel costs for transporting steel and cement were contributing factors. Officials said they are working to mitigate the cost increases where possible, for example, through their bulk purchase of steel and their negotiations in one county where premium labor rates were higher than usual. SBI program office officials said that the compressed construction timeline also contributes to the cost increase, particularly in terms of labor costs. The SBI program office does not yet have an estimated life-cycle cost for fencing because maintenance costs are unknown and the SBI program office has not identified locations for fencing construction projects beyond December 2008. The fiscal year 2008 Consolidated Appropriations Act required DHS to submit to the House and Senate Appropriations Committees an expenditure plan for the SBI program that included, among other things, a life-cycle cost estimate. However, the plan did not include the estimate. 
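The per-mile cost growth cited above is straightforward to quantify. The sketch below is a simple illustrative calculation, using only the February and August 2008 figures reported by USACE officials, of the increase in each fencing category.

```python
# Illustrative calculation of the per-mile fencing cost increases
# reported above (figures in millions of dollars per mile).
estimates = {
    "pedestrian": {"feb_2008": 4.0, "aug_2008": 7.5},
    "vehicle": {"feb_2008": 2.0, "aug_2008": 2.8},
}

for fence_type, cost in estimates.items():
    increase = cost["aug_2008"] - cost["feb_2008"]
    pct = 100 * increase / cost["feb_2008"]
    print(f"{fence_type}: +${increase:.1f}M per mile ({pct:.0f}% increase)")
```

On these figures, pedestrian fencing costs rose by roughly 88 percent and vehicle fencing by 40 percent in about six months.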
In a June 2008 response to an inquiry from the Chairman of the House Appropriations Subcommittee on Homeland Security regarding several deficiencies in the plan, the Secretary of Homeland Security stated that because Border Patrol agents have traditionally repaired damaged fencing themselves, DHS does not have historical cost data on fence repair by contractors on which to base a life-cycle fence cost estimate. However, according to the letter, DHS is currently collecting information on maintenance costs and plans to have a life-cycle cost estimate by early calendar year 2009. In the near term, the department requested $75 million for operations and maintenance of tactical infrastructure in fiscal year 2009, according to the letter. In addition, Border Patrol officials have identified additional segments of the southwest border for construction of pedestrian and vehicle fencing beyond December 2008, and SBI program office and Border Patrol officials stated that they are developing fencing project priorities for 2009. However, they have not yet established a timeline for construction, and sources of funding have not been determined. Land acquisition issues, such as identifying landowners and negotiating land purchases, present a challenge to completing fence construction by December 31, 2008. According to SBI program office officials, in order to adhere to this timeline, all fencing construction projects must be under way by September 30, 2008. However, according to SBI program office officials, as of August 26, 2008, an estimated 320 properties remained to be acquired from landowners. USACE officials noted that completion of fencing construction projects usually takes 90 to 120 days, and that the December 31, 2008, deadline is in jeopardy if ongoing litigation related to land acquisition is not resolved by September 30, 2008 (see table 4). Of the 122 landowners who have refused to sell, 97 are within the Rio Grande Valley sector. 
As of August 28, 2008, of these 97 landowners, 20 are defendants in lawsuits filed by the Department of Justice at the request of the Secretary of Homeland Security for the condemnation and taking of their property. According to USACE officials, the 20 lawsuits were filed in July 2008 and are awaiting an order of possession ruling expected sometime in September 2008. Subsequent lawsuits were filed against the remaining 77 landowners, but court dates have not been set. As of September 2008, the SBI program office was reevaluating its staffing goal, and the SBI program office continued to take steps to implement the December 2007 Human Capital Plan. In February 2008, we reported that the SBI program office had established a staffing goal of 470 employees for fiscal year 2008. As of August 1, 2008, the SBI program office reported having 129 government staff and 164 contractor support staff for a total of 293 employees (see table 5). SBI program office officials stated that a reorganization of the SBI program office and project delays have resulted in a need for fewer staff during fiscal year 2008. The officials further noted they plan to continue to evaluate the expected staffing needs through the end of fiscal year 2009. The SBI program office published the first version of its Strategic Human Capital Management Plan in December 2007, and as of September 2008, continued to implement the plan. The SBI program office’s plan outlines seven main goals for the office and includes planned activities to accomplish those goals, which align with federal government best practices. As of September 2008, the SBI program office had taken several steps to implement the plan. For example, the SBI program office held a meeting on September 2, 2008, to develop SBI’s mission, visionary goals and objectives, and core values, and the office has recruitment efforts under way to fill open positions. 
However, in other areas, the SBI program office is in the process of drafting or has drafted documents, such as the SBI Value Statement, the SBI Awards and Recognition Plan, and the Succession Management Plan, which have yet to be approved and acted upon. Table 6 summarizes the seven human capital goals, the SBI program office’s planned activities, and steps taken to accomplish these activities. We have previously reported that a properly designed and implemented human capital program can contribute to achieving an agency’s mission and strategic goals. Until the SBI program office fully implements its plan, it will lack a baseline and metrics by which to judge the human capital aspects of the program. The SBI program continues to face challenges that include delays in project implementation and cost increases. The delays and cost uncertainties could affect DHS’s ability to meet projected completion dates, expected costs, and performance goals. Border Patrol agents continue to rely upon existing limited technological capabilities as SBInet technology deployment delays persist, and this may hinder the Border Patrol’s efforts to secure the border. In the tactical infrastructure area, meeting the Secretary’s goal to build 670 miles of fencing by December 31, 2008, a goal that DHS was required by law to set for itself, continues to be challenging. Since our last report to you 6 months ago, 38 miles of fence have been built, and 329 miles are to be constructed during the next 4 months, provided that land acquisition issues can be resolved. Furthermore, tactical infrastructure costs are increasing, and the SBI program office has not yet determined a life-cycle cost for fencing because maintenance costs are unknown and the SBI program office has not identified the locations for fencing construction projects beyond December 31, 2008; therefore, the total cost for building and maintaining fences along the southwest border is not yet known. 
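The mileage totals running through this statement can be cross-checked with simple arithmetic. The sketch below uses only the figures reported above, namely the February and August 2008 construction totals and the statutory 670-mile goal.

```python
# Cross-check of the fence mileage figures cited in this statement (miles).
goal = {"pedestrian": 370, "vehicle": 300}       # statutory goal by Dec. 31, 2008
built_feb = {"pedestrian": 168, "vehicle": 135}  # constructed as of Feb. 21, 2008
added = {"pedestrian": 19, "vehicle": 19}        # additional miles by Aug. 22, 2008

built_aug = {k: built_feb[k] + added[k] for k in built_feb}

miles_since_last_report = sum(added.values())  # miles built in roughly 6 months
total_built = sum(built_aug.values())          # miles standing as of August 2008
remaining = sum(goal.values()) - total_built   # miles left before the deadline

print(miles_since_last_report, total_built, remaining)
```

The calculation confirms the figures in the text: 38 miles built since the February report, 341 miles standing, and 329 miles remaining to reach the 670-mile goal.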
These issues underscore Congress’s need to stay closely attuned to DHS’s progress to ensure that schedule and cost estimates stabilize, and the program efficiently and effectively addresses the nation’s border security needs. This concludes my prepared testimony. I would be pleased to respond to any questions that members of the committee may have. For questions regarding this testimony, please call Richard M. Stana at (202) 512-8777 or stanar@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, Susan Quinlan, Assistant Director, and Jeanette Espínola, Analyst-in-Charge, managed this assignment. Sylvia Bascopé, Burns Chamberlain, Katherine Davis, Jeremy Rothgerber, and Erin Smith made significant contributions to the work. Secure Border Initiative: Fiscal Year 2008 Expenditure Plan Shows Improvement, but Deficiencies Limit Congressional Oversight and DHS Accountability. GAO-08-739R. Washington, D.C.: June 26, 2008. Department of Homeland Security: Better Planning and Oversight Needed to Improve Complex Service Acquisition Outcomes. GAO-08-765T. Washington, D.C.: May 8, 2008. Department of Homeland Security: Better Planning and Assessment Needed to Improve Outcomes for Complex Service Acquisitions. GAO-08-263. Washington, D.C.: April 22, 2008. Secure Border Initiative: Observations on the Importance of Applying Lessons Learned to Future Projects. GAO-08-508T. Washington, D.C.: February 27, 2008. Secure Border Initiative: Observations on Selected Aspects of SBInet Program Implementation. GAO-08-131T. Washington, D.C.: October 24, 2007. Secure Border Initiative: SBInet Planning and Management Improvements Needed to Control Risks. GAO-07-504T. Washington, D.C.: February 27, 2007. Secure Border Initiative: SBInet Expenditure Plan Needs to Better Support Oversight and Accountability. GAO-07-309. Washington, D.C.: February 15, 2007. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In November 2005, the Department of Homeland Security (DHS) established the Secure Border Initiative (SBI), a multiyear, multibillion-dollar program to secure U.S. borders. One element of SBI is the U.S. Customs and Border Protection's (CBP) SBI program, which is responsible for developing a comprehensive border protection system through a mix of surveillance and communication technologies known as SBInet (e.g., radars, sensors, cameras, and satellite phones), and tactical infrastructure (e.g., fencing). The House Committee on Homeland Security and its Subcommittee on Management, Investigations, and Oversight asked GAO to monitor DHS progress in implementing CBP's SBI program. This testimony provides GAO's observations on (1) technology deployment; (2) infrastructure deployment; and (3) how the CBP SBI program office has defined its human capital goals and the progress it has made to achieve these goals. GAO's observations are based on prior and new work, including analysis of DHS documentation, such as program schedules, contracts, and status reports. GAO also conducted interviews with DHS and Department of the Interior officials and contractors, and visits to sites on the southwest border where SBI deployment is under way. GAO performed the work from March to September 2008. DHS generally agreed with GAO's findings. SBInet technology deployments continue to experience delays and, as a result, Border Patrol agents have to rely upon existing limited technological capabilities to help achieve control of the border. SBI program officials had originally planned to deploy SBInet technology across the southwest border by the end of 2008, but in February 2008 this date had slipped to 2011. In July 2008, officials reported that two initial projects that had been scheduled to be completed by the end of calendar year 2008 would be finished sometime in 2009. 
SBInet program uncertainties, such as not fully defined program expectations, changes to timelines, and confusion over the need to obtain environmental permits contribute to ongoing delays of SBInet technology deployments. Due to the delays, Border Patrol agents continue to use existing technology that predates SBInet, and in the Tucson, Arizona, area they are using capabilities from SBInet's prototype system despite previously reported performance shortfalls. Further delays of SBInet technology deployments may hinder the Border Patrol's efforts to secure the border. The deployment of fencing is ongoing, but costs are increasing, the life-cycle cost is not yet known, and meeting DHS's statutorily required goal to have 670 miles of fencing in place by December 31, 2008, will be challenging. As of August 22, 2008, the SBI program office reported that it had constructed a total of 341 miles of fencing, and program officials stated that they plan to meet the December 2008 deadline. However, project costs are increasing and various factors pose challenges to meeting this deadline, such as a short supply of labor and land acquisition issues. According to program officials, as of August 2008, fencing costs averaged $7.5 million per mile for pedestrian fencing and $2.8 million per mile for vehicle fencing, up from estimates in February 2008 of $4 million and $2 million per mile, respectively. Furthermore, the life-cycle cost is not yet known, in part because of increasing construction costs and because the program office has yet to determine maintenance costs and locations for fencing projects beyond December 2008. In addition, land acquisition issues present a challenge to completing fence construction. As of September 2008, the SBI program office was reevaluating its staffing goal and continued to take actions to implement its human capital plan. 
In February 2008, we reported that the SBI program office had established a staffing goal of 470 employees for fiscal year 2008. As of August 1, 2008, the SBI program office reported having 129 government staff and 164 contractor support staff for a total of 293 employees. Program officials stated that a reorganization of the SBI program office and SBInet project delays have resulted in fewer staffing needs and that they plan to continue to evaluate these needs. The SBI program office also continued to take steps to implement its human capital plan. For example, recruitment efforts are under way to fill open positions. However, the SBI program office is in the process of drafting or has drafted documents, such as the Succession Management Plan, that have yet to be approved or put into action.
FERC now issues few licenses to construct and operate new hydropower projects. Therefore, most of FERC’s licensing activities relate to the relicensing of projects with licenses currently nearing their expiration dates. FERC recognizes two licensing processes—a traditional process and an alternative process. In addition, some licensees use a combination of the two processes—informally referred to as a “hybrid” process. All three processes begin between 5 and 5-½ years before a project’s license expires, when the licensee notifies FERC of its intent to seek relicensing. Each process ends when FERC either issues a new license or denies the license application. However, the Federal Power Act (FPA) provides for subsequent administrative and judicial reviews of a FERC license decision. If a license expires while a project is undergoing relicensing, FERC issues an annual license, allowing a project to continue to operate under the conditions found in the original license until the relicensing process is complete. Currently, more than 60 projects are operating under annual licenses, including several that have been operating under annual licenses for over a decade. FERC’s newly issued licenses include a standard “reserved authority” that allows FERC to “reopen” a license to modify its terms and conditions to meet fish and wildlife needs. New licenses may also include “reopener articles” that allow federal and state agencies, nongovernmental organizations, and individuals to petition FERC to reopen a license for other issues, including minimum streamflows and water quality. Federal fish and wildlife agencies may also ask FERC to reconsider the impacts of a project when an affected species is listed as endangered or threatened under the Endangered Species Act. FERC divides the traditional licensing process into two phases—a pre- application consultation phase and a post-application analysis phase. 
Each phase consists of stages and individual steps defined by “windows of time” rather than by specific dates. (See app. I.) For example, FERC requires at least 30 days to review a licensee’s initial consultation package, and a meeting between the licensee and federal and state agencies typically takes place between 30 and 60 days after the initial consultation package is prepared. During the pre-application consultation phase, the licensee must consult with officials at federal and state land and resource agencies, as well as those representing affected Indian tribes, who identify studies the licensee should undertake to determine the project’s impacts on fish and wildlife, recreation, water, and other resources. If the licensee disagrees with the need for a study, FERC may be asked to resolve the dispute. After completing the agreed-upon studies, the licensee prepares a draft application and obtains comments from, and attempts to resolve any disagreements on needed actions with, the relevant federal and state agencies. The post-application analysis phase begins when the licensee files a formal application to obtain a new license. This filing must occur at least 2 years before the license expires. The application is a comprehensive, detailed document specifying the project’s proposed operations, its anticipated impact on resources and other land uses, and proposed actions to mitigate adverse effects. FERC reviews the application to ensure that it meets all requirements and then asks relevant federal and state land and resource agencies to formally comment on it. Depending on the comments and its own independent analysis of the application, FERC may ask the licensee to provide additional data and studies. When FERC is satisfied that these are sufficient, it conducts an environmental analysis under the National Environmental Policy Act of 1969 (NEPA) and an economic analysis of the project’s benefits and costs. 
In addition to using FERC’s NEPA analysis, affected federal land and resource agencies frequently conduct separate environmental analyses under NEPA, or assessments under other laws, to determine the license terms and conditions to be prescribed or recommended to protect or enhance fish, wildlife, and other resources. FERC reviews these terms and conditions and, if necessary, negotiates with the relevant federal and state land and resource agencies or affected Indian tribes on the license’s terms and conditions. In October 1997, FERC issued an order codifying an alternative licensing process. Like the traditional licensing process, the alternative licensing process is divided into pre-application and post-application phases. (See app. II.) The licensee may choose the alternative licensing process if it can demonstrate that all the participants agree on its use, subject to final approval by FERC. The alternative licensing process shortens the overall process by combining many of the earlier consultations and studies with the later analyses in the pre-application phase. For example, the licensee begins a preliminary NEPA analysis during the pre-application phase rather than having FERC begin the NEPA analysis during the post-application phase. The alternative licensing process also seeks to improve communication and collaboration among the participants in the process and often results in a “settlement agreement” at the end of the pre-application phase. This agreement, signed by all the participants in the process, includes the conditions to protect and enhance resources. Beginning the NEPA analysis and reaching agreement on license conditions in the pre-application phase are intended to shorten the post-application analysis phase. Some licensees use a hybrid licensing process that often combines the structured sequence of the traditional licensing process with the improved earlier consultation and collaboration of the alternative licensing process. 
Under this process, a licensee may try during the pre-application phase to achieve a settlement agreement among participants, but reserve the option to use the traditional process in instances when agreement cannot be reached. A further difference is that FERC conducts the NEPA analysis during the post-application phase rather than having the licensee begin the analysis during the pre-application phase as under the alternative licensing process. The licensing process is complete when FERC either issues a license or denies the license application. However, FPA provides for subsequent administrative and judicial reviews of a FERC license decision. Any party to the licensing process may file an application for a rehearing with FERC within 30 days of FERC’s licensing decision. FERC subsequently issues an order (decision) on the application for a rehearing. Any party to the licensing process may also obtain a judicial review of FERC’s decision in the relevant federal appeals court within 60 days after FERC’s order on the application for a rehearing. FERC often delays implementation of contested license conditions until the reconsideration phase is completed. FERC and other participants in the licensing process acknowledge that the process is far more complex, time-consuming, and costly today than it was when FERC issued the approximately 1,000 original hydropower licenses 30 to 50 years ago. FERC must now attempt to balance and make tradeoffs among competing economic and environmental interests and to improve the environmental performance of projects while preserving hydropower as an economically viable energy source. Balancing these interests and making the necessary tradeoffs lengthen the process and make it more costly. FPA remains the basic statutory authority governing the licensing of hydropower projects. 
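The review windows described above are simple date offsets: a rehearing application is due within 30 days of FERC’s licensing decision, and judicial review must be sought within 60 days of FERC’s order on the rehearing application. A minimal sketch of the arithmetic follows; the specific dates are hypothetical examples, not taken from any actual proceeding.

```python
from datetime import date, timedelta

# Hypothetical illustration of the FPA review deadlines described above.
license_decision = date(2003, 3, 1)   # example FERC licensing decision
rehearing_deadline = license_decision + timedelta(days=30)

rehearing_order = date(2003, 6, 15)   # example order on the rehearing application
judicial_review_deadline = rehearing_order + timedelta(days=60)

print(rehearing_deadline)             # last day to apply for a rehearing
print(judicial_review_deadline)       # last day to seek judicial review
```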
However, the Electric Consumers Protection Act of 1986 amended section 4(e) of FPA to require FERC to give “equal consideration” to water power development and other resource needs, including protecting and enhancing fish and wildlife, in deciding whether to issue an original or a renewed license. In addition, environmental and land management laws—enacted primarily during the 1960s and 1970s—require other participating federal and state agencies to address specific resource needs, including protecting endangered species, achieving clean water, and preserving wild and scenic rivers. For example, section 7 of the Endangered Species Act of 1973 represents a congressional design to give greater priority to the protection of endangered species than to the primary missions of FERC and other federal agencies. FERC, like all other federal agencies, must ensure that its actions, including licensing decisions, are not likely to jeopardize the existence of endangered and threatened species. Moreover, NEPA requires each federal agency, including FERC, to assess the environmental impact of proposed actions—which can include licensing decisions—that may significantly affect the environment. NEPA is designed to compel federal agencies to consider the environmental impacts of their actions and to inform the public that these impacts have been taken into account prior to reaching decisions. FPA authorizes federal and state agencies other than FERC to influence license terms and conditions, and in some instances, precludes FERC from altering license conditions imposed by other agencies. For instance, section 4(e) of FPA makes licenses for projects on federal lands reserved by the Congress for other purposes—such as national forests—or that use surplus water from federal dams subject to mandatory conditions imposed by the head of the federal agency responsible for managing the lands or facilities. Today, these agencies include the Forest Service, U.S. 
Department of Agriculture, and the Department of the Interior’s Bureau of Land Management, Fish and Wildlife Service, Bureau of Indian Affairs, and Bureau of Reclamation. Similarly, section 18 of FPA requires FERC to include license conditions for fish passage prescribed by federal fish and wildlife agencies. These agencies now include Interior’s Fish and Wildlife Service and the National Marine Fisheries Service in the Department of Commerce. In addition, the Electric Consumers Protection Act of 1986 added section 10(j) to FPA. This section authorizes federal and state fish and wildlife agencies to recommend license conditions to benefit fish and wildlife that FERC must include in the license unless it (1) finds them to be inconsistent with law and (2) has already established license conditions that adequately protect fish and wildlife. Moreover, section 401 of the Clean Water Act—added in 1972—requires anyone seeking a license or permit for a project that may affect water quality to seek approval from the relevant state water quality agency. States have begun to use section 401 to influence license terms and conditions. The regulations adopted by FERC under FPA require FERC to involve the public in the licensing process. Members of the public may express their views on resource needs that they believe need to be addressed in an application to obtain a license. They may also submit comments and recommendations, request scientific studies, and formally intervene in the licensing process. As an intervenor, a member of the public is entitled, among other things, to request a rehearing of a license decision by FERC or to obtain judicial review of FERC’s decision in the relevant federal appeals court. Public values have changed over the past 30 to 50 years and now reflect a growing concern about the environmental impacts of hydropower projects. 
Environmental groups and others view the licensing of a hydropower project as a once-in-a-lifetime opportunity to have these values and concerns considered. Changing public values, coupled with requirements to give equal or greater consideration to environmental concerns than to hydropower generation, have resulted in new license conditions intended to protect and enhance fish, wildlife, and other resources. For example, in an effort to reduce the risk to fish resources, new licenses may include conditions that require licensees to change minimum streamflows, construct fish-passage facilities, install screens and other devices to prevent fish from being injured or killed, limit the amount or timing of reservoir drawdowns, or purchase or restore lands affected by a project. FERC, federal and state land and resource agencies, licensees, environmental groups, and other participants in the licensing process do not agree on whether further reforms are needed to reduce process-related time and costs. Some of these parties believe that the time and money spent on licensing a project reflect the level of complexity of the issues involved and that recent reforms will likely reduce the time and cost needed to obtain a license. Conversely, others believe that recent reforms will do little to reduce time and costs. However, they cannot agree on what further reforms are needed to shorten the process and make it less costly. Some participants believe that the time and money spent on project licensing reflect the level of complexity of the issues involved. They consider the process to be worthwhile as long as it results in a new license that is legally defensible, scientifically credible, and more likely to protect resources over the term of the license. Some of these participants also believe that recent reforms will likely reduce the time and costs associated with obtaining a new license and that additional reforms may not be necessary.
For example, they believe that, when compared with projects using the traditional licensing process, projects using FERC’s relatively new alternative licensing process are more likely to obtain licenses before their old ones expire and less likely to have their license decisions delayed as a result of administrative and judicial reviews. Other recent reforms that these participants believe might shorten the licensing process or make it less costly include the following:
A January 2001 policy by the departments of the Interior and Commerce that would, for the first time, (1) standardize the way that the two departments consider input and comments on mandatory license conditions and (2) ensure that public participation does not delay the licensing process.
A series of recently issued reports by an interagency task force—established in the winter of 1998 by FERC and other federal agencies involved in the licensing process—that addresses practical ways to improve the process and make it more efficient.
A February 2000 report by a national review group convened by the Electric Power Research Institute—a research consortium created by the nation’s electric utilities. In the report, licensees, federal and state agencies, tribes, and nongovernmental organizations (1) share their licensing experiences and “lessons learned” and (2) provide participants in licensings with reasonable solutions and alternative approaches to “tough” licensing issues.
Other participants in the licensing process believe that recent reforms will do little to reduce the time and costs to obtain a new license. For example, they believe that licensees and other participants will not use FERC’s alternative licensing process for projects that involve contentious issues or when participants have conflicting values and concerns.
They also believe that, while the alternative licensing process may shorten the time required to obtain a new license, it may also be more costly than the traditional licensing process. Therefore, they believe that further administrative reforms or legislative changes are needed to shorten the process and make it less costly. However, these participants cannot agree on what further reforms are needed to shorten the process and make it less costly. For instance, some environmental groups believe that certain licensees deliberately prolong the licensing process to delay the sometimes substantial costs of complying with new license conditions. Conversely, some licensees believe that federal and state land and resource agencies prolong the process and increase the costs to obtain a new license by (1) requesting unnecessary studies; (2) not reviewing licensing applications in a timely manner; (3) analyzing or reanalyzing issues at different steps in the process without any clear sequence leading to their timely resolution; and (4) insisting on unreasonable, and sometimes conflicting, license conditions. Federal and state land and resource agencies, however, counter these claims, saying that licensings are sometimes delayed because, until FERC requires them to, licensees are unwilling to conduct studies or to provide additional information required for the agencies to fulfill their statutorily mandated missions and responsibilities. In addition, many licensees, federal and state agencies, and environmental groups believe that FERC has not provided necessary leadership and direction, especially during the pre-application consultation phase, when much of their process-related time and costs can be incurred. In addition to blaming each other, these proponents of further reforms cannot agree on what changes would shorten the process and make it less costly.
Some believe that additional administrative reforms can improve the process and make it more efficient. Others, however, believe that new legislation will be required. To reach informed decisions on the effectiveness of recent reforms to the licensing process and the need for further reforms, FERC must complete two tasks. First, it needs complete and accurate data on process-related time and costs by participant, project, and process step. Currently, FERC does not systematically collect much of these data. Second, FERC needs to identify why certain projects or groups of projects displaying similar characteristics take longer and cost more to license than others and why the time and costs required to complete certain process steps vary by project or group of similar projects. FERC has yet to link the time and cost data that it has collected to projects displaying similar characteristics, and instead is relying, in part, on observations and suggestions of parties involved or interested in the licensing process. However, without complete and accurate time and cost data and the ability to link time and costs to projects, processes, and outcomes, FERC cannot assess the extent to which the observations and suggestions—or any recommended administrative reforms or legislative changes—might shorten the process or make it less costly. Data on where in the process costs are incurred and by whom are needed to reach informed decisions about the effectiveness of recent reforms to the licensing process and the need for further reforms to reduce the process-related costs of obtaining a hydropower license. However, FERC lacks much of the required data for itself, other federal and state agencies, and licensees. For example, FERC cannot systematically separate its process-related licensing costs from other hydropower-program-related costs or link the costs to specific projects or steps in the licensing process. 
FERC also cannot identify other federal agencies’ actual costs to participate in the licensing process. Each year FERC requests federal agencies to report their hydropower-program-related costs for the prior fiscal year; however, it does not provide clear guidance to the other agencies on what costs they should report. As a result, federal agencies do not report millions of dollars of process-related costs. Moreover, FERC does not request federal agencies to break down their costs by project or by step in the licensing process. As a result, it cannot link the hydropower-program-related costs reported by other federal agencies either to specific projects or to the various steps in the process. In addition, FERC does not request, and states generally do not report, their process-related licensing costs. Similarly, FERC does not request licensees to report their process-related licensing costs. Some licensees have, however, voluntarily reported these costs to FERC so that FERC can include them—together with estimated mitigation costs, annual charges, and the value of power generation lost at relicensing—in its economic analysis of the projects’ benefits and costs. As of February 2001, FERC had compiled data on licensees’ process-related licensing costs for 83—or about 20 percent—of the 395 projects with licenses pending or issued between January 1, 1993, and December 31, 2000. However, because FERC did not provide licensees with guidance on what costs they should report, it has no assurance that the reported costs are consistent and comparable. In addition, since the 83 projects did not represent a randomly selected sample, FERC cannot use these data to project the costs incurred by the universe of 395 projects. Moreover, FERC often could not link the costs to the various steps in the licensing process to identify which steps were the most costly.
Finally, licensees reported only those costs that they incurred before they filed a formal application with FERC to obtain a new license and, thus, FERC has no data on any of their costs associated with the post-application analysis phase of the licensing process. Because a project proceeds through sequential phases, stages, and steps in the licensing process, process-related time data are more readily available than process-related cost data, which vary by participant. However, the time data that FERC has collected are incomplete and limited almost entirely to the post-application analysis phase of the process. FERC collected time data for the 180 projects with licenses expiring between January 1, 1994, and December 31, 2000. However, it collected data for only one step in the pre-application consultation phase of the licensing process. According to FERC, this phase generally requires 3 years or more to complete and constitutes, on average, more than 60 percent of the total time required to obtain a license. Moreover, FERC notes that the collected data on the one step in the pre-application consultation phase are incomplete because FERC did not request licensees to report when they completed the step. In addition, FERC is not collecting time data for administrative and judicial reviews of its license decisions, although FERC often delays the implementation of contested license conditions until these reviews are completed. Therefore, the time associated with administrative and judicial reviews should be included in the time required to obtain a license, according to many participants in the licensing process. When FERC completes its data collection efforts, it will have some process-related cost data (mostly from the pre-application consultation phase) and some process-related time data (mostly from the post-application analysis phase).
However, FERC will not know why certain projects or groups of projects that display similar characteristics take longer and cost more to license than others or why the time and costs to complete certain steps in the process vary by project or group of similar projects. FERC needs to link time and costs to project, process, and outcome characteristics in order to reach informed decisions on the effectiveness of recent reforms to the licensing process, as well as the need for further reforms to the process. Project characteristics might include whether the project has considerable generating capacity, is operated for peak power production, is on federal land, or affects the habitat of one or more endangered or threatened species. Process-related characteristics might include (1) whether FERC had to resolve a dispute between the licensee and a federal or state agency, (2) whether federal and state agencies prescribed new mandatory license conditions, (3) whether FERC rejected or modified new license conditions recommended by federal and state agencies, or (4) whether parties formally intervened in a licensing. Outcome-related characteristics might include whether power generation was lost at relicensing or whether the terms and conditions of a new license compromise the project’s economic viability or environmental performance. As part of its mandated review of its licensing process, FERC held public meetings in six different cities. It also asked for written comments and distributed a questionnaire. In their oral and written comments and in their responses to the questionnaire, parties offered their observations and suggestions on how the process might be shortened or made less costly. 
However, without complete and accurate time and cost data and the ability to link time and costs to projects, processes, and outcomes, FERC will not be able to assess the extent to which any of these observations and suggestions—or any administrative reforms or legislative changes that they may recommend—might (1) reduce the time and costs to obtain a license or (2) change the outcomes of the process. Thus, FERC will not be able to adequately assess the tradeoffs between efficiency and effectiveness, quickness and quality. FERC recognizes the importance of collecting complete, accurate, and timely data on which to base informed decisions. However, it has not established a schedule with firm deadlines for developing a system that tracks process-related time and costs, nor has it developed a process to share these data with other parties involved or interested in the process. Currently, FERC’s data on the licensing process are widely dispersed throughout FERC, often not comparable, and time-consuming and resource-intensive to collect. For example, to respond to its mandate to review its licensing process, FERC gathered time data from (1) various external and internal information and tracking systems, (2) independent databases and spreadsheets, (3) document storage and retrieval systems, (4) project-specific documents, (5) staff files, (6) various studies conducted for various purposes during the past several years, and (7) other data sources. These data were often not comparable, and FERC staff often had to link them manually to one another. To address its information technology needs, in 1999, FERC completed a review of its existing information and tracking systems. Subsequently, FERC performed a needs assessment that showed, on a macro level, how it planned to receive, generate, organize, and present information to users. In February 2001, FERC prepared a preliminary draft of its long-term vision for its hydropower-program-related data and information technology needs.
FERC officials told us that their future plans include the release of a detailed document that will define needed enhancements to FERC’s information and tracking systems. However, FERC has not established a schedule with firm deadlines to implement the long-term vision of its hydropower-program-related data and information technology needs. It also has not determined what, if any, cost data to include. Lastly, the Congress directed FERC to conduct the review of its licensing process “in consultation with other appropriate agencies.” However, despite repeated requests by federal land and resource agencies, as of April 20, 2001, FERC had not provided them with a draft of its report or with any of the process-related time and cost data that it had collected and analyzed. As a result, Interior had to independently collect and analyze data from FERC’s information and tracking systems. On the basis of its analysis, Interior observed that it could not “determine why processing times are what they are, let alone whether these time periods are excessive or necessary for deliberative decision-making.” It continued that the “parties are engaged in numerous activities during the licensing process, and to determine the extent to which each activity contributes to the processing time calls for a more elaborate type of analysis.” Therefore, Interior recommended that it join with FERC to build a data set for all projects licensed by FERC and that the data be used to identify what, if any, further reforms are needed to shorten the process. FERC, federal and state land and resource agencies, licensees, environmental groups, and other participants in the licensing process acknowledge that the process to obtain a license is far more complex, time-consuming, and costly today than it was 30 to 50 years ago when FERC issued the approximately 1,000 original hydropower licenses. 
Today, FERC faces a formidable challenge in issuing a license that is legally defensible, scientifically credible, and likely to protect fish, wildlife, and other resources while still preserving hydropower as an economically viable energy source. Participants in the licensing process do not agree on the effectiveness of recent reforms to the process or on the need for further reforms to shorten the process or make it less costly. To resolve this disagreement and to reach informed decisions on the effectiveness of recent reforms and the need for further administrative reforms or legislative changes, FERC needs (1) a system that collects complete and accurate data on process-related time and costs by participant, project, and process step and (2) the ability to link time and costs to projects displaying similar characteristics. To date, FERC has been reluctant to work with other process participants to (1) develop a system to collect and share process-related time and cost data and (2) link the data to projects displaying similar characteristics in order to identify those project, process, and outcome characteristics that can increase the time and costs to obtain a license. As a result, FERC will not be able to reach informed decisions on the need for further administrative reforms or legislative changes to the licensing process. We recommend that the Federal Energy Regulatory Commission inform the Congress of the extent to which time and cost data limitations restrict its ability to reach informed decisions on whether further administrative reforms or legislative changes are needed to shorten the hydropower licensing process or make it less costly.
We also recommend that the Commission work with other federal and state agencies and licensees to (1) collect complete and accurate data on process-related time and costs by participant, project, and process step and (2) link time and costs to projects displaying similar characteristics in order to identify those project, process, and outcome characteristics that can increase the time and costs to obtain a license. In addition, we recommend that the Commission (1) establish a schedule and firm deadlines for implementing the necessary enhancements to its management information systems that are required to track and analyze process-related time and costs and (2) share these data with other parties involved or interested in the process. We provided a draft of this report to the Chairman of FERC for his review and comment. FERC generally agreed with our characterization of the licensing process and the primary issues that affect time and costs. It also agreed that it does not systematically collect complete and accurate data on process-related time and costs by participant, project, and process step. However, FERC believes that these data are not needed to reach informed decisions on the effectiveness of recent reforms to the licensing process or on the need for further reforms to the process. Rather, it thinks that it can address the salient issues by developing “targeted analyses” to determine major factors affecting licensing time and costs based, in part, on its “years of experience” with the licensing process. However, we continue to believe that good time and cost data are needed to reach good decisions. Without such data, it will not be possible for the Commission to determine how much either time or costs can be reduced.
Moreover, without these data and the ability to link time and costs to projects, processes, and outcomes, FERC increases the risk that any reforms that it recommends may fail to reduce process-related time and costs and may also have unintended consequences for the outcomes of the process. FERC’s comments and our responses appear in appendix IV. We conducted our work from August 2000 through April 2001 in accordance with generally accepted government auditing standards. Appendix III contains the details of our scope and methodology. We are sending copies of this report to the Honorable Norm Dicks, Ranking Minority Member, Subcommittee on Interior and Related Agencies, House Committee on Appropriations, and the Honorable Curt Hebert, Jr., Chairman, Federal Energy Regulatory Commission. The report is also available on GAO’s home page at http://www.gao.gov. If you have any questions about this report, please call Charles S. Cotton or me at (202) 512-3841. Key contributors to this report are listed in appendix V.
[Figure: steps in the traditional licensing process, from the initial meeting (Initial Information Package/Scoping), through scoping comments and study requests, study comments and additional studies (if needed), any additional study requests, and public notice of application (tendering), to public notice (acceptance).]
Concerned about the licensing of nonfederal hydropower projects, Representative Ralph Regula, former Chairman, Subcommittee on Interior and Related Agencies, House Committee on Appropriations, asked us to identify and assess significant issues related to the licensing process.
As agreed, this report discusses (1) why the licensing process now takes longer and costs more than it did when FERC issued most original licenses several decades ago; (2) whether participants in the licensing process agree on the need for, and type of, further reforms to the process to reduce time and costs; and (3) whether available time and cost data are sufficient to reach informed decisions on the effectiveness of recent reforms and the need for further reforms to the process. To identify how the licensing process has changed since FERC issued most of its original licenses several decades ago, we reviewed relevant laws, regulations, court decisions, and guidance affecting hydropower licensing. We interviewed officials from FERC, federal land and resource agencies, states, industry, and nongovernmental organizations involved in the licensing process. We also reviewed pertinent documents from these sources as well as other independent analyses from academia and the private sector. To identify the extent of agreement among participants in the licensing process on the need for, and type of, reforms to the process, we (1) attended all six of the public meetings that FERC held in January 2001 and (2) reviewed the formal written comments provided to FERC by February 1, 2001, as part of its statutorily required review of the licensing process. We also met with and obtained data from federal and state agencies, licensees, industry, nongovernmental organizations, and academia. In addition, we reviewed pertinent documents, including congressional testimonies. We also visited two hydropower projects currently involved in the licensing process, and interviewed participants involved in several other recent or ongoing licensing processes. 
To identify the availability of time and cost data on which to base FERC’s May 8, 2001, report on reducing process-related time and costs, we reviewed FERC databases as well as those of other federal agencies and nongovernmental organizations involved in the process. We also interviewed officials from FERC, federal and state land and resource agencies, industry, and nongovernmental organizations. We assessed the adequacy of FERC’s data and information systems by examining the scope and content of its project files and databases. We then held interviews with FERC project managers, information specialists, and analysts to determine the availability of project and step-specific data on processing time and costs. In addition, we examined a survey instrument developed by FERC to gather information on licensing time and costs from participants at the public meetings. We also reviewed FERC’s strategic plan for fiscal years 2000 through 2005 prepared under the Government Performance and Results Act of 1993 as well as its plans to enhance its existing information and tracking systems. We also interviewed FERC officials concerning their future information and technology plans. We conducted our work from August 2000 through April 2001 in accordance with generally accepted government auditing standards. Comment 1: FERC states that our audit work concluded almost 3 months ago and suggests that our conclusions are premature. However, nothing has changed during the intervening 3 months. FERC still does not have the data needed to reach informed decisions on the effectiveness of recent reforms to the licensing process or on the need for further reforms to the process. As reflected in their comments below, FERC’s position has not changed.
It does not believe that it needs to systematically collect complete and accurate data on process-related time and costs by participant, project, and process step to reach informed decisions on the effectiveness of recent reforms to the licensing process as well as the need for further reforms to the process. Rather, it thinks that it can address the salient issues by developing “targeted analyses” to determine major factors affecting licensing time and costs based, in part, on its “years of experience” with the licensing process. However, we continue to believe that good data are needed to reach good decisions. Moreover, without complete and accurate time and cost data and the ability to link time and costs to projects, processes, and outcomes, FERC increases the risk that any reforms that it recommends may fail to reduce process-related time and costs and may also have unintended consequences for the outcomes of the process. Comment 2: According to FERC, systematically collecting complete and accurate data on process-related time and costs by participant, project, and process step would “divert money and staff away from the licensing process.” Conversely, we believe that the money would be well spent, if it resulted in informed decisions on the need for further reforms to the licensing process to reduce time and costs. In fact, FERC’s comment seems inconsistent with its own strategic plan. In its Strategic Plan for Fiscal Years 2000-2005, FERC states that “accurate and timely information is essential for external customers and staff alike.” Therefore, we did not make any changes to the report on the basis of this comment. Comment 3: We disagree with this comment.
FERC states that requiring or requesting that licensees, federal and state agencies, tribes, nongovernmental organizations, and members of the public provide additional time and cost data would “burden these entities unduly.” FERC also asserts that it cannot compel federal agencies to submit additional time and cost data. We recognize that providing the data will take time and cost money. However, we fail to see how doing so would unduly burden participants in the licensing process. Obtaining a license is not a yearly event. Rather, it occurs once every 30 to 50 years. Moreover, we never recommended or suggested that FERC collect time and cost data from tribes, nongovernmental organizations, and members of the public. In addition, while FERC cannot compel federal agencies to submit additional time and cost data, it is not prohibited from requesting that the agencies provide this information, and federal agencies appear willing to do so. For instance, in responding to our June 2000 report on recovering federal hydropower licensing costs, federal agencies agreed to ensure that their financial management and reporting systems were capable of producing accurate, timely, and reliable information on hydropower-program-related administrative costs. Comment 4: We did not make any changes to the report on the basis of this comment. FERC observes that “it has always been charged under FPA section 10(a)(1) with balancing all relevant public interest considerations.” While this statement is true, congressional dissatisfaction with FERC’s efforts to carry out this responsibility led to a 1986 amendment to FPA, which required FERC to give “equal consideration” to water power development and other resource needs, including protecting and enhancing fish and wildlife, when deciding whether to issue an original or a renewed license.
This amendment was one of a series of statutes enacted subsequent to the passage of FPA that specifically required FERC and other federal agencies to consider resource needs in addition to water power development. We used the 1986 amendment to illustrate the increasing complexity of the licensing process. Comment 5: We agree that FERC is not charged with assuring that hydropower projects it licenses are economically viable. However, as stated in its September 2000 strategic plan, FERC does attempt to “optimize hydropower benefits by improving the environmental performance of projects while preserving hydropower as an economically viable energy source.” Since the language in our report is consistent with the language in FERC’s strategic plan, we did not make any changes to the report on the basis of FERC’s comment. Comment 6: We revised the report to state that FERC does not systematically collect much of the needed time and cost data. Comment 7: We revised the report to make clear that, if a license expires while a project is undergoing relicensing, FERC issues an annual license, allowing a project to continue to operate under the conditions found in the original license until the relicensing process is complete. Comment 8: We revised the report to add “affected Indian tribes.” Comment 9: We agree with FERC that only parties to the licensing process may (1) file an application for a rehearing with FERC within 30 days of FERC’s licensing decision and (2) obtain a judicial review of FERC’s decision in the relevant federal appeals court within 60 days after FERC’s order on the application for a rehearing. Therefore, we revised the report accordingly. Comment 10: We revised the report to state that FPA authorizes federal and state agencies, other than FERC, to influence license terms and conditions, and in some instances, precludes FERC from altering license conditions imposed by other agencies. 
Comment 11: We revised the report to state that section 4(e) of FPA makes licenses for projects on federal lands reserved by the Congress for other purposes—such as national forests—or that use surplus water from federal dams subject to mandatory conditions imposed by the head of the federal agency responsible for managing the lands or facilities. Comment 12: We mentioned section 10(j) to emphasize the increased role of federal and state fish and wildlife agencies in the licensing process since enactment of the Electric Consumers Protection Act of 1986. We revised the report to more clearly reflect this. Comment 13: We revised the report to delete “adversely.” Comment 14: We revised the report to state that participants who believe that further reforms are needed to reduce the time and costs to obtain a new license cannot agree on what further reforms are needed to shorten the process and make it less costly. Comment 15: FERC notes that our report states that many licensees, federal and state agencies, and environmental groups believe that FERC has not provided necessary leadership and direction; however, we do not cite what is lacking. FERC then provides examples of recent actions that it has taken that it believes provide leadership and direction. We recognize that FERC has taken actions intended to shorten the licensing process or make it less costly and provide examples of these actions under the subcaption “Some Licensing Participants Are Satisfied With the Current Process.” We also cite one of the most often mentioned concerns about FERC, that is, its lack of leadership and direction during the pre-application consultation phase, when much of participants’ process-related time and costs can be incurred. Moreover, FERC is aware of these concerns since they were raised at the public meetings that FERC held as part of its mandated review of its licensing process. 
Comment 16: We revised the report to state that, as of February 2001, FERC had compiled data on licensees’ process-related licensing costs for 83—or about 20 percent—of the 395 projects with licenses pending or issued between January 1, 1993, and December 31, 2000. FERC did not provide us with any new data subsequent to February 2001. Comment 17: We revised the report to make clearer the link between data and outcomes. Specifically, we state that without complete and accurate time and cost data and the ability to link time and costs to projects, processes, and outcomes, FERC will not be able to assess the extent to which the observations and suggestions—or any administrative reforms or legislative changes that it may recommend—might (1) reduce the time and costs to obtain a license or (2) change the outcomes of the process. Comment 18: We did not make any changes to the report on the basis of this comment. Rather, FERC’s comment, which does not identify any completion dates for either phase of its system development, supports our finding that it has not established a schedule with firm deadlines for developing a system to track process-related time and costs. Comment 19: We revised the report to delete the two sentences in question. The one-time difficulties incurred in shifting from an old tracking system to a new one are not germane to our finding that FERC has not established a schedule with firm deadlines for developing a system to track process-related time and costs. Comment 20: FERC states that it is not aware of its staff’s refusal to share data with other agencies. However, documentation that we obtained from the Department of the Interior shows that Interior asked for, but was not provided, the process-related time and cost data that FERC had collected. As a result, Interior had to independently collect and analyze process-related time data from FERC’s information and tracking systems. 
Comment 21: According to FERC, at a February 13, 2001, meeting, it requested that other federal agencies provide it with their process-related time and cost data. FERC states that it did not receive the data. However, as we reported in June 2000, FERC has not provided these agencies with guidance on what and how process-related costs should be reported and continues to decline to do so. Therefore, we did not make any changes to the report on the basis of this comment. Comment 22: Appendixes I and II correspond exactly to the sequence of steps in the handouts and viewgraphs presented by FERC at the public meetings held in six cities in January 2001 as part of its mandated review of the licensing process. Therefore, we did not make any changes to the report on the basis of these comments. In addition to those named above, Jerry Aiken, Paul Aussendorf, Erin Barlow, David Goldstein, Richard Johnson, Chester Joy, and Arvin Wu made key contributions to this report.
This report assesses the licensing process of the Federal Energy Regulatory Commission (FERC). Specifically, GAO examines (1) why the licensing process now takes longer and costs more than it did when FERC issued most original licenses several decades ago; (2) whether participants in the licensing process agree on the need for, and type of, further reforms to reduce time and costs; and (3) whether available time and cost data are sufficient to allow informed decisions on the effectiveness of recent reforms and the need for further reforms. GAO found that since 1986, FERC has been required to give "equal consideration" to, and make tradeoffs among, hydropower generation and other competing resource needs. Additional environmental and land management laws have also placed additional requirements on other federal and state agencies participating in the licensing process to address specific resource needs. GAO found no agreement between FERC, federal and state land resource agencies, licensees, environmental groups, and other participants in the licensing process on the need for further reforms to reduce process-related time and costs. Finally, available time and cost data are insufficient to allow informed decisions on the effectiveness of recent reforms. Without complete and accurate time and cost data and the ability to link time and costs to projects, processes, and outcomes, FERC cannot assess the extent to which the observations and suggestions--or any recommended administrative reforms or legislative changes--might reduce the process' length and costs.
The proportion of older people in the United States who may face challenges exercising the right to vote is growing. As of 2003, there were almost 36 million individuals aged 65 or older (12 percent of the population), and the majority have at least one chronic health condition. By 2030, those aged 65 and over will grow to more than 20 percent of the population. Disability increases with age, and studies have shown that with every 10 years after reaching the age of 65, the risk of losing mobility doubles. In many ways, lack of mobility and other types of impairments can diminish seniors’ ability to vote without some assistance or accommodation. With increased age, seniors will become more limited in their ability to get to polling places by driving, walking, or using public transportation. Once seniors arrive at the polling places, they may face additional challenges, depending on the availability of accessible parking areas, accessibility of polling places, type and complexity of the voting equipment, availability of alternative voting methods (such as absentee voting), and the availability of voting assistance or aids. Responsibility for holding elections and ensuring voter access primarily rests with state and local governments. Each state sets the requirements for conducting local, state, and federal elections within the state. For example, states regulate such aspects of elections as ballot access, absentee voting requirements, establishment of voting places, provision of election day workers, and counting and certifying the vote. The states, in turn, have typically delegated responsibility for administering and funding state election systems to the thousands of local election jurisdictions—more than 10,000 nationwide—creating even more variability among our nation’s election systems. Although state and local governments are responsible for running elections, Congress has authority to affect the administration of elections. 
Federal laws have been enacted in several major areas of the voting process, including several that are designed to help ensure that voting is accessible for the elderly and people with disabilities. Most importantly, the Voting Accessibility for the Elderly and Handicapped Act (VAEHA), enacted in 1984, requires that political subdivisions responsible for conducting elections assure that all polling places for federal elections are accessible to elderly voters and voters with disabilities (with limited exceptions). Any elderly voter or voter with a disability assigned to an inaccessible polling place, upon his or her advance request, must be assigned to an accessible polling place or be provided with an alternative means for casting a ballot on the day of the election. Under the VAEHA, the definition of “accessible” is determined under guidelines established by each state’s chief election officer, but the law does not specify what those guidelines shall contain or the form those guidelines should take. Additionally, states are required to make available voting aids for elderly and disabled voters, including instructions printed in large type at each polling place, and information by telecommunications devices for the deaf. The VAEHA also contains a provision requiring public notice, calculated to reach elderly and disabled voters, of absentee voting procedures. HAVA also contains a number of provisions designed to help increase the accessibility of voting for individuals with disabilities. For example, under HAVA, voting systems for federal elections must be accessible for individuals with disabilities in a manner that provides the same opportunity for access and participation as for other voters. To satisfy this requirement, each polling place must have at least one voting system equipped for individuals with disabilities. 
In addition, the Secretary of Health and Human Services is required to make yearly payments (in an amount of the Secretary’s choosing) to each eligible state and unit of local government, and such payments must be used for (1) making polling places (including path of travel, entrances, exits, and voting areas) accessible to individuals with disabilities, and (2) providing individuals with disabilities with information about the accessibility of polling places. The Act also created the U.S. Election Assistance Commission (EAC) to serve, among other things, as a clearinghouse and information resource for election officials with respect to the administration of federal elections. For example, the EAC is to periodically conduct and make available to the public studies regarding methods of ensuring accessibility of voting, polling places, and voting equipment to all voters, including individuals with disabilities. Under HAVA, the EAC is also to make grants for carrying out both research and development to improve various aspects of voting equipment and voting technology, and pilot programs to test new technologies in voting systems. To be eligible for such grants, an entity must certify that it will take into account the need to make voting equipment fully accessible for individuals with disabilities. The Voting Rights Act of 1965 (VRA), as amended, provides for voter assistance in the voting room. Specifically, the VRA, among other things, authorizes voting assistance for blind, disabled, or illiterate persons. Voters who require assistance to vote by reason of blindness, disability, or inability to read or write may be given assistance by a person of the voter’s choice, other than the voter’s employer or agent of that employer or officer or agent of the voter’s union. Other laws also help to ensure voting access for the elderly and people with disabilities—albeit indirectly. 
For example, Title II of the Americans with Disabilities Act of 1990 (ADA) and its implementing regulations require that people with disabilities have access to basic public services, including the right to vote. However, it does not strictly require that all polling place sites be accessible. Under the ADA, public entities must make reasonable modifications in policies, practices, or procedures to avoid discrimination against people with disabilities. Moreover, no individual with a disability may, by reason of the disability, be excluded from participating in or be denied the benefits of any public program, service, or activity. State and local governments may comply with ADA accessibility requirements in a variety of ways, such as by redesigning equipment, reassigning services to accessible buildings or alternative accessible sites, or altering existing facilities or constructing new ones. However, state and local governments are not required to take actions that would threaten or destroy the historic significance of a historic property, fundamentally alter the nature of a service, or impose undue financial and administrative burdens. In choosing between available methods of complying with the ADA, state and local governments must give priority to the choices that offer services, programs, and activities in the most integrated setting appropriate. Title III of the ADA covers commercial facilities and places of public accommodation. Such facilities may also be used as polling places. Under Title III, public accommodations must make reasonable modifications in policies, practices, or procedures to facilitate access for individuals with disabilities. 
They must also ensure that no individual with a disability is excluded or denied services because of the absence of “auxiliary aids and services,” which include both effective methods of making aurally and visually delivered materials available to individuals with impairments, and acquisition or modification of equipment or devices. Public accommodations are also required to remove physical barriers in existing buildings when it is “readily achievable” to do so, that is, when it can be done without much difficulty or expense, given the entity’s resources. In the event that removal of an architectural barrier cannot be accomplished easily, the public accommodation may take alternative measures to facilitate accessibility. All buildings newly constructed by public accommodations and commercial facilities must be readily accessible; alterations to existing buildings are required, to the maximum extent feasible, to be readily accessible to individuals with disabilities. Finally, the Older Americans Act of 1965 (OAA), as amended, supports a wide range of social services and programs for older persons. The OAA authorizes grants to agencies on aging to serve as advocates of, and coordinate programs for, the older population. Such programs cover areas such as caregiver support, nutrition services, and disease prevention. Importantly, the OAA also provides assistance to improve transportation services for older individuals. For older adults who wish to vote at polling places, access to the polls is highly affected by their ability to travel to the polling place on election day. While most older adults drive, their physical, visual, and cognitive abilities can deteriorate, making it more difficult for them to drive safely. One study found that approximately 21 percent (6.8 million) of people aged 65 and older do not drive, and another study found that more than 600,000 people aged 70 and older stop driving each year and become dependent on others for transportation. 
According to senior transportation experts, the “oldest of the old” (those aged 85 and older) are especially likely to be dependent on others for rides, particularly if they are also in poor health. For those who do not or cannot drive, our previous work for this committee on the mobility of older adults identified options other than driving that are available; nevertheless, transportation gaps remain. Consistent with the Older Americans Act and other legislation, the federal government provides some transportation assistance, but this is largely to provide older adults with access to other federal program services—such as health and medical care or employment. This has been done through partnerships with local agencies, nonprofits, and other organizations that provide transportation services and also contribute their own funds. Such partnering efforts may afford the opportunity to transport seniors to polling places as well. For example, the Montana Council on Developmental Disabilities partners with other organizations, such as AARP and the Montana Transit Association, to provide election day rides to older adults and people with disabilities. Still, we generally found that older adults in rural and suburban areas have more restricted travel options than do those in urban areas. In addition, we have reported that federally supported programs generally lacked data identifying the extent to which older adults have unmet needs for mobility. Consequently, we do not know to what extent older adults are unable to find transportation to polling places. To address this lack of data and improve transportation services, more than 45 states had utilized the “Framework for Action” by 2005, a self-assessment tool created by the Federal Interagency Coordinating Council on Access and Mobility (CCAM) for states and communities to help them identify existing gaps in transportation services for people with disabilities, older adults, and individuals with lower incomes. 
According to the CCAM, communities across the country are now using this tool as they establish coordinated transportation plans at the local level. Voting access is one need that might well be identified and better met through this assessment process. Our on-site inspections of polling places in the 2000 general election revealed many impediments that can limit access for older voters and voters with disabilities. Through our mail survey of states and local election jurisdictions conducted after the 2004 general election, we learned of improvements to provisions and practices pertaining to accessibility of polling places. We did not conduct on-site inspections in the 2004 general election and therefore do not know the extent to which such improvements took place at polling places. Once older voters reach the polling place, they generally must make their way inside the building and into the voting room in order to cast their votes. Prior to the 2000 election, very little was known about the accessibility of polling places—and what was known was dated and had significant limitations. To estimate the proportion of polling places in the country with features that might either facilitate or impede access for people with mobility, dexterity, or visual impairments, we visited 496 randomly selected polling places in the United States on Election Day 2000. Our random sample was drawn by first selecting a random sample of counties—weighted by population—and then randomly selecting some polling places within those counties. At each polling place, using a survey based on federal and nonfederal guidelines on accessibility, we took measurements and made observations of features of the facility and voting methods that could impede access. See figure 1 for the key areas at polling places where we conducted our observations. We also interviewed poll workers who were in charge of the polling place to identify any accommodations offered. 
These on-site inspections during the 2000 election revealed that only an estimated 16 percent of polling places were free of impediments that might prevent elderly voters and voters with disabilities from reaching voting rooms. The rest had one or more likely impediments from the parking area to the voting room, although curbside voting was often made available where permitted by the state (see fig. 2). These were potential impediments primarily for individuals with mobility impairments. Further, many polling places had more than one potential impediment in 2000. Impediments occurred at fairly high rates irrespective of the type of building used as a polling place. About 70 percent of all Election Day 2000 polling places were in the types of facilities that are potentially subject to either Title II or III of the ADA—such as schools, recreational/community centers, city/town halls, police/fire stations, libraries, and courthouses. However, under the ADA, only new construction and alterations must be readily accessible, and we did not determine the date that polling place facilities were either constructed or altered. Moreover, due to the number of possible approaches for meeting ADA requirements on accessibility to public services and because places of public accommodation need to remove barriers only where it is easy to do so, we cannot determine from our data whether the potential impediments we found would constitute a failure to meet ADA requirements. In addition to inspecting polling places in 2000, we also reviewed state provisions (in the form of statutes, regulations, or policies) and surveyed state and county practices that affect voters’ ability to get into polling places and reach the voting room, and found significant variations. While all states and the District of Columbia had provisions concerning voting access for individuals with disabilities, the extent and manner in which these provisions addressed accessibility varied from state to state. 
For example, 43 states had provisions that polling places must or should be accessible, but only 20 had provisions requiring that counties report to the state on polling place accessibility. See table 1 in app. I for additional state provisions concerning the accessibility of polling places in the November 2000 election. Our survey of election officials in each state and 100 counties also revealed variation in practices for ensuring the accessibility of polling places. For example, while 25 states reported providing local governments with training and guidance for assuring polling place accessibility, only 5 states reported helping finance polling place modifications to improve access in 2000. At least an estimated 27 percent of local election jurisdictions reported not using accessibility in their criteria for selecting polling places. While at least an estimated 68 percent of local jurisdictions reported that they inspected all polling places, the frequency of such inspections varied from once a year to only when a polling place is first selected or following a complaint or remodeling. After the November 2004 general election, we found signs of improvement in access to polling places when we surveyed each state and a representative sample of local election jurisdictions nationwide in 2005 about their state provisions and practices. While the methods we used to collect data from states differed between the 2000 and 2004 elections, state provisions related to polling place accessibility and accommodations nevertheless appear to have increased over time. For example, 32 states told us in 2005 that they required local jurisdictions to report on polling place accessibility to the state, an increase from 20 states with such provisions in 2000. 
At the same time, the number of states requiring polling place inspections decreased by 1 from 2000 to 2004, although 16 states, in addition to the 28 requiring inspections, had provisions in 2004 that allowed for polling place inspections. See table 2 in app. I for additional information on state provisions concerning accessibility of polling places and accommodations for individuals with disabilities for the November 2004 general election. In addition to changes in state provisions, most states reported that they had spent or obligated HAVA funds to improve the accessibility of polling places, such as by providing access for voters with mobility or visual impairments. Responding to our 2005 survey following the 2004 election, 46 states and the District of Columbia reported having spent or obligated HAVA funds for this purpose. For example, election officials in a local jurisdiction we visited in Colorado told us they had used HAVA funds to improve the accessibility of polling places by obtaining input from the disability community, surveying the accessibility of their polling places, and reviewing voting equipment with representatives of the blind community. From our 2005 survey of local election jurisdictions nationwide, we estimated that 83 percent of local jurisdictions made use of their state’s provisions to determine the requirements for accessibility at their polling places. During our site visits to local jurisdictions in 2005, we asked election officials to describe the steps or procedures they took to ensure that polling places were accessible. Election officials in many of the jurisdictions we visited told us that either local or state officials inspect each polling location in their jurisdiction using a checklist based on state or federal guidelines. For example, election officials in the four jurisdictions we visited in Georgia and New Hampshire told us that state inspectors conducted a survey of all polling locations. 
Election officials in the two jurisdictions we visited in Florida told us that they inspected all polling places using a survey developed by the state. Our information on provisions and practices related to polling place accessibility in 2004 is based on self-reported data we collected, and site visits we conducted, in 2005. We did not observe polling places during the 2004 election and therefore do not know the extent to which increased state provisions and reported state and local practices resulted in actual improvements to the accessibility of polling places in the 2004 general election. In preparing for and conducting the November 2004 general election, officials reported encountering many of the same challenges to ensuring voter access that they had encountered in 2000, such as locating a sufficient number of polling places that met requirements (such as accessibility). According to our 2005 mail survey, while 75 percent of small jurisdictions reported finding it easy or very easy to find a sufficient number of polling places, only 38 percent of large jurisdictions did. Conversely, 1 percent of small jurisdictions found it difficult or very difficult, while 14 percent of large jurisdictions did. Other challenges reported included recruiting and training an adequate supply of skilled poll workers, designing ballots that were clear to voters when there were many candidates or issues (e.g., propositions, questions, or referenda), having long lines at polling places, and handling the large volume of telephone calls received from voters and poll workers on election day. In general, officials in large and medium jurisdictions—those with over 10,000 people—reported encountering more challenges than those in small jurisdictions. Once inside the voting room, the type of voting method can pose particular challenges to some elderly voters, and facilitating voting may require further accommodation or assistance. 
For example, voters with dexterity impairments may experience difficulty holding writing instruments for paper ballots, pinpointing the stylus for punch card ballots, manipulating levers, or pressing buttons for electronic voting systems. Similarly, visually impaired voters may experience difficulty reading the text on paper ballots and electronic voting systems, or manipulating the handles to operate lever machines. All these voting methods can challenge voters with disabilities, although some electronic voting systems can be adapted to accommodate a range of impairments. During our on-site inspections of polling places in 2000, we identified challenges posed by the voting systems used and by the configuration of the voting booths, although some form of assistance was generally provided in the voting room. With respect to voting systems, we found that either traditional paper ballots or mark-sense ballots (a form of optical scan paper ballots) were the most widespread—one or the other were in use at an estimated 43 percent of polling places. This voting method is challenging for voters with impaired dexterity who have difficulty using a pen or pencil, and also for voters with visual impairments who need to read the text on the ballots. Next in prevalence were punch card ballots (21 percent), electronic voting systems (19 percent), and lever machines (17 percent)—each of which can be a challenge for voters with certain impairments. We also found that many voting booths were not appropriately configured for wheelchairs, either because voting stations configured for sitting did not have the minimum dimensions for a wheelchair or those configured for standing had one or more features that might pose an impediment to a wheelchair. At the same time, nearly all polling places allowed voters to be assisted either by a friend or a poll worker, which is a right granted by the VRA. 
Moreover, about 51 percent provided voting instructions or sample ballots in 18-point or larger type and about 47 percent provided a magnifying device. None of the polling places provided ballots or voting equipment adapted with audio-tape or Braille ballots for blind voters. Our 2000 review of state provisions and practices related to accessible voting systems and accommodations in the voting room revealed significant gaps, insofar as 27 states lacked provisions that voting systems should accommodate individuals with disabilities, 18 lacked provisions for wheelchairs in voting booths, and many lacked provisions to provide aids to the visually impaired; for example, 47 states lacked a provision to provide a large-type ballot, and 45 lacked a provision to provide a Braille ballot. (See app. I, table 1.) On the other hand, we found that state provisions were not necessarily predictors of practice inside the polling place. For example, we found that half the polling places we visited provided voting instructions or sample ballots with large type even though only 3 of the 33 states whose polling places we visited had provisions to do so. Conversely, none of the polling places we visited provided for Braille ballots, even though 5 of the 33 states we visited had provisions for doing so. In addition to many states lacking provisions for voting room accommodations, in only 11 states did election officials, in response to our state survey, report financing improvements to accessibility by helping to fund new voting systems. Our 2005 survey of states also revealed an increase in state provisions for accessible voting equipment, compared to what we found in our review of state provisions in 2000. 
As of August 1, 2005, 41 states and the District of Columbia reported having laws in place or having taken executive action (through orders, directives, regulations, or policies) to provide each polling location by January 1, 2006, with at least one electronic voting system or other voting system equipped for individuals with disabilities. Five of the 9 remaining states reported plans to promulgate laws or executive action to provide each polling location with at least one voting system equipped for individuals with disabilities. This is an increase from 2000, when 24 states had (and 27 lacked) provisions that voting systems must or should accommodate individuals with disabilities. In response to our survey of local election jurisdictions in 2005, many jurisdictions reported having at least one accessible voting machine per polling place in the 2004 election, although this varied by jurisdiction size. We estimated that 29 percent of all jurisdictions provided at least one accessible voting machine at each polling place during the 2004 general elections. In addition, more large and medium local election jurisdictions reported using accessible voting machines than small jurisdictions. In 2005, we estimated that 39 percent of large jurisdictions, 38 percent of medium jurisdictions, and 25 percent of small jurisdictions provided accessible voting machines at each polling place. These improvements may be the result of HAVA, which, as noted earlier, requires each polling place to have at least one voting system equipped for individuals with disabilities, including individuals who are blind or visually impaired. To facilitate the adoption of technology, HAVA authorized appropriations to provide funds to states to replace punch card and lever voting equipment with other voting methods. 
Since HAVA’s enactment, the General Services Administration (GSA) reported in 2003 that an estimated $300 million had been distributed to 30 states to replace old voting equipment and technology. In addition, states may receive other HAVA funds that could be used for multiple purposes, including replacement or upgrade of voting systems. In 2004, the EAC reported that almost $344 million had been distributed to the 50 states and the District of Columbia under this multiple purpose funding category. HAVA notwithstanding, our surveys and site visits in 2004 indicated that significant challenges remain for acquiring and implementing accessible electronic voting systems. Touch screen direct recording electronic (DRE) equipment—which can be adapted with audio and other aids to accommodate a range of impairments—is generally more costly than other types of systems because of software requirements and because more units are required. Based on our mail surveys of local election jurisdictions, the estimated percentages of predominant voting methods used by local jurisdictions in the 2000 and 2004 general elections did not change appreciably. As we noted earlier, more large and medium local election jurisdictions reported using accessible electronic voting machines than small jurisdictions. Some election officials representing small jurisdictions expressed concerns to us about the appropriateness of HAVA’s accessible voting equipment requirements for their jurisdictions and the cost of implementing them. In addition, some elections officials have acted on concerns regarding the reliability and security of electronic voting systems by, for instance, decertifying systems previously approved for use within their states. In 2007, we testified on the range of security and reliability concerns that have been reported, and on long-standing and emerging challenges facing all levels of government, with respect to electronic voting systems. 
For example, significant concerns have been raised about vague or incomplete standards, weak security controls, system design flaws, incorrect system configuration, poor security management, and inadequate security testing, among other issues. Jurisdictions reported that they did not consistently monitor the performance of their systems, which is important for determining whether election needs, requirements, and expectations are met and for taking corrective actions when they are not. Finding remedies, however, is challenging, given, for example, the distribution of responsibilities among various organizations, and financing constraints and complexities. Given the diffused and decentralized allocation of voting system roles and responsibilities across all levels of government, addressing these challenges will require the combined efforts of all levels of government, under the leadership of the EAC. Our 2005 survey of state election officials revealed a marked increase since the 2000 election in the number of state provisions related to accommodations in the voting room. For example, the number of states that reported having provisions for wheelchair accommodations in voting areas was 43, compared to 33 in 2000. Further, the number of states that reported having provisions to require or allow large-type ballots, magnifying instruments, and Braille ballots or voting methods increased by 18, 20, and 8, respectively. At the same time, a few states reported having provisions that prohibit certain accommodations, such as ballots in Braille or large type. (See app. I, table 2 for details on 2004 state provisions.) It is important to keep in mind, however, our findings for the 2000 election—i.e., that state provisions are not necessarily predictors or indicators of whether these accommodations will be found at polling places. Most recently, we reported on accommodations provided to bilingual voters, including elderly bilingual voters. 
Under the VRA, when the population of a “single language minority” with limited English proficiency is large enough, voting materials (including ballots, instructions, and assistance) must be provided in that minority’s language, in addition to English. Of the 14 election jurisdictions we contacted, 13 reported providing similar assistance, such as translated voter materials and bilingual poll workers. All 14 reported facing similar challenges, such as recruiting a sufficient number of bilingual poll workers, effectively targeting where to provide assistance, and designing and translating the bilingual materials provided. However, GAO found little quantitative data on the usefulness of various types of bilingual voting assistance. Jurisdictions were challenged to assess the effectiveness of such assistance, in part because jurisdictions may be prohibited from collecting data on who used such assistance. Thus, it is difficult to know the extent to which elderly voters use bilingual assistance and what forms of assistance they find most useful. As noted earlier, the VAEHA requires that any elderly voter or voter with a disability assigned to an inaccessible polling place, upon his or her advance request, must be assigned to an accessible polling place or be provided with an alternative means for casting a ballot on the day of the election. The VAEHA also contains provisions to make absentee voting more accessible by prohibiting, with limited exceptions, the requirement of a notary or medical certification of disability in granting an absentee ballot. However, states generally regulate absentee voting and other alternative voting method provisions. 
Alternative voting methods may include advance notice of an inaccessible polling place; curbside voting; taking ballots to a voter’s residence; allowing voters to use another, more accessible polling location either on or before election day; voting in person at early voting sites; or removing prerequisites by establishing “no excuse” absentee voting or allowing absentee voting on a permanent basis. Disability advocates have told us that while alternative voting methods are important and needed options for some voters with disabilities, they still do not provide an equal opportunity to vote in the same manner as the general public and therefore should not be viewed as permanent solutions to inaccessible polling places. Meanwhile, the number of state provisions allowing alternative voting methods generally increased between the 2000 and 2004 elections. Specifically, the number of state provisions permitting curbside voting increased from 28 in the 2000 election to 30 in the 2004 election. The number of states with provisions that provided for carrying ballots to voters’ residences on or before election day increased from 21 to 25. Additionally, the number of state provisions regarding notification of voters of inaccessible polling places increased from 19 to 27. In addition, 21 states reported allowing voters to vote absentee without requiring a reason or excuse—3 more than for the November 2000 election. Although states may offer similar alternatives and accommodations, our review of state provisions in 2000 indicated that there may be wide variation in their implementation. For example, in accordance with the VAEHA, as previously mentioned, all states allowed absentee voting for voters with disabilities without notary or medical certification requirements in 2000. However, the dates by which absentee ballots must be received varied considerably, with some states requiring that, to be counted, the ballot must be received before election day. 
In addition, where states lacked provisions, or had provisions allowing but not requiring an accommodation or alternative method of voting, county and local government implementation practices can vary. For example, in 2000, we found that in a number of states without a formal provision for curbside voting, some counties and local governments reported offering curbside voting and some did not. Similarly, in a number of states that lacked provisions for allowing voters to use an alternate voting place on Election Day, our 2000 county survey data also showed that some counties and local governments offered this alternative, while others did not. Expanding alternative voting methods or making special accommodations can provide voters with additional options. Early voting, for example, allows voters, including elderly voters, to choose a day without inclement weather on which to vote. However, the implementation of voting alternatives can also present election officials with legal, administrative, and operational challenges. For example, expanding the use of curbside voting requires having staff trained and available to assist voters outside the polling place. In states where curbside voting is not authorized or practiced, policymakers would need to be convinced that it would not increase the risk of fraud from ballots being taken out of the polling place facility. Similarly, reassigning voters to more accessible polling places requires officials to notify the voter, train the poll workers, and provide an appropriate ballot at the reassigned location. Election officials reported to us in 2001 that establishing early voting sites and expanding the number of absentee voters added to the cost and complexity of running an election. 
For example, with early voting, election officials must set up and close down the polling place daily, ensure that there are trained poll workers at each early voting site, and update the voter registration lists to be used on election day to indicate which voters have already voted early. Absentee voting challenges include receipt of late absentee voter applications and ballots; administrative issues including workload demands and resource constraints; dealing with potential voter error caused by unsigned or otherwise incomplete absentee applications and ballot materials; as well as guarding against fraud. Internet voting—an alternative that has been used only on a limited basis to date—could offer voters the convenience of voting from their homes or other remote locations, and help increase voter participation. On the other hand, numerous election officials and others have expressed concerns about the security and reliability of the Internet and lack of widespread access to it. To resolve these issues, studies by some task forces have suggested a phased-in approach to Internet voting. Ensuring that seniors or individuals with disabilities successfully cast their votes in an election requires government to think broadly about access, including access to transportation, access into buildings, access with respect to voting equipment, and access to various alternative voting methods. The increase in state provisions and reports of practices to improve the accessibility of the voting process is encouraging. At the same time, the complexity of our election systems is such that we cannot be assured that these provisions and reported practices reflect what actually occurs at polling places on election day. Understanding and addressing accessibility gaps is an enormous task for our state and local election officials who are challenged by the multiplicity of responsibilities and requirements they must attend to within resource constraints. 
At the same time, as our population ages and the percentage of voters with disabilities grows, the expectation of accommodation and assistance to participate in this basic civic exercise will grow, making accessibility a key performance goal for our election community. Appendix I: State Provisions for Accessibility of Polling Places and Accommodations for the November 2000 and 2004 Elections. [Table not reproduced. It lists, by state, provisions for inspections of polling place accessibility; reporting by local jurisdictions to the state on polling place accessibility; accommodations of wheelchairs in voting areas; provision of ballots or methods of voting in Braille; and provision of ballots with large type. Table notes indicate that, for particular provisions, a state was identified as lacking the provision only if it had neither a statute nor a regulation for it; that one state responded that it did not know; that states conducting voting by mail are excluded because provisions for polling place accessibility are not applicable to them; and that one state did not respond to the question.] Bilingual Voting Assistance: Selected Jurisdictions’ Strategies for Identifying Needs and Providing Assistance. GAO-08-182. Washington, D.C.: January 18, 2008. Elections: All Levels of Government Are Needed to Address Electronic Voting System Challenges. GAO-07-741T. Washington, D.C.: April 18, 2007. Older Driver Safety: Knowledge Sharing Should Help States Prepare for Increase in Older Driver Population. GAO-07-413. Washington, D.C.: April 11, 2007. Elections: The Nation’s Evolving Election System as Reflected in the November 2004 General Election. GAO-06-450. Washington, D.C.: June 6, 2006. Social Security Reform: Answers to Key Questions. GAO-05-193SP. Washington, D.C.: May 2, 2005. Transportation-Disadvantaged Seniors: Efforts to Enhance Senior Mobility Could Benefit from Additional Guidance and Information. GAO-04-971. Washington, D.C.: August 30, 2004. Elections: A Framework for Evaluating Reform Proposals. GAO-02-90. Washington, D.C.: October 15, 2001. Elections: Perspectives on Activities and Challenges Across the Nation. GAO-02-3. 
Washington, D.C.: October 15, 2001. Voters with Disabilities: Access to Polling Places and Alternative Voting Methods. GAO-02-107. Washington, D.C.: October 15, 2001. Elections: The Scope of Congressional Authority in Election Administration. GAO-01-470. Washington, D.C.: March 13, 2001. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Voting is fundamental to our democratic system, and federal law generally requires polling places for federal elections to be accessible to older voters and voters with physical disabilities. Following reports of problems encountered in the close 2000 presidential election with respect to voter registration lists, absentee ballots, ballot counting, and antiquated voting systems, the Help America Vote Act of 2002 (HAVA) was enacted. Among other provisions, HAVA includes requirements for the accessibility of voting systems, effective January 1, 2006. In the past, GAO has published several reports on issues related to voting access for older voters. Our prior work, including on-site inspections of a national sample of polling places in election year 2000, a comprehensive review of the election system in 2004, and a review of transportation issues facing seniors, has identified a number of potential barriers to voting for older Americans, as well as accommodations and progress in a number of areas. Drawing from prior work, GAO's testimony will focus on (1) a variety of factors that affect the ability of older voters to travel to polling places, cast their votes in the voting room, or avail themselves of alternative voting provisions and (2) trends and changes regarding the accessibility of polling places and alternative voting methods. Ensuring that older voters or other individuals with disabilities successfully cast their votes in an election requires that policymakers think broadly about access. This includes access with respect to transportation, polling places, voting equipment, and alternative voting methods. During the 2000 election, most polling places we inspected had one or more potential impediments that might prevent older voters and voters with disabilities from reaching voting rooms, although curbside voting accommodations were often made available. 
Additionally, our 2000 review of state provisions and practices related to accessible voting systems and accommodations in the voting room revealed that provisions to accommodate individuals with disabilities varied from state to state and may vary widely in their implementation. A 2004 GAO report also found transportation gaps in meeting the needs of seniors, which may create a barrier to voting for many elderly voters, and a lack of data on the extent of unmet needs. Since the passage of HAVA and the subsequent 2004 election, we have identified a number of reported efforts taken to improve voting access for people with disabilities. In particular, our 2006 report on election systems shows a marked increase in state provisions addressing the accessibility of polling places, voting systems, and alternative voting methods. However, the degree of change in accessibility is difficult to determine, in part because thousands of jurisdictions have primary responsibility for managing elections and ensuring an accurate vote count, and the complexity of the election system means we cannot be assured that these provisions and reported practices reflect what actually occurs at polling places on election day. Understanding and addressing accessibility gaps represent enormous tasks for state and local election officials, who are challenged by the multiplicity of responsibilities and requirements they must attend to within resource constraints. At the same time, as the population ages and the percentage of voters with disabilities expands, the expectation of accommodation and assistance to participate in this basic civic exercise will grow, making accessibility a key performance goal for our election community.
The Bureau puts forth tremendous effort to conduct an accurate count of the nation’s population. However, some degree of coverage error in the form of persons missed or counted more than once is inevitable. Two types of errors that can affect the accuracy of the enumeration are the omission of persons who should have been counted and erroneous enumerations of persons who should not have been counted. Historically, undercounts have plagued the census, although, according to the Bureau, they have generally diminished since 1940. For the 2000 Census, for the first time in its history, the Bureau reported a slight net overcount of approximately 0.5 percent or about 1.3 million people. However, as shown in figure 1, coverage errors were not evenly distributed through the population. For example, there was an overcount of non-Hispanic Whites, and an undercount of non-Hispanic Blacks. Nevertheless, figure 1 also shows the strides the Bureau made in reducing the undercount in the 2000 Census compared to 1990. Importantly, the national net overcount of about 0.5 percent does not mean that 99.5 percent of the population was counted correctly in 2000. In fact, the number of persons who were counted twice in the census was partially offset by the number of persons who were missed by the census. We have long maintained that the sum of these numbers—known as gross error (rather than the difference between the two numbers or net error)— provides a more comprehensive measure of total error in the census. Participation in the census, as measured by the mail return rate, also affects the accuracy of census data. The Bureau calculates mail return rates as the percentage of questionnaires the Bureau receives from occupied housing units in the mail-back universe. Although individuals who fail to mail back their census forms might be counted by an enumerator during a subsequent operation called nonresponse follow-up, high mail return rates are critical to quality data. 
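The distinction drawn above between net error and gross error is a simple arithmetic one, sketched below with hypothetical figures; the split of the reported net overcount of about 1.3 million persons into erroneous enumerations and omissions is invented for illustration and is not taken from the Bureau's estimates.

```python
# Net vs. gross census coverage error, using hypothetical (invented) figures.
# The source reports a net overcount of about 1.3 million persons in 2000;
# the split below into erroneous enumerations and omissions is illustrative only.
erroneous_enumerations = 4_300_000  # persons counted who should not have been
omissions = 3_000_000               # persons missed who should have been counted

net_error = erroneous_enumerations - omissions    # the widely cited "net overcount"
gross_error = erroneous_enumerations + omissions  # total error in the count

print(net_error)    # 1300000 (a net overcount of 1.3 million)
print(gross_error)  # 7300000 (total errors, far larger than the net figure)
```

As the sketch shows, a small net figure can mask a much larger volume of offsetting errors, which is why gross error is the more comprehensive measure of total error in the census.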
A Bureau evaluation of the 2000 Census found that responses from mail returns tend to be more accurate than those obtained during nonresponse follow-up. Historically, return rates have declined. According to the Bureau, in 1970, for example, the overall mail return rate was 87.0 percent; in 1980, 81.3 percent; and in 1990 and 2000, 74.1 percent. Importantly, as shown in figure 2, during the 2000 Census, differentials existed in the mail return rates of different demographic groups. For example, Whites had a higher mail return rate (77.5 percent) than the rate for all groups (74.1 percent), while nearly every other demographic group had lower return rates than the overall mail return rate. The lowest mail return rates were those of Pacific Islanders (54.6 percent) and those of two or more races (57.7 percent). Maintaining or increasing mail return rates, especially minority return rates, represents an important opportunity for the Bureau to improve the quality of census data. In designing the 2010 Census, the Bureau recognized the importance of including a number of operations aimed at improving coverage and reducing the differential undercount. Three such efforts that I will highlight in my remarks today are (1) a complete and accurate address list, (2) an Integrated Communications Campaign to increase awareness and encourage participation, and (3) special enumeration programs targeted toward historically undercounted populations. These activities, along with a number of others planned for 2010, will position the Bureau to reduce the undercount. At the same time, each faces particular challenges and uncertainties that I will describe later in my statement. The foundation of a successful census is a complete and accurate address list and the maps that go with it. The Bureau’s Master Address File (MAF) is the inventory of the nation’s roughly 133.7 million housing units. 
Insofar as it is used to deliver questionnaires as well as to organize the collection and tabulation of the data, the MAF serves as the basic control for the census. The Bureau develops its address list and maps over the course of the decade using a series of operations that sometimes overlap, to increase the likelihood that all housing units are included in the list. These operations include partnerships with the U.S. Postal Service and other federal agencies; state, local, and tribal governments; and local planning organizations. Three operations that can help include the hard-to-count population are the Bureau’s Local Update of Census Addresses (LUCA) program, address canvassing, and Group Quarters Validation. The LUCA program gives state, local, and tribal governments the opportunity to review and update the list of addresses and maps that the Bureau will use to deliver questionnaires within those communities. According to Bureau officials, LUCA helps identify hard-to-count populations and “hidden” housing units such as converted basements because local governments might know where such dwellings exist and have access to local data and records. In October 2008, the Bureau is scheduled to complete its reviews of participants’ LUCA submissions and update the MAF and a related geographic database used for maps. In the address canvassing operation, thousands of temporary Bureau employees known as listers verify the addresses of all housing units—including those addresses provided by localities in LUCA—by going door to door across the country. As part of this effort, listers add addresses that might not be in the Bureau’s database. To help find hidden housing units it might otherwise miss, listers ask whether there is more than one residence at a particular address and look for clues—such as an outbuilding, or two mailboxes or utility meters—that could indicate additional households. 
Indeed, as shown in the picture on the left in figure 3, someone could be living in what appears to be a storage shed. Likewise, in the picture on the right, what appears to be a small, single-family house could contain another apartment, as suggested by its two doorbells. In addition to enumerating people living in housing units such as single-family houses, apartments, and mobile homes, the 2000 decennial census enumerated over seven million people living in group situations such as college dormitories, nursing homes, migrant labor camps, prisons, and group homes, collectively known as “group quarters.” Some group quarters, such as seasonal and migrant labor camps, can be difficult to locate because they are sometimes fenced-in or in remote locations away from main roads. The Bureau encountered a number of problems when enumerating group quarters during the 2000 Census. For example, in 2000, communities reported instances where students in college dormitories were counted twice and prison inmates were counted in the wrong county. Additionally, group homes are sometimes difficult for census workers to spot because, as shown in figure 4, they can look the same as conventional housing units. Since 1970, the Bureau has conducted a separate operation to enumerate the group quarters population. For 2010, the Bureau plans to conduct Group Quarters Validation to validate the addresses found in the address canvassing operation and collect information about the type of group quarters. The Bureau’s Integrated Communications Campaign is designed to increase the mail response rate, improve cooperation with enumerators, enhance the overall accuracy of the census, and reduce the differential undercount. The Bureau estimates it will spend $410 million on the Integrated Communications Campaign for the 2010 Census. In September 2007, the Bureau awarded its communications contract to DraftFCB, a communications firm hired to orchestrate a number of communications activities for the 2010 Census. 
DraftFCB’s approach includes a specific focus on undercounted populations. As one example, the contractor worked with the Bureau to segment the nation’s population into distinct “clusters” using socioeconomic, demographic, and other data from the 2000 Census that are correlated with a person’s likelihood to participate in the census. Each cluster was given a hard-to-count score, and the Bureau’s communications efforts are to be targeted to those clusters with the highest scores. The four clusters with the highest hard-to-count scores made up 14 percent of the nation’s occupied housing units based on data from the 2000 Census, and included the following demographic characteristics: renters, immigrants, non-English speakers, persons without higher education, persons receiving public assistance, and persons who are unemployed. Targeting the Bureau’s communications campaign to hard-to-count populations will help the Bureau use its resources more effectively. This will be important because, in constant 2010 dollars, the Bureau will be spending less on communications for the 2010 Census ($410 million) than it did for the 2000 Census ($480 million). The campaign strategy will be based on the theme “It’s In Our Hands.” According to the Bureau, this approach reflects a marketplace trend where communications are becoming more two-way or participatory, as can be seen, for example, in people creating their own content on the World Wide Web. The goal of the strategy is to encourage personal ownership and involvement that spreads the word about the census. The Bureau believes this approach will be more effective than if the message came from the government talking to the public. Further, the generic theme will be tailored to specific groups. For example, outreach targeted to families might carry the message, “The education of our children . . . It’s in our hands,” while the economically disadvantaged might receive “The power to matter . . . It’s in our hands.” The communications campaign consists of (1) paid media, including national, local, outdoor, and online advertisements; (2) earned media and public relations, such as news releases, media briefings, special events, podcasts, and blogs; (3) Census in Schools, a program designed to reach parents and guardians through their school-age children; and (4) partnerships with key national and local grassroots organizations that have strong connections to their communities. Although the effects of the Bureau’s communication efforts are difficult to measure, the Bureau reported some positive results from its 2000 Census marketing efforts with respect to raising awareness. For example, four population groups—non-Hispanic Blacks, non-Hispanic Whites, Asians, and Native Hawaiians—indicated they were more likely to return the census form after the 2000 Census Partnership and Marketing Program than before its onset. However, the Bureau also reported that the 2000 Census Partnership and Marketing Program had mixed success in favorably affecting actual participation in the census. Of the various campaign components, the Census in Schools and partnership programs are specifically aimed at hard-to-count populations. The Census in Schools program provides curriculum and teaching materials that introduce students to the purpose and importance of the census as well as census activities and products. The program is also designed to engage students to encourage their parents to complete and return their census questionnaires. According to Bureau officials, although the Census in Schools program is not as extensive as the one conducted in the last decennial, they made a number of changes based on lessons learned from the 2000 Census. For example, the program will spend less on printing and base its 2010 Census materials on materials used for the 2000 Census rather than create new materials from scratch. 
Moreover, similar to 2000, the Bureau is not reaching out to all schools but instead plans to target schools with large hard-to-count populations. Lower grades will be targeted as well, as Bureau officials believe their message has more traction with younger students. Under the partnership program, the Bureau plans to hire specialists to collaborate with local individuals and organizations, leveraging their knowledge and expertise to increase participation in the census within their communities. Partnership specialists are to be trained in, and help implement, various aspects of the census, as well as to reach out to key government and community leaders and gain commitments from community organizations to help the Bureau execute the enumeration. The Bureau operates a wide range of special enumeration programs—such as Be Counted, Questionnaire Assistance Centers (QAC), and Service-Based Enumeration—that target hard-to-count populations. Other activities, such as offering in-language questionnaires and replacement questionnaire mailings for nonresponding households, can help increase participation among non-English speaking populations as well as residents in areas with historically low response rates. The Bureau developed the Be Counted program to enumerate people who believe they did not receive a census questionnaire or were otherwise not included in the census. The Be Counted form is a questionnaire intended to be placed in public locations such as stores, libraries, and other places where people congregate (see figure 5). QAC staff help people complete their Be Counted forms as well as other census forms. Census officials reported that approximately 560,000 people who might otherwise have been missed were enumerated through the Be Counted program in 2000. 
Additionally, a Bureau evaluation found that Be Counted forms were more likely to include members of minority groups and children—two traditionally undercounted populations—when compared to traditional mail forms. Plans for the 2010 decennial include 30,000 QAC sites (a 25 percent increase over the 2000 Census) and 40,000 Be Counted sites, which are often co-located with QACs but can be stand-alone sites. Partnership specialists are to help determine the location of the sites, which are to be operational for 4 weeks during the 2010 Census. The Be Counted forms are to be available in English, Chinese, Korean, Russian, Spanish, and Vietnamese. The Bureau developed the Service-Based Enumeration (SBE) program for the 2000 Census to provide the homeless and others without conventional housing an opportunity to be included in the census. The program involves visiting selected service locations, such as shelters, soup kitchens, and regularly scheduled mobile food vans, that serve people without conventional housing. The Bureau reported that during the 2000 Census, large percentages of historically undercounted populations were among the 171,000 people in emergency and transitional shelters enumerated through the program. For 2010, the Bureau plans to conduct address list updates of SBE locations by obtaining information about SBEs from the Internet and soliciting information from government agencies and advocacy organizations. The Bureau intends to notify respondents through the Integrated Communications Campaign that if a questionnaire in one of the five languages other than English (Chinese, Korean, Russian, Spanish, or Vietnamese) is needed, the respondent should call the number provided on the questionnaire. The Bureau plans to provide language assistance guides in 59 languages, an increase from 49 languages in 2000.
New in 2010, the Bureau plans to send bilingual questionnaires to approximately 13 million households that are likely to need Spanish assistance, as determined by analyzing recent data from the American Community Survey (a related Bureau survey program). Moreover, for 2010, the Bureau plans a multi-part approach for replacement mailings that includes a blanket mailing of approximately 25 to 30 million replacement questionnaires to census tracts with low response rates several weeks after the initial questionnaire mailing. Although each of the operations I’ve described can position the Bureau to address the undercount, they also face challenges and uncertainties that, if not adequately resolved, could reduce the effectiveness of the Bureau’s efforts. For example, with respect to address canvassing, the Bureau plans to provide listers with GPS-equipped handheld computers (HHC) to verify and correct addresses. Consequently, the performance of the HHCs is critical to the accurate, timely, and cost-effective completion of address canvassing. However, the Bureau’s ability to collect and transmit address and mapping data using the HHCs remains uncertain. For example, the 2008 Dress Rehearsal—which was an opportunity for the Bureau to conduct development and testing of systems and prepare for the 2010 Census—revealed a number of technical problems with the HHCs, including freeze-ups and data transmission issues. The problems with the HHCs prompted the Bureau to make major design changes, and a limited field test is scheduled for December 2008 (GAO is making plans to observe this test). However, if after this test the HHCs are found to be unreliable, the Bureau will have little time to make any refinements. Operations that were not tested during the Dress Rehearsal also introduce risks. These operations include the Be Counted program, Service-Based Enumeration, and Group Quarters Enumeration.
Although the Bureau employed these operations during the 2000 Census, the Dress Rehearsal afforded the Bureau an opportunity to see how they might perform in concert with other activities planned for 2010, as well as to identify improvements that could enhance their effectiveness. The Integrated Communications Campaign faces its own set of challenges, chief among them the long-standing issue of converting awareness of the census into an actual response. As a rough illustration of this challenge, various polls conducted for the 2000 Census suggested that the public’s awareness of the census was over 90 percent. Yet, as noted earlier, (1) the actual return rate was much lower—around 74 percent of the nation’s households—and (2) the Bureau’s evaluation of the 2000 Census Partnership and Marketing Program found that it had only mixed success in encouraging actual participation. With respect to the partnership program, the Bureau plans to have 144 partnership staff, including specialists, on board nationwide by the end of September 2008, and to ramp up to 680 partnership staff by 2010. According to Bureau officials, although this level of staffing is about the same as for the 2000 Census, the Bureau believes it is sufficient and plans to deploy the partnership specialists more strategically by allocating more of them to regions with large hard-to-count populations. For example, the Atlanta region (which includes Florida, Alabama, and Georgia) had 50 partnership specialists in 2000 but is to receive more than 70 partnership specialists in 2010. Although the strategic deployment is a reasonable approach, the impact of the reallocation on those regions that will receive fewer partnership specialists is unclear. Our evaluation of the 2000 Census Partnership Program found that there were mixed views regarding the adequacy of specialist staffing levels.
Although partnership specialists we spoke to generally agreed that the Bureau hired enough specialists to carry out their activities, the managers of local census offices we interviewed noted that the partnership specialists’ heavy workload may have limited the level of support they were able to provide. In 2010, to the extent that partnership specialists in regions with lower staffing levels wind up working with as many or more groups compared to 2000, or need to cover large geographic areas, they could find themselves thinly spread. Our observations during the 2000 Census highlighted some best practices that appeared to be key to successful partnership engagements and that might help the Bureau refine its partnering efforts in 2010. For example, best practices for partners include (1) identifying “census champions” (i.e., people who will actively support the census and encourage others to do so), (2) integrating census-related efforts into partners’ existing activities and events, and (3) leveraging resources by working with other partners and customizing census promotional materials to better resonate with local populations. For the Bureau, best practices include (1) providing adequate and timely information, guidance, and other resources to local partners on how they can support the census; (2) maintaining open communications with partners; and (3) encouraging the early involvement of partners in census activities. Another challenge lies in staying on schedule. To meet legally mandated data reporting requirements, census activities need to take place at specific times and in the proper sequence.
For example, the Group Quarters Validation operation needs to be completed after the Address Canvassing operation; the Questionnaire Assistance Centers need to be properly staffed, equipped, and opened by a particular date; advertising needs to be synchronized with various phases of the enumeration; and the questionnaire and replacement mailings all need to be carried out at the right time. Given the tight deadlines, small glitches could cascade into significant problems with downstream operations. Another challenge will be to develop management information systems capable of tracking key operations so that the Bureau can quickly address trouble spots. The Bureau did this successfully in 2000 with the system it used to track local census offices’ progress in meeting their recruiting goals. At those offices where recruiting was found to be lagging, the Bureau was able to quickly raise pay rates and take other actions that enabled it to meet its goal. Less successful was the management information system used to track the Bureau’s partnership efforts in 2000, which was found to be slow and not user-friendly, among other shortcomings that limited its use as an effective management tool. For 2010, the Bureau intends to use a Web-based system that will enable it to manage the partnerships in real time and determine, among other things, whether staff need to be redirected or reallocated. Our past work indicates that the accuracy of state and local population estimates may have an effect, though modest, on the allocation of grant funds among the states. Many of the formulas used to allocate grant funds rely upon measures of population, often in combination with other factors.
In our June 2006 report, we analyzed the sensitivity of Social Services Block Grants (SSBG) to alternative population estimates, such as those derived by statistical methods that incorporate the number of people who were overcounted and undercounted in the census, rather than the actual census counts. To analyze the prospective impact of estimated population counts on the money allocated to the states through SSBG, we recalculated the state allocations using statistical estimates of the population that were developed for the 1990 and 2000 Censuses in lieu of the actual census numbers. Specifically, we took population estimates based on the 2000 Census counts and adjusted them by the difference between the 2000 official population counts and the statistical estimates of the population. We selected SSBG for our analysis because the formula for this block grant program, which is based solely on population, and the resulting funding allocations were particularly sensitive to such alternative population estimates. In short, as shown in figure 6, in 2004, 27 states and the District of Columbia would have gained $4.2 million and 23 states would have lost $4.2 million of the $1.7 billion in SSBG funding. Based on our simulation of the funding formula for this block grant program, the largest percentage changes were for Washington, D.C., which would have gained 2.05 percent (or $67,000) in grant funding, and Minnesota, which would have lost 1.17 percent (or $344,000). While the shifting of these funding amounts may not seem significant in total, using an inaccurate count to allocate grant funds could adversely affect some states’ ability to provide services to their residents. Reducing the undercount would alleviate this potentially adverse impact on states.
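The mechanics of this kind of population-share sensitivity simulation can be sketched in a few lines. The sketch below is illustrative only: the state names, populations, and adjusted estimates are invented, and the actual SSBG analysis used official and statistically adjusted counts for all states.

```python
# Illustrative sketch of a population-share funding simulation, in the
# spirit of the SSBG sensitivity analysis described above. All state
# names and population figures here are invented.

def allocate(total_funds, populations):
    """Allocate a fixed pot of funds proportionally to each state's population."""
    total_pop = sum(populations.values())
    return {state: total_funds * pop / total_pop
            for state, pop in populations.items()}

TOTAL = 1_700_000_000  # an SSBG-sized pot, for scale

# Official census counts (hypothetical)
official = {"State A": 5_000_000, "State B": 8_000_000, "State C": 7_000_000}

# Statistical estimates that incorporate net over/undercounts (hypothetical)
adjusted = {"State A": 5_100_000, "State B": 7_950_000, "State C": 7_000_000}

base = allocate(TOTAL, official)
alt = allocate(TOTAL, adjusted)

for state in official:
    delta = alt[state] - base[state]
    pct = 100 * delta / base[state]
    print(f"{state}: {delta:+,.0f} dollars ({pct:+.2f}%)")
```

Because the pot is fixed, any state whose population share rises under the alternative estimates gains exactly what the other states lose, which is why the simulated gains and losses in figure 6 net to zero.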
This simulation was done for illustrative purposes only—to demonstrate the sensitivity of government programs to alternative population estimates that incorporate the number of people who were overcounted and undercounted in the census. Only the actual census numbers should be used for official purposes. This illustration further emphasizes the importance of an accurate decennial count. The Bureau’s strategy for reducing the differential undercount appears to be comprehensive, integrated, and based on lessons learned from the 2000 Census. If each of the various components is implemented as planned, they will likely position the Bureau to address the differential undercount. Still, the various programs we examined are generally in the planning or early implementation stages, and a number of uncertainties and challenges lie ahead as the activities become operational. Indeed, past experience has shown that the decennial census is an enormous and complex endeavor with numerous moving parts, and any shortcomings or missteps can have significant consequences for the ultimate cost or accuracy of the enumeration. With this in mind, the success of the Bureau’s efforts aimed at the hard-to-count will depend in large part on the extent to which they (1) start and finish on schedule, (2) are implemented in the proper sequence, (3) are adequately tested, and (4) receive appropriate staffing and funding. It will also be important for the Bureau to have a real-time monitoring capability to track the progress of the enumeration, target the Bureau’s resources to where they are most needed, and quickly respond to various contingencies that could jeopardize the accuracy or cost of the count. In the months ahead, it will be important for Congress and the Bureau to continue to focus on these issues, as well as to be alert to newly emerging challenges. Chairman Carper, Senator Coburn, and Members of the Subcommittee, this concludes my prepared statement.
I would be happy to respond to any questions you may have. If you have any questions on matters discussed in this testimony, please contact Robert Goldenkoff at (202) 512-2757 or by email at goldenkoffr@gao.gov. Other key contributors to this testimony include Ronald Fecso, Chief Statistician; Signora May, Assistant Director; Nicholas Alexander; Thomas Beall; Sarah Farkas; Richard Hung; Andrea Levine; Lisa Pearson; Sonya Phillips; Timothy Wexler; and Katherine Wulff.
2010 Census: Census Bureau’s Decision to Continue with Handheld Computers for Address Canvassing Makes Planning and Testing Critical. GAO-08-936. Washington, D.C.: July 31, 2008.
2010 Census: Plans for Decennial Census Operations and Technology Have Progressed, But Much Uncertainty Remains. GAO-08-886T. Washington, D.C.: June 11, 2008.
2010 Census: Bureau Needs to Specify How It Will Assess Coverage Follow-up Techniques and When It Will Produce Coverage Measurement Results. GAO-08-414. Washington, D.C.: April 15, 2008.
2010 Census: Population Measures Are Important for Federal Funding Allocations. GAO-08-230T. Washington, D.C.: October 29, 2007.
2010 Census: Diversity in Human Capital, Outreach Efforts Can Benefit the 2010 Census. GAO-07-1132T. Washington, D.C.: July 26, 2007.
2010 Census: Census Bureau Has Improved the Local Update of Census Addresses Program, but Challenges Remain. GAO-07-736. Washington, D.C.: June 14, 2007.
2010 Census: Redesigned Approach Holds Promise, but Census Bureau Needs to Annually Develop and Provide a Comprehensive Project Plan to Monitor Costs. GAO-06-1009T. Washington, D.C.: July 27, 2006.
Federal Assistance: Illustrative Simulations of Using Statistical Population Estimates for Reallocating Certain Federal Funding. GAO-06-567. Washington, D.C.: June 22, 2006.
2010 Census: Planning and Testing Activities Are Making Progress. GAO-06-465T. Washington, D.C.: March 1, 2006.
2010 Census: Basic Design Has Potential, but Remaining Challenges Need Prompt Resolution. GAO-05-9. Washington, D.C.: January 12, 2005.
Decennial Census: Lessons Learned for Locating and Counting Migrant and Seasonal Farm Workers. GAO-03-605. Washington, D.C.: July 3, 2003.
Decennial Census: Methods for Collecting and Reporting Data on the Homeless and Others without Conventional Housing Need Refinement. GAO-03-227. Washington, D.C.: January 17, 2003.
2000 Census: Review of Partnership Program Highlights Best Practices for Future Operations. GAO-01-579. Washington, D.C.: August 20, 2001.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
An accurate decennial census relies on finding and counting people--only once--in their usual place of residence, and collecting complete and correct information on them. This is a daunting task, as the nation's population is growing steadily larger, more diverse, and, according to the U.S. Census Bureau (Bureau), increasingly difficult to find and reluctant to participate in the census. Historically, undercounts have plagued the census, and the differential impact on various subpopulations, such as minorities and children, is particularly problematic. GAO was asked to describe (1) key activities the Bureau plans to use to help reduce the differential undercount and improve participation, (2) the various challenges and opportunities that might affect the Bureau's ability to improve coverage in 2010, and (3) how different population estimates can impact the allocation of federal grant funds. This testimony is based primarily on GAO's issued work in which it evaluated the performance of various Census Bureau operations. The Bureau's strategy for reducing the undercount and improving participation in the 2010 enumeration appears to be comprehensive, integrated, and shaped by the Bureau's experience in the 2000 Census. If implemented as planned, the various activities the Bureau is developing should position the agency to address the undercount. Key operations include building a complete and accurate address list, implementing an Integrated Communications Campaign to increase awareness and encourage participation, and fielding special enumeration programs targeted toward historically undercounted populations. For example, the Bureau develops its address list and maps over the course of a decade using a series of operations that sometimes overlap to ensure all housing units are included. Among other activities, temporary census workers go door to door across the country in an operation called address canvassing to verify addresses.
To help find hidden housing units, the Bureau's workers look for clues such as two mailboxes or utility meters that could indicate additional households. Likewise, the Bureau's communications campaign includes paid media, public relations, and partnerships with national and grassroots organizations, among other efforts, some of which will be targeted toward hard-to-count groups. Despite the Bureau's ambitious plans, a number of challenges and uncertainties remain. For example, the handheld computers that are critical to address canvassing have technical shortcomings, while the communications campaign faces the historical challenge of converting awareness of the census into an actual response. Further, success will depend in large part on the extent to which the various operations (1) start and finish on schedule, (2) are implemented in the proper sequence, (3) are adequately tested and refined, and (4) receive appropriate staffing and funding. It will also be important for the Bureau to have a real-time monitoring capability to track the progress of the enumeration, target its resources to where they are most needed, and quickly respond to various contingencies that could jeopardize the accuracy or cost of the count. Our past work indicates that the accuracy of state and local population estimates may have an effect, though modest, on the allocation of grant funds among the states. Many of the formulas used to allocate grant funds rely upon measures of the population, often in combination with other factors. For example, we analyzed the sensitivity of Social Services Block Grants (SSBG) to alternative population estimates rather than the actual census counts. We selected SSBG for our analysis because the formula, which is based solely on population, and the resulting funding allocations were particularly sensitive to alternative population estimates.
Based on our simulation of the funding formula, 27 states and the District of Columbia would have gained $4.2 million and 23 states would have lost $4.2 million of the $1.7 billion in 2004 SSBG funding.
To examine ways CDRs have demonstrated the capacity to improve the quality and efficiency of physician care, we conducted internet searches and reviewed literature to identify studies assessing the impact of CDRs on quality and efficiency of physician care. We synthesized the results of these studies in terms of the type, scope, and magnitude of changes in quality and efficiency attributed to CDRs, and assessed the strength and limitations of the evidence produced by those studies. We also interviewed officials from seven organizations that operate existing CDRs, including regional health care collaboratives, and officials from a major health plan. These interviews generally included discussion of the type, scope, and magnitude of any changes in quality and efficiency of physician care that they have observed stemming from the activities of CDRs. We selected CDRs that cover patient care across a range of medical conditions, focusing on those that have operated for a number of years and on organizations that have extensive experience using CDR data. To examine HHS’s plans for requirements and oversight of qualified CDRs to maximize CDRs’ potential impact on the quality and efficiency of care, we interviewed HHS officials and reviewed HHS documents on the department’s plans for implementing the qualified CDR program, including both a proposed and final rule and the accompanying preambles, which were published in the Federal Register during the period of our review. In addition, we convened an expert meeting with the assistance of the National Academies’ Institute of Medicine (IOM) to discuss potential requirements for qualified CDRs, the advantages and disadvantages of these requirements, and the oversight that HHS could provide to qualified CDRs to promote the quality and efficiency of care. We worked with staff at IOM to identify experts to participate in the meeting. 
Generally, participants were chosen for their expertise in operating or launching a CDR, as health plan officials or other users of CDR data, as health IT professionals, or as clinical researchers. Representatives from the Centers for Medicare & Medicaid Services (CMS)—the HHS agency charged with implementing the program—and HHS’s Office of the National Coordinator for Health Information Technology (ONC) also attended the meeting. To ensure that participants represented a broad range of views and interests and that we fully understood those interests, we required that participants complete a conflict of interest form. See appendix I for a list of experts who participated in the meeting. We synthesized the experts’ comments at the meeting together with other relevant sources, including related published literature, to assess the potential impact of different program requirements and approaches to providing oversight of those requirements on the effectiveness of qualified CDRs in promoting quality and efficiency of care. To identify barriers, if any, to the development of qualified CDRs and actions HHS can take to minimize their impact, we interviewed officials and reviewed documents from CMS on its plans for implementing the qualified CDR program. We also obtained input from participants during the expert meeting on barriers to the development of qualified CDRs and on what support HHS could provide to reduce these barriers. We synthesized the experts’ comments at the meeting together with other relevant sources, including related published literature, to assess the potential impact of selected actions HHS could take on overcoming the identified barriers to the development of qualified CDRs. To examine the potential of health IT to enhance CDR operations and actions HHS can take to facilitate CDR use of health IT, we reviewed documents and interviewed officials from CMS and ONC on the agencies’ plans related to IT for the qualified CDR program. 
We also interviewed officials to determine how CDRs interact with electronic health record (EHR) systems used by many providers. In addition, we obtained input from participants during the expert meeting on the potential of health IT to facilitate CDR operations. In addition to receiving input from meeting participants, we also interviewed health IT experts who have been involved in applying health IT to the operations of existing CDRs. We conducted this performance audit from March 2013 to December 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Over the past 25 years, a broad range of entities—encompassing the federal Medicare and Medicaid programs, private health insurers, and various provider organizations—have created different systems for assessing physician performance, of which PQRS and CDRs are examples. Early efforts largely focused on the quality of care (i.e., the extent to which patients received care that was likely to improve their health status). More recently, the focus of many of these systems has expanded to include the efficiency of care (i.e., the extent to which high- quality care was provided without using more resources than necessary). In concert with these performance assessment systems, some public and private payers have begun to provide incentives to physicians based on their performance to stimulate improvement over time. These physician performance assessment systems have developed a wide range of performance measures. 
Some are process measures, which assess the extent to which physicians effectively implement clinical practices (or treatments) that have been shown to result in high-quality or efficient care. Others are outcome measures, which track the results of physician care, such as mortality, infections, and how patients experience that care. To assess performance on such measures, these systems have collected information from administrative data sets, including billing data, as well as from patient medical records and patient surveys. Measures used to assess physician performance are composed of a number of clinical data elements, or pieces of data, that must be collected in order to determine performance. For example, the performance measure endorsed by the National Quality Forum (NQF) for acute stroke mortality rate comprises two data elements—the number of stroke patients treated and the number of deaths among those patients. Other measures are more complex and require more data elements. Many of these assessment systems evolved independently and therefore are very different from one another. For example, there is great variability among existing CDRs, which range from those developed by medical specialty societies to those developed by regional health care improvement collaboratives. One of the longest-standing CDRs focused on physician care is the Society of Thoracic Surgeons’ (STS) Adult Cardiac Surgery Database, which was established in 1989 in response to HHS’s publication of mortality rates for individual thoracic surgery programs. According to the STS, HHS’s published rates were misleading because they had not been adjusted adequately for variations in the complexity of patients treated by different programs. The most complicated and highest-risk cases typically have the highest mortality rates, independent of the quality of the surgeon’s performance.
So the STS developed nationally benchmarked performance data with empirically tested risk adjustment models based on detailed clinical variables. Since then, additional CDRs have been developed by medical specialty societies, such as the American College of Cardiology (ACC) and the American College of Surgeons, as well as by regional health care improvement collaboratives, such as Minnesota Community Measurement and the Wisconsin Collaborative for Healthcare Quality. The profusion of these different systems has created difficulties for those involved in using and maintaining them. For example, physicians, along with other providers, have found it burdensome to provide data on multiple performance measures to multiple public and private physician performance assessment systems, which has led to efforts to align these systems. For example, CMS has announced its intention to maximize the extent to which physicians can satisfy its different performance assessment programs by submitting one set of data. There have also been efforts to develop a consensus among public and private groups concerning top priority objectives for improvement. For example, in 2011 the Secretary of HHS issued a National Quality Strategy based on input from major health care stakeholders, which established six broad priority domains. However, efforts to bring various systems into greater alignment based on specific national priorities are complicated by the diversity of care that physicians provide. For example, primary care physicians treat patients with conditions that fall under nearly 400 different diagnostic categories, making it difficult to assess their performance appropriately with a limited number of measures. Collectively, specialist physicians also encompass a broad range of conditions and treatments. 
While some dimensions of quality and efficiency may apply broadly across most physicians, such as the extent to which they coordinate their care effectively with a patient’s other care providers, other important aspects of physician performance are distinct for different medical conditions. Physicians are increasingly using EHRs to collect and report the data needed for performance assessments, in part as a result of direct governmental encouragement. HHS’s Medicare and Medicaid EHR programs provide incentive payments to eligible participants as they adopt, implement, or upgrade certified EHR technology and demonstrate its meaningful use. To receive these incentives, eligible providers, including physicians, must first adopt EHR technology that is certified specifically for the EHR Incentive Programs. Certified EHR systems must meet specific criteria, including having the ability to store data in a structured format to facilitate retrieval and use of the data by other systems, and having the capability to collect and report data on a large number of clinical quality measures defined by HHS. Physicians must then demonstrate that they are using their certified EHR systems in meaningful ways that can positively affect the care of their patients, including conducting quality assessments using some of the clinical quality measures. The EHR incentive programs, which began in 2011, are scheduled to be implemented in three stages. Stage 1 is focused primarily on data capture and sharing. Stage 2 is scheduled to begin in 2014 and will focus on improving selected clinical processes. Stage 3 is expected to begin in 2016 and to focus on improving quality, safety, and efficiency outcomes. CDRs have demonstrated a particular strength in assessing physician performance through their capacity to track and interpret trends in health care quality over time. 
This strength derives from the fact that CDRs typically collect extensive, standardized clinical data on large numbers of patients that provide more clinical detail than can be obtained from administrative data. Those clinical data provide the basis for developing sophisticated risk models, which enable CDRs to compare physician performance with appropriate adjustments for variations in the severity level and other attributes of the patients they treat. CDRs also collect extensive, standardized data on treatments provided to large numbers of patients over extended periods of time, which permits CDRs to explore in depth how treatment variations affect patient outcomes. Moreover, CDRs collect information about many different types of patients encountered by physicians, including those with complex combinations of medical conditions, who are often excluded from clinical research studies. This enables CDRs to analyze trends for the full population of patients that participating physicians actually treat, and examine variations across many different subgroups of patients. Studies examining outcomes reported by several long-established CDRs demonstrate the utility of CDR data for analyzing trends in both outcomes and treatments. For example, CDR data on treatment of acute myocardial infarction over 15 years show major increases in guideline-recommended treatment and a 39 percent reduction in overall mortality. 
Similarly, results from the STS Adult Cardiac Surgery Registry show substantial improvements in mortality and morbidity rates for coronary artery bypass graft surgery—24.4 percent lower mortality and 26.4 percent lower postoperative stroke over 10 years—that were linked to refinements in clinical surgical techniques. Data from other CDRs have highlighted areas where quality improvements have been more mixed, such as a medical oncology CDR that showed marked improvements over 5 years only on measures of whether providers had adopted new clinical practices, while other measures of clinical quality remained relatively high on some dimensions and low on others. Another study presented comparable results for a state-based CDR assessing primary care. It found major improvements over 6 years on some measures, such as kidney function monitoring for diabetic patients, but considerably less improvement on others, such as lipid testing and control for patients with coronary artery disease. CDR efforts to improve outcomes typically involve a combination of performance improvement activities, including feedback reports to participating physicians, benchmarking physician performance relative to that of their peers, and related educational activities designed to stimulate changes in clinical practice. Officials of the CDRs from which these results were reported described to us a range of educational materials and related activities that they have developed targeted to physicians who do relatively poorly on specific measures. They generally view these additional education activities as essential to achieving improved performance. Using a within-registry, national-level randomized controlled trial, one study of the STS registry demonstrated a positive impact from providing surgeons with educational materials targeted to key process measures related to cardiac surgery.
Another study used CDR performance data to assess the effectiveness of a health plan’s interventions to promote changes in practice. Specifically, it found that physician groups participating in the health plan’s regional collaboratives had lower risk-adjusted mortality and better composite quality scores over time than physician groups in other states that participate in these CDRs. Compared with their efforts related to quality, CDRs have typically provided less insight into ways to improve the efficiency of care. For example, none of the studies of CDR results that we examined addressed changes in the cost of care directly. However, studies of CDRs focusing on surgical care reported improvements on several measures related directly or indirectly to the use of resources, such as rates of complications. The potential to draw inferences about costs from rates of complications was demonstrated by researchers affiliated with Michigan Blue Cross and Blue Shield. They used CDR data to estimate that their regional surgical collaborative had led to 2,500 fewer patients with surgical complications per year, which—when considered with the average cost of those complications—translated to annual savings of about $20 million. One reason why CDRs have not typically assessed the cost of care is that their data have usually been limited to information available from patient medical records recorded during the course of treatment, including patient risk factors, process measures concerning the treatments provided, and short-term outcomes such as inpatient mortality and morbidity. These data do not typically include information on costs, or other information relevant to assessing longer-term outcomes and changes in patient functioning (e.g., patient-reported outcomes). To obtain this information, CDRs need to turn to other data sources, as some have started to do.
For example, the STS has begun, for specific research projects, to obtain Medicare claims data and merge those data with its own CDR data to examine costs and long-term outcomes. The results reported to date from existing CDRs are also limited in terms of their scope and capacity to determine the independent impact of CDRs on physician performance. Most of the studies examining the outcomes of CDRs that we found come from a relatively small number of CDRs run by certain medical specialty societies—including the STS for cardiac surgery, the American College of Surgeons for general and vascular surgery, and the American Society for Clinical Oncology for medical oncology—plus one ambulatory care CDR run by a regional health care improvement collaborative, the Wisconsin Collaborative for Healthcare Quality. Most of these are CDRs that have been in operation for close to a decade or more, and therefore have substantial longitudinal data from which to analyze trends over time. Even within the given specialty or region targeted by the CDR, the scope of care addressed by the studies we examined was typically limited. For example, the study of cardiac surgery focused on one procedure—coronary artery bypass graft surgery—while the study of ambulatory care in Wisconsin examined diabetes, coronary artery disease, and hypertension management plus three cancer screenings and a vaccine. Moreover, CDRs by design collect observational data with no predetermined designation of treatment and control groups, as would be done in a randomized controlled trial. Therefore, it is difficult to use CDR data to assess the independent effect of the CDR on physician performance, relative to other factors. A few studies have compared CDR results to control groups and found that the CDRs had at least a modest impact on outcomes relative to the controls.
However, these analyses were limited to measures where the same data were available from other sources for both CDR patients and control groups, which means that the data used in those analyses did not have the clinical detail (or validation) of the CDR-collected data. Moreover, patients in the control groups also differed to some extent from patients in the CDRs in ways that may affect the results reported. HHS’s plans for CMS’s implementation of the qualified CDR program include little specificity on how CDRs will improve quality and efficiency. Setting key requirements, with greater specificity, for CDRs to become qualified could help promote improved quality and efficiency of care. In addition, effective oversight of these requirements depends on expert judgment to take account of variation among CDRs in their circumstances and opportunities for improvement. CMS’s plans for implementing the qualified CDR program, when it begins in 2014, offer little specificity concerning CDR objectives or results and provide substantial leeway for CDRs seeking to become qualified. CMS’s plans include certain minimal attributes for entities to be considered, such as having been in existence with at least 50 participants for no less than one year. In addition, CMS’s plans include a number of largely procedural performance requirements that qualified CDRs would have to satisfy. Among the most important are the following: Data collection: A qualified CDR must collect and report to CMS data for at least nine quality measures, at least one of which must be an outcome measure. Collectively, these measures must cover at least three of the six domains of HHS’s National Quality Strategy. The measures should include patient data from multiple payers and be risk-adjusted, where appropriate.
Data validation: A qualified CDR must attest to CMS that all the data it submitted were accurate and complete, submit a data validation strategy for verifying the accuracy and completeness of data it collected, perform the steps described in its validation strategy and provide CMS with evidence of successful results, and make available to CMS samples of patient data that CMS could use for its own audits of data validity. Data security: A qualified CDR must have a plan to maintain data security and privacy, have appropriate business associate agreements with participating physicians to satisfy federal patient privacy requirements, and use specified methods to transmit quality data to CMS in one of two specified data formats. Transparency: A qualified CDR must make publicly available information about its measures, including their supporting evidence or rationale, data elements, and criteria for including and excluding patients. Improvement activities: A qualified CDR must provide feedback reports to participating physicians on their performance at least four times a year, with benchmarks derived from the CDR’s own database or external sources. As a whole, CMS’s plans provide substantial leeway in key areas regarding what could constitute satisfactory CDR performance. For example, CMS’s plans grant CDRs wide latitude in selecting which measures they will collect, as long as they cover three of six National Quality Strategy domains and at least one is an outcome measure. Similarly, CMS’s plans state that CDR feedback to participating physicians should include benchmark information, but CDRs will have discretion to determine the appropriate benchmark, either derived from the CDR’s own data or drawn from an external source. The extensive leeway in CMS’s plans would allow diverse registries to become qualified. 
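The minimum data-collection criteria described above (at least nine quality measures, at least one outcome measure, and coverage of at least three of the six National Quality Strategy domains) can be expressed as a simple eligibility check. This is a sketch only; the domain labels are shorthand and the measure tuple layout is hypothetical, not a CMS data structure.

```python
# Shorthand labels standing in for the six National Quality Strategy
# domains; these are hypothetical identifiers, not official names.
NQS_DOMAINS = {
    "patient_safety", "care_coordination", "patient_experience",
    "population_health", "efficient_use", "clinical_process_effectiveness",
}

def meets_measure_requirements(measures):
    """Check a candidate measure set against the minimum criteria in
    CMS's plans: >= 9 measures, >= 1 outcome measure, and coverage of
    >= 3 of the 6 NQS domains. Each measure is a hypothetical
    (name, domain, is_outcome) tuple."""
    if len(measures) < 9:
        return False
    if not any(is_outcome for _, _, is_outcome in measures):
        return False
    domains = {d for _, d, _ in measures if d in NQS_DOMAINS}
    return len(domains) >= 3
```

For example, nine measures spread across three domains with one outcome measure would pass, while an otherwise identical set of eight measures, or one with no outcome measure, would not.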
Because registries are typically designed to focus on a particular set of patients, defined by medical condition, type of treatment received, or geographical location, they will inevitably vary substantially in their opportunities to promote quality and efficiency of care. Therefore, the broad parameters in CMS’s plans are compatible with a wide range of CDRs potentially addressing diverse types of physician care. However, the flexible approach for qualifying CDRs may at the same time provide minimal impetus to CDRs to take full advantage of their specific opportunities to promote the quality and efficiency of care. For example, CMS’s plans do not include a process or criteria for assessing the extent to which the measures selected by a CDR in fact address the key opportunities that could result in improved care for its particular target population. In addition, CMS has not provided any details on how it plans to interpret or enforce program requirements for CDRs. For example, CMS has not described what CDRs would need to do to make their data validation strategies acceptable to CMS. Nor has it described the minimal thresholds of accuracy and completeness that CDRs would need to attain, which could help CMS to audit CDR data as necessary in the future. CMS has also not described how it intends to provide oversight to ensure that CDRs comply with the requirements, beyond having CDRs submit a self-nomination statement, initially on an annual basis. Greater specificity in both the requirements for CDRs and the mechanisms for enforcing them is likely to develop with time. CMS has not yet implemented the qualified CDR program, but in the preamble that accompanied its final rule, CMS stated that, as it gains programmatic experience, it anticipates making changes in future rulemaking to the requirements for becoming a qualified CDR. 
However, CMS has not yet articulated the direction or ultimate goals that it seeks to accomplish through this evolution, except that, to the extent possible, it will seek to align the requirements for CDRs more closely over time with requirements for other federal quality programs. We identified several key requirements for qualified CDRs that, based on our synthesis of the input from experts at the meeting we convened with the assistance of IOM together with other relevant sources, would contribute to improved quality and efficiency of care for Medicare patients. Such requirements could affect quality and efficiency both by determining which entities are designated as qualified CDRs and by encouraging certain activities by CDRs after they are designated as qualified. We identified the following key requirements and assessed the extent to which they are addressed by CMS’s plans for implementing the qualified CDR program: 1. Performance measures to address key opportunities: Having qualified CDRs focus their data collection on performance measures that address specific opportunities to improve quality and efficiency for each CDR’s target population enhances their effectiveness in promoting quality and efficiency overall. Input from experts and other relevant sources indicates that appropriate performance measures would encompass broadly defined measures of patient outcomes, such as patient experience and function, and consider the appropriateness of the chosen treatment, compared to available alternatives. Rationale: For any given patient population, defined by medical condition, treatments received, geographic location, or other attribute, there are a wide range of existing or potential performance measures on which a CDR could focus. If those measures are not well selected, they may divert the attention of participating physicians to clinical issues that are overly narrow or fail to uncover actual differences in quality and efficiency. 
Every CDR faces the choice of where it should focus its data collection and analytical resources, though the specific clinical issues that offer the greatest opportunity for improved quality and efficiency will vary from one CDR to another, depending on its target population and the depth of the evidence base currently established for its field of clinical practice. CDRs need to make strategic choices that make the most of existing knowledge and strategies for improving quality and efficiency while also helping to incrementally expand that evidence base over time. Comparison to CMS plans: CMS plans to leave measure selection to the discretion of each CDR, within the broad parameters of covering three National Quality Strategy domains and including at least one outcome measure. CMS has not described expectations regarding how well targeted those measures are relative to the specific quality or efficiency deficiencies of that CDR’s target population. Nor has CMS required that CDRs collect information on patient experience and functional outcomes or address the appropriateness of treatments provided. 2. Core set of measures: Input from experts and other relevant sources indicates that having qualified CDRs collect data for a minimum set of core performance measures with standardized definitions and specifications as part of their overall data collection effort would enable CDRs to address broad, shared objectives regarding both quality and efficiency. Rationale: While CDRs are free to collect a wide range of measures reflecting quality and efficiency opportunities in their particular target populations, there are certain measures that apply across the patient populations covered by different CDRs. Some of these relate to national-level quality improvement objectives such as improving care coordination.
In order for CDRs to contribute to these broader national priorities, CDRs could collect the relevant data for their patients in a standardized fashion that permits sharing and aggregating of the data across CDRs and other sources of quality data. A core measure set would align CDRs with key national level priorities on quality and efficiency, while still allowing for innovation and permitting CDRs to collect other data that address regional or specialty-specific concerns. Comparison to CMS plans: CMS has not established common measures across qualified CDRs. To the extent that registries report on different measures within the six National Quality Strategy domains, they would not produce results that could be aggregated to assess progress overall. In addition, results could not be compared across different CDRs, which may be useful, for example, to examine cardiac patients receiving medical or surgical treatments. 3. Data accuracy and completeness: Input from experts and other relevant sources indicates that the credibility of CDR results relies on having CDRs implement a systematic and rigorous process for ensuring the accuracy and completeness of the data they collect and analyze. Rationale: Assessing physician performance with inaccurate or incomplete data is likely to produce misleading and invalid results. Therefore several existing CDRs have instituted regular external audits of the data submitted to their databases. However, the appropriate form of systemic and rigorous checking of the data may vary depending on the CDR’s focus and method of data collection. For example, one long-standing CDR has annual external audits conducted of the data it collects, auditing 8 percent of participating physicians in 2013, to ensure that reported data are accurate compared to the original records from which the data were collected. Auditors also check hospital logs to make sure that data on all eligible cases were submitted. 
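The log-reconciliation step described above, in which auditors check hospital logs to confirm that data on all eligible cases were submitted, amounts to a straightforward set comparison. A minimal sketch, using hypothetical case identifiers:

```python
def completeness_audit(registry_case_ids, hospital_log_ids):
    """Reconcile cases submitted to the registry against the hospital's
    own case log. Returns the eligible cases missing from the registry
    and the resulting completeness rate."""
    registry = set(registry_case_ids)
    eligible = set(hospital_log_ids)
    missing = sorted(eligible - registry)
    rate = (len(eligible) - len(missing)) / len(eligible) if eligible else 1.0
    return missing, rate

# Hypothetical example: the hospital log lists three eligible cases,
# but only two were submitted to the registry.
missing, rate = completeness_audit(
    registry_case_ids=["a12", "b34"],
    hospital_log_ids=["a12", "b34", "c56"],
)
```

A real audit would also verify the accuracy of the submitted data against the source records, as described above; this sketch addresses only the completeness side of that check.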
By contrast, an official for a different CDR that relies on electronic data extraction from EHRs described the use of statistical methods to identify outliers in the data that may indicate a data collection error. Comparison to CMS plans: CMS’s plans state that CDRs must submit a data validation strategy that is acceptable to CMS, but CMS has not described either the approach or the intensity of the CDR efforts expected. CMS has also not detailed how it would evaluate the strategies for acceptability, or how it might evaluate CDR data for validity. Because data validation tends to be a labor-intensive and expensive activity for CDRs, the absence of specific validation requirements once the program is implemented may cause some CDRs to curtail their validation efforts. 4. Participation levels: Input from experts and other relevant sources indicates that CDRs need to achieve a substantial level of participation to ensure that their results represent the physicians that make up their target population, but for newly established CDRs it often takes time to achieve this level of participation. Rationale: Registries that recruit a relatively low proportion of physicians within their target population may not have the data needed to support accurate risk adjustments and benchmarking. However, historically it has taken time for registries to become well established. Rather than setting a minimum proportion, a requirement to disclose the level of participation in a CDR may partially compensate for low levels of participation by alerting potential users of the data to take those limitations into account. Comparison to CMS plans: CMS has not addressed the issue of how well a CDR represents physicians treating its targeted patient population. The planned required minimum of 50 participants may constitute only a very small fraction of those physicians.
However, CDRs may use benchmarks developed with data from external organizations, such as the National Committee for Quality Assurance, which could help registries with low participation to achieve more accurate benchmarking. 5. Performance improvement: Input from experts and other relevant sources indicates that CDRs improve quality and efficiency by supplementing timely feedback on physician performance with information that targets needed practice changes. Rationale: The potential for CDRs to promote quality and efficiency improvements depends in large part on their ability to provide physicians with “actionable information” that identifies not only where performance is deficient but also specific changes in behavior that a physician could make to improve outcomes. For example, one CDR official told us that in addition to performance feedback and benchmarking, the CDR teaches, provides leadership, and supports hospitals and providers in quality improvement and change management. Another CDR official explained that they use CDR data to determine where additional tools are needed for physician development. The CDR provides virtual education programs and develops improvement tools for providers. The CDR officials we spoke with generally agreed that it is vital for CDRs to use data to inform quality improvement initiatives, rather than simply collecting the data. Comparison to CMS plans: CMS’s plans would require that qualified CDRs provide participating physicians at least four feedback reports per year with benchmarks of some kind, but they do not require qualified CDRs to undertake any quality initiatives beyond feedback reports. 6. Public reporting: Input from experts and other relevant sources indicates that having CDRs provide some form of public reporting can promote greater quality and efficiency.
However, to avoid unintended adverse effects, public reporting may be limited to selected measures that are particularly useful to patients and/or be phased in over time. Rationale: Public reporting can often help to motivate quality and efficiency improvement, but under some circumstances may also diminish physicians’ receptivity to negative information and their willingness to participate in CDRs. For example, a CDR may encourage competing providers to collaboratively examine their performance data to identify patterns and sources of suboptimal care. Some of these providers may not be willing to participate in such quality improvement efforts if doing so involves publicly reporting data that could put them at a competitive disadvantage. In this way, differences across CDRs in the kind of data they collect and how they use them may affect the results available to be shared with the public and the possible ramifications of doing so. Comparison to CMS plans: CMS initially proposed that qualified CDRs have a plan to publicly report results for individual physicians, with benchmarks. In response to public comments that raised concerns about the cost and time associated with public reporting, CMS did not adopt this requirement. Instead, the preamble to the final rule states that CMS encourages qualified CDRs to move toward public reporting, and that it will revisit this proposed requirement in the future. 7. Demonstrating results: Input from experts and other relevant sources indicates that CDRs are more likely to achieve improvements in physician performance if they have specific incentives to do so. Therefore, requiring qualified CDRs to demonstrate improvement over time on the quality and efficiency measures that they collect would help to focus their attention on achieving results.
Rationale: Both the financial incentives that CDRs will extend to participating physicians and the flexibility allowed in how they choose to operate are intended to promote improved quality and efficiency of care. Therefore, successful CDRs will begin to realize their potential to improve care by demonstrating results on key improvement opportunities for the CDR’s target population. Because those opportunities vary across CDRs, the magnitude of improvement that can be expected of different CDRs will also vary. At a minimum, each CDR has the ability to identify its key targets for improvement and begin to make incremental progress toward them. Comparison to CMS plans: CMS has not described any expectations regarding the results of qualified CDR activities. To effectively implement requirements for qualified CDRs that focus on improving quality and efficiency, expert judgment is needed to interpret those requirements in accordance with the CDRs’ differing circumstances and opportunities for improvement. In particular, according to experts and other relevant sources, assessing both potential and actual effects of individual CDRs on quality and efficiency of care requires an understanding of what those particular CDRs could do to change physician practice and achieve improved performance. This will depend on the state of clinical research and other factors that affect what is currently known about opportunities to improve quality and efficiency in each CDR’s area of medical practice. For example, expert judgment is needed to determine whether the particular set of measures adopted by a CDR effectively addresses the key quality and efficiency opportunities for improvement for the target population of that CDR. 
In addition, expert judgment could help to determine what adjustments to make in performance expectations for recently established CDRs, which, compared with CDRs that have been in operation longer and have achieved higher levels of physician participation, may need time to build their capacity to promote improvements in quality and efficiency. Experts and other sources we consulted suggest a range of potential sources that CMS could draw on to provide this expert judgment for assessing qualified CDRs. They include relying on staff within CMS, contracting with outside experts, and delegating certain aspects of oversight to independent organizations. For example, one variation of the latter option might be to set up a deeming process to select one or more outside entities that meet CMS-determined criteria for carrying out all or part of this oversight function. Each of those options has strengths and limitations in terms of, for example, its resource requirements, adaptability to varying situations, and responsiveness to agency priorities (such as promoting alignment with other quality programs). CMS could consider these different strengths and limitations in building an organizational structure for monitoring qualified CDRs that draws on expertise from one or more of these sources. Based on our synthesis of the input from experts at the meeting we convened with the assistance of IOM together with other relevant sources, there are several actions that HHS could take that could help reduce potential barriers to the development of qualified CDRs. Reducing these barriers would make it easier for qualified CDRs to get started and expand the scope of their activities and thereby improve the quality and efficiency of physician care provided to Medicare beneficiaries, according to the input from experts and other relevant sources.
Concerns about Complying with Privacy Regulations: Some CDR officials report that the recruitment of new participants is made more difficult by widespread concerns among physicians that submission of data to a CDR risks violation of the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. Under the Privacy Rule, protected health information may be used or disclosed only for specified permitted purposes. Because CDR data are often used for the purposes of both quality improvement activities and clinical research, it may often not be clear whether, or which, permitted use or disclosure applies. This lack of clarity can make it more difficult for CDRs to collect and analyze clinical data for either purpose. CDR officials stated that a particular concern of potential CDR participants is the perceived need for individual patient authorization or approval by an Institutional Review Board (IRB) to ensure compliance with HIPAA requirements. CMS has indicated that CDRs must enter into an appropriate business associate agreement with participating physicians that provides for receipt of patient data and public disclosure of quality measure results. However, it has not addressed physician concerns regarding the perceived need to meet HIPAA Privacy Rule requirements for research uses and disclosures. The HHS Office for Civil Rights monitors compliance with HIPAA requirements and issues various types of guidance to explain how those requirements apply under different circumstances. Input from experts and other relevant sources suggests that physicians and CDRs could benefit from guidance that provides a detailed explanation of what CDRs need to include in their business associate agreements with participating physicians and what activities would trigger the need for individual patient authorization or IRB approval of their data collection and analysis activities.
Lack of a Unique Patient Identifier: Attempts to implement a unique personal identifier as part of patients’ records to enable matching were abandoned due to concerns about its potential impact on patient privacy. Alternative methods exist for matching patient data without using a unique patient identifier, including algorithms that make probabilistic matches based on several discrete data elements. However, these approaches often fall short of matching data from multiple sources for all patients, due in part to variations in the algorithms themselves and the data elements they use for performing these matches. Input from experts and other relevant sources suggests that HHS could work on developing a standardized process for matching and linking patient data that does not require the use of a unique patient identifier, including a uniform algorithm and associated data specifications. HHS could then work with other health care entities to adopt this standardized approach across the spectrum of relevant data sources to better address the need of CDRs to link data from other sources in order to perform a more complete assessment of physician performance. Lack of Patient Cost Data: Because CDRs derive most of their data from patient medical records, they typically lack information about the cost of patient care needed to address questions about the efficiency of care. The most fundamental problem with obtaining cost data is that cost data are fragmented among the various public and private payers for health care, including private health insurers as well as Medicare and Medicaid. Even when CDRs limit their focus to the Medicare population, they have had to negotiate with CMS for access to Medicare claims data for each particular research project. To facilitate and encourage CDR analysis of the efficiency of physician care, input from experts and other relevant sources suggests that CMS could make its cost data for Medicare and Medicaid patients generally available to qualified CDRs.
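The probabilistic matching approach mentioned earlier scores agreement across several discrete data elements rather than relying on a single unique identifier. The following is a minimal sketch of that idea; the field names, weights, and threshold are hypothetical, and production record-linkage algorithms are considerably more sophisticated (for example, handling typographical variation and estimating weights from the data).

```python
def match_score(rec_a, rec_b, weights):
    """Score agreement between two patient records on several discrete
    data elements (e.g., birth date, sex, ZIP code). Each agreeing,
    non-empty field contributes its weight to the score."""
    score = 0.0
    for field, weight in weights.items():
        if rec_a.get(field) and rec_a.get(field) == rec_b.get(field):
            score += weight
    return score

def is_probable_match(rec_a, rec_b, weights, threshold):
    """Declare a probable match when the agreement score clears a
    chosen threshold."""
    return match_score(rec_a, rec_b, weights) >= threshold

# Hypothetical weights: rarer, more discriminating fields count more.
WEIGHTS = {"birth_date": 4.0, "sex": 1.0, "zip": 2.0, "last_name": 3.0}

a = {"birth_date": "1948-03-02", "sex": "F", "zip": "53703", "last_name": "Smith"}
b = {"birth_date": "1948-03-02", "sex": "F", "zip": "53703", "last_name": "Smyth"}
# Agreement on birth date, sex, and ZIP (score 7.0) clears a threshold
# of 6.0 even though the surname was recorded differently.
```

As the surrounding text notes, results of such matching vary with the algorithm and the data elements chosen, which is one motivation for a standardized matching process.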
In addition, although HHS has less direct control over the cost data collected by private health insurers, some health insurers have begun to work with states, HHS, and others to assemble “all-payer” claims databases that combine public and private health care spending data. HHS could examine the potential for making these “all-payer” claims databases available to qualified CDRs. Difficulty of Funding CDRs: CDRs frequently have difficulty finding a sustainable flow of funding from the participating physicians to maintain the resource-intensive activities necessary for their work, including collecting and validating detailed clinical data, which requires highly trained staff. Under the new program, participation in a qualified CDR will entitle physicians to the same incentive payments and exemption from penalties provided to PQRS participants, which could help to encourage physicians to participate in CDRs and to fund their operations. However, experts report that participation in PQRS remains a cheaper and easier way to obtain those benefits. HHS is looking into expanding incentives for physician participation in CDRs by coordinating with additional federal programs, such as the EHR incentive program, as well as possible coordination with related nongovernmental activities, such as maintenance of certification requirements established by various boards of medical specialties. An alternative approach for providing additional funding to qualified CDRs raised at our expert meeting would be to share with them some of the financial benefits that the CDRs may generate for the Medicare program. Doing so could benefit CDRs that are successful in producing these benefits while promoting program savings for Medicare.
For example, HHS could consider testing models of “shared savings” programs— possibly through CMS’s Center for Medicare and Medicaid Innovation— that would provide CDRs or their participating providers with a portion of any cost savings for the Medicare program that resulted from their activities. To do this, CMS would have to develop a credible methodology for determining the extent of savings that a qualified CDR’s activities had produced for Medicare. Need for Technical Assistance: The first CDRs established by medical specialty societies reported taking many years to work out how best to accomplish the complex technical tasks needed to get a new CDR up and running. These include procedures for deciding what measures to collect, appropriate and feasible data collection and submission processes, implementation of risk adjustment, provisions for maintaining data security and protecting patient privacy, and effective data validation procedures. Several CDRs that have followed have turned to those first CDRs for informal guidance, to learn from their experience. Input from experts suggests that HHS could consider creating or facilitating the development of a CDR resource center that would offer qualified CDRs, or CDRs seeking to become qualified, technical assistance in the initial phases of setting up a CDR. Such a CDR resource center could draw on expertise from existing CDRs or other relevant sources and could help new registries launch successfully and more quickly achieve an adequate level of physician participation. In recent years, some CDRs have developed different approaches to electronically capture data from a wide variety of health IT applications, particularly EHR systems. Input from experts and other relevant sources suggests that HHS could help CDRs overcome barriers that impede the electronic collection and transmission of clinical data by supporting standard setting and adjusting meaningful use requirements. 
Health IT applications, including EHRs, could offer CDRs substantial support in collecting and transmitting large amounts of detailed clinical data from participating physicians’ medical records. CDR officials report that, without such IT support, data collection is a time-consuming process in which specially trained staff must manually abstract data from medical records, synthesizing information from patient charts and other records, and format the data for transmission to the CDR. These trained data abstractors must often make judgments on how to interpret certain information in the record to meet the CDR’s data specifications and definitions. For example, the word “pneumonia” may not appear in the medical record for all patients with the condition. Therefore, an abstractor may need to interpret the record’s data on patient encounters, chest x-ray results, or stethoscope breath sounds to determine whether a patient had pneumonia as defined by the CDR. In addition, most data collection is performed days or weeks after care is provided, rather than at the time of the care, which can substantially delay feedback to physicians. Input from experts and other relevant sources suggests that EHR systems, if appropriately designed and implemented, have the potential to greatly increase the efficiency of extracting data from patient records and transmitting these data to CDRs. The use of EHR systems across the country is growing; the proportion of office-based physicians using any type of EHR system increased from 51 percent in 2010 to 72 percent in 2012. If CDRs could receive and aggregate electronically extracted data from EHR systems, the need for manual abstraction by trained professional staff could be reduced or eliminated.
Reducing the burden of manual data abstraction could have a number of long-term benefits, including reducing costs for physicians to participate in the CDR, reducing the amount of time a practice spends on CDR data collection activities, and increasing overall participation of physicians in CDRs. Health IT experts also note that automated data collection from EHR systems makes it possible for CDRs to provide physicians with more timely feedback on care they have recently provided, compared to manual data collection. In addition, EHR systems as well as other health IT applications have the potential to facilitate information sharing among CDRs and other potential users of health care quality and efficiency data, allowing for comparison across CDRs and providing a more comprehensive and long-term view of the outcomes of patient care. Some CDRs have adopted IT approaches that allow them to automatically extract at least some information from their participating members’ EHR systems into the CDR’s database. However, these approaches have some important limitations. For example, experts reported that some CDRs use a method called Retrieve Form for Data Capture (RFD), which informs the physician through a trigger in their EHR system when a patient may be eligible for inclusion in the CDR database. The RFD uses data from the EHR system to automatically prepopulate the CDR’s web-based data collection form. However, the RFD then requires that the physician interrupt work to enter the remaining information that was not automatically captured from the EHR. The RFD also works only with EHR systems from a few different vendors. Another example was described by an official from the ACC’s Pinnacle registry, which has implemented a more comprehensive system for electronically capturing data from a wider variety of EHR systems. Within each physician practice, system integration software is installed on the same server that hosts the EHR system. 
The software is designed, developed, and implemented to automatically extract data directly from the physician’s EHR system for transmission to the CDR database. After a period of testing and adjustment to adapt the software to the EHR system’s specific data structure, it can automatically capture 75 to 90 percent of the desired information. The ACC has determined that this electronic data collection results in higher levels of physician participation, and therefore is worth the tradeoff of doing without the portion of data that cannot be captured electronically. However, according to an ACC official, the system does not work with EHR systems produced by certain vendors, has been costly to implement, and may not be feasible for CDRs in other fields of medicine, where there is less consistent use of clinical terminology than in cardiology. Experts and other relevant sources indicate that variation in EHR systems on several key dimensions impairs CDRs’ ability to collect electronic data from participating physicians. First, EHR systems can differ in which data elements they collect: some systems collect more information on some topics than others, because physicians in different specialties have different needs and interests. Second, EHR systems can differ in how they store data. In order to automatically extract data from the EHR, CDRs must develop methods for converting the data in each EHR to a format that the CDR’s IT system can accept and accurately interpret. For example, an ACC official told us that one reason their system has been costly to implement and does not work with EHR systems from certain vendors is differences in how data are stored in different EHR systems. Finally, even if EHR systems collect the same basic content and use compatible storage methods, their data elements may be specified or defined differently.
For example, an EHR may identify a smoker based on whether a person smoked any number of cigarettes in the last year, while another may count as a smoker anyone who has smoked at least 100 cigarettes in the past and still currently smokes. While both of these definitions may serve various purposes, the information collected from each EHR on smoking would not be fully comparable. These variations in EHR data content, storage, and specifications can impact a CDR’s ability to extract data electronically from physician EHR systems. In order to assess physician performance, CDRs have to collect all the data elements needed for their performance measures and ensure that those data elements are consistent with the CDR’s data specifications. Consequently, CDRs cannot take full advantage of EHR systems to facilitate data collection and transmission unless they can overcome these variations in content, storage, and specifications across existing EHR systems. One way to reduce variation across health IT applications, including EHR systems, and thereby facilitate collection and transmission of clinical data, is to develop and implement relevant health IT standards. According to experts, CDRs could benefit from health IT standards that reduce variation across EHR systems on the data elements needed for the measures used by CDRs. Where such standards are in place, they are available for vendors to use in designing and implementing EHR systems. As a result, different vendors would be more likely to develop EHR systems with consistent clinical data, in terms of their content and specification. Such consistency could make it easier for CDRs to collect these data from different EHR systems, as long as the standards aligned with the CDR’s own data specifications and needs. However, standards may not always align with CDR needs. 
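The specification mismatches described above, such as the smoking-status example, can be illustrated with a hypothetical normalization step of the kind a CDR's IT system might perform. The field names and definitions below are invented for illustration and do not come from any actual EHR system or CDR specification.

```python
# Hypothetical sketch: mapping differently specified EHR data elements
# (the smoking-status example above) to a single CDR definition.
# All field names and definitions are illustrative assumptions.

def is_smoker_ehr_a(record: dict) -> bool:
    # EHR A: smoker = smoked any cigarettes in the last year
    return record.get("cigarettes_last_year", 0) > 0

def is_smoker_ehr_b(record: dict) -> bool:
    # EHR B: smoker = at least 100 lifetime cigarettes AND currently smokes
    return (record.get("lifetime_cigarettes", 0) >= 100
            and record.get("currently_smokes", False))

def normalize_smoking_status(record: dict, source: str):
    """Map a source record to the CDR's specification (here, EHR B's
    stricter definition). Where a source system cannot supply a required
    element, return None (not comparable) rather than guessing."""
    if source == "ehr_b":
        return is_smoker_ehr_b(record)
    if source == "ehr_a":
        # EHR A lacks lifetime counts, so its data cannot satisfy the
        # CDR's definition; flag as missing rather than infer.
        return None
    raise ValueError(f"unknown source: {source}")
```

The sketch shows why a CDR cannot simply pool fields with the same label: unless each source system's definition aligns with the CDR's specification, the resulting values are not comparable and must be flagged as missing.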
For example, one CDR official reported that the existing health IT standard for cancer staging does not provide the level of detail needed by the oncology CDR, Quality Oncology Practice Initiative, to assess physician compliance with treatment guidelines targeted by the CDR. Several independent organizations play a role in setting the health IT standards that apply to physician EHR systems. They include international standards-setting groups, each of which creates detailed coding systems, such as SNOMED CT and LOINC, designed to provide a standard way to electronically record one or more categories of clinical information. According to agency officials, while HHS interacts with these groups and may be able to influence the development of new or revised health IT standards, the process for doing so can be lengthy, sometimes taking as long as 2 years. Therefore, at any given time, the extent of existing health IT standards largely constrains what developers of EHR systems can do to implement standardized data elements. A second major factor that experts report affects the design and implementation of EHR systems used by physicians is the meaningful use requirements established by HHS for its EHR incentive programs. HHS establishes two sets of requirements for the EHR incentive programs that potentially affect CDRs: (1) a list of specific clinical quality measures (CQM) that physicians are required to collect using EHR-collected data elements, and (2) certification criteria that specify certain capabilities that EHR systems are required to demonstrate, including the ability to collect the data needed for physicians to report the specified CQMs. To receive an incentive payment, physicians must demonstrate, among other things, that they have used a certified EHR system to collect data for a minimum number of the specified CQMs.
Through its setting of these meaningful use requirements, HHS could influence the extent to which EHR systems are designed and implemented to collect data needed by CDRs to assess physician performance. According to IT experts at our expert meeting, EHR vendors place a high priority on developing EHR systems that are able to collect CQMs prescribed by meaningful use requirements, without which the systems would not qualify for EHR incentive payments. The current set of 64 CQMs focuses predominantly on primary care and generally does not include measures relevant to CDRs, many of which focus on assessing specialty care. HHS has stated its intention to consider revisions to the meaningful use requirements under Stage 3 of the EHR incentive program implementation, scheduled to take effect in 2016. These revisions would give qualified CDRs greater flexibility in meeting the EHR programs’ quality reporting requirements. By also including the data needed by CDRs in its revised meaningful use requirements, HHS could increase the motivation for vendors to include the capacity to collect data elements for measures relevant to CDRs in their EHR systems. Qualified CDRs have the potential to improve the quality and efficiency of care for Medicare beneficiaries by encouraging physicians to submit extensive, standardized data to CDRs, enabling the CDRs to provide feedback to physicians on their performance relative to that of their peers. Studies show that CDRs have great potential to improve quality, and to a lesser extent efficiency, but often that potential is not realized. While implementation of the program is just getting under way and HHS plans to have its program requirements and structure evolve over time, a key question is the extent to which that evolutionary process focuses on harnessing the potential of CDRs to promote quality and efficiency. 
The extent to which HHS’s new program can help CDRs realize their potential to improve quality and efficiency will depend in large part on the content and oversight of the requirements that HHS sets for qualified CDRs and the support that HHS provides. To date, HHS plans have focused on largely procedural requirements for CDRs that collectively would do little to base qualification of CDRs on their potential to affect quality and efficiency or hold them accountable for achieving improvements in those domains. Our analysis identified certain key requirements that HHS could adopt that would make it substantially more likely that qualified CDRs actually would improve quality and efficiency. Some of these key requirements are more important than others to have in place as the program is implemented. From the beginning, the effectiveness of CDRs will depend on their selecting measures that focus the CDRs’ assessment and performance improvement activities on the specific opportunities for improvement that exist for their particular target populations. At the same time, CDRs can also collect a limited core data set that contributes to achieving national quality and efficiency objectives. The credibility of those data depends on CDRs establishing from the start systematic and rigorous processes to validate their accuracy and completeness. HHS can most clearly ensure that each qualified CDR focus on improvements in quality and efficiency by requiring that each CDR demonstrate improvements in key measures of quality and efficiency for its target population. Effective monitoring of these requirements will depend on applying expert judgment that can take account of the variation across CDRs in their target opportunities for improvement. HHS can also enhance the effect of qualified CDRs on quality and efficiency by taking steps to reduce barriers to their development and, in particular, taking account of CDRs in its ongoing efforts to promote health IT. 
Certain steps would be particularly useful as the program gets under way, including clarifying the application of HIPAA privacy requirements to physicians participating in qualified CDRs, addressing the lack of access to multipayer cost data, expanding potential sources of funding to support sustained CDR operations, and providing technical assistance to newly established CDRs. Meanwhile, efforts by some CDRs to adapt health IT to make their data collection less costly and more timely have run into significant barriers related both to gaps in existing health IT standards and to the failure of many current EHRs to apply existing standards to collect data needed by CDRs in a structured format. Changes to EHR capabilities that would enable them to collect such data within existing standards are clearly feasible, but are not high priorities for providers and IT vendors because they are not included in the current set of meaningful use requirements for the EHR incentive program. As HHS determines what the next cycle of meaningful use requirements should comprise, identifying data elements for measures commonly needed by CDRs and including them in meaningful use requirements could substantially assist qualified CDRs in adapting health IT to make data collection less costly and more timely. To help ensure that qualified CDRs promote improved quality and efficiency of physician care for Medicare beneficiaries, we recommend that the Secretary of Health and Human Services take the following five actions: Direct CMS to establish key requirements for qualified CDRs that focus on improving quality and efficiency. 
These requirements could include, for example, having CDRs (1) identify key areas of opportunity to improve quality and efficiency for their target populations and collect additional measures designed to address them, (2) collect a core set of measures established by CMS, and (3) demonstrate that their processes for auditing the accuracy and completeness of the data they collect are systematic and rigorous. Direct CMS to establish a requirement for qualified CDRs to demonstrate improvement on key measures of quality and efficiency for their target populations. Direct CMS to establish a process for monitoring compliance with requirements for qualified CDRs that draws on relevant expert judgment. This process should assess CDR performance on each requirement in a way that takes into account the varying circumstances of CDRs and their available opportunities to promote quality and efficiency improvement for their target populations. Determine and implement actions to reduce barriers to the development of qualified CDRs, such as (1) developing guidance that clarifies HIPAA requirements to promote participation in qualified CDRs; (2) working with private sector entities to make relevant multipayer cost data available to qualified CDRs; (3) testing one or more models of shared savings between Medicare and qualified CDRs that achieve reduced Medicare expenditures with improved quality of care; and (4) providing technical assistance to qualified CDRs. Determine key data elements needed by qualified CDRs—such as those relevant for a required core set of measures—and direct ONC and CMS to include these data elements, if feasible, in the requirements for certification of EHRs under the EHR incentive programs. We provided a draft of this report to HHS for review, and HHS provided written comments, which are reprinted in appendix II. 
In its comments, HHS concurred with our recommendations and stated its intention to apply the experience it gains in implementing the qualified CDR program to facilitate changes that lead to improved quality and efficiency. For example, HHS stated that it saw value in providing greater specificity in the expectations it sets for qualified CDRs, in particular with respect to having them demonstrate improvement in quality and efficiency, once HHS has sufficient experience with the program to establish a baseline against which to assess their performance. HHS also stated its intention to establish a process to monitor the qualified CDR program that would draw on relevant and appropriate expert judgment and to do what it could to reduce barriers to the development of qualified CDRs. In addition, HHS agreed to have CMS and ONC work together to consider the inclusion of key data elements for qualified CDRs as they develop enhanced health IT criteria for the next stage of the EHR incentive programs. Meanwhile, HHS noted several other efforts that it currently has under way to improve health IT systems in general, which can also provide assistance to qualified CDRs attempting to use health IT to facilitate their operations. While HHS concurred with each of our recommendations, its comments also noted some challenges that it expects to face. For example, HHS stated that it will examine the possibility of establishing a core measure set for qualified CDRs, but it observed that doing so could prove difficult given the number of different clinical specialties on which qualified CDRs may focus. As noted in the draft report, a minimum set of core measures—even if small—could help CDRs to promote national-level quality improvement objectives such as improving care coordination by permitting the sharing and aggregating of the data across CDRs and other sources of quality data. HHS also provided us with technical comments, which we incorporated as appropriate. 
We are sending copies of this report to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, the National Coordinator for Health Information Technology, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at kohnl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Pacific Business Group on Health; Network for Regional Health Improvement; National Committee for Quality Assurance; Siemens Medical Solutions, Inc. In addition to the contact named above, Will Simerl, Assistant Director; Emily Binek; Monica Perez-Nelson; Eric Peterson; and Roseanne Price made key contributions to this report.
The American Taxpayer Relief Act of 2012 instructed HHS to establish a new program to designate "qualified" CDRs—entities that would work with physicians treating Medicare patients to collect clinical information and use it to improve the quality and efficiency of care. The act also mandated GAO to report on the potential for CDRs to improve quality and efficiency. This report examines (1) improvements demonstrated by CDRs in quality and efficiency of care, (2) HHS's plans for requirements and oversight for qualified CDRs, (3) actions HHS could take to facilitate the development of qualified CDRs, and (4) actions HHS could take to facilitate CDRs' use of health IT. GAO reviewed relevant studies and documents, and interviewed HHS and CDR officials. GAO also convened an expert meeting with the assistance of the Institute of Medicine and synthesized input from experts and other sources to assess the likely effect of potential program requirements, approaches to oversight, and other actions HHS could take. Clinical data registries (CDR) have demonstrated a particular strength in assessing physician performance through their capacity to track and interpret trends in health care quality over time. Studies examining results reported by several long-established CDRs demonstrate the utility of CDR data sets for analyzing trends in both outcomes and treatments. CDR efforts to improve outcomes typically involve a combination of performance improvement activities including feedback reports to participating physicians, benchmarking physician performance relative to that of their peers, and related educational activities designed to stimulate changes in clinical practice. Studies GAO reviewed provided less insight on ways to improve the efficiency of care. The Department of Health and Human Services' (HHS) plans for implementing the qualified CDR program offer little specificity and provide substantial leeway for CDRs seeking to become qualified. 
According to officials, HHS plans to have its program requirements and structure evolve over time, and a key question is the extent to which this evolutionary process will focus on harnessing the potential of CDRs to promote quality and efficiency. GAO's synthesis of input from experts and from other relevant sources identified several key requirements that would make it more likely that qualified CDRs promote improved quality and efficiency, which HHS's current plans for the program would do little to address. These requirements include directing CDRs to focus data collection on performance measures that address the key opportunities for improvement in quality and efficiency for each CDR's target population and requiring CDRs to demonstrate improvement over time on the quality and efficiency measures that they collect. In addition, effective oversight of these requirements depends on expert judgment to take account of variation among CDRs in their circumstances and opportunities for improvement. Experts indicated that HHS can also help qualified CDRs to improve the quality and efficiency of care provided to Medicare patients by taking actions that could reduce potential barriers to the development of qualified CDRs, such as concerns about complying with privacy regulations and the difficulty of funding CDRs. GAO's synthesis of input from experts and from other relevant sources identified several specific actions that HHS could take. They include developing guidance to clarify federal privacy requirements for physicians participating in CDRs and testing one or more models of shared savings between Medicare and qualified CDRs that achieve reduced Medicare expenditures with improved quality of care. In addition, input from experts and other relevant sources suggests that HHS can take actions to facilitate CDRs' use of health information technology (IT). 
According to CDR officials, some CDRs have developed approaches to electronically capture and transmit large amounts of detailed clinical data from a wide variety of electronic health record (EHR) systems. CDRs could benefit from new IT standard setting that focuses on data elements needed for the measures that CDRs collect. One way HHS can influence whether EHR vendors use IT standards to design EHR systems that are compatible with CDR needs is through its setting of meaningful use requirements in its EHR incentive programs. GAO recommends that HHS (1) focus its requirements for qualified CDRs on improving quality and efficiency, (2) require qualified CDRs to demonstrate improvement in quality and efficiency, (3) draw on expert judgment to monitor qualified CDRs, (4) reduce barriers to the development of qualified CDRs, and (5) include, if feasible, key data elements needed by qualified CDRs in its requirements under the EHR incentive programs. HHS agreed with GAO's recommendations.
The United States controls high performance computers and related components (for example, microprocessors) through the Export Administration Act of 1979 and the implementing Export Administration Regulations. The act authorizes Commerce to require firms to obtain licenses for the export of sensitive items that may be a national security or foreign policy concern. The Departments of Defense, Energy, and State assist Commerce, which administers the act, by reviewing export applications and supporting Commerce in its reviews of export control policy. Since 1993, the President has revised U.S. export control levels for high performance computers seven times, including the revisions announced in January 2002. These revisions have resulted in a nearly thousandfold increase in the export control threshold over the 8-year period; most of these changes have occurred over the last 2 years (see fig. 1). The latest effort to revise the threshold was initiated in response to a letter from the Computer Coalition for Responsible Exports. Beginning in 1996, the executive branch organized countries into four computer “tiers,” with each tier above tier 1 representing a successively higher level of concern related to U.S. national security interests. Current U.S. export control policy places no license requirements on tier-1 or tier-2 countries, primarily those in Western Europe, Japan, Asia, Africa, Latin America, and Central and Eastern Europe. Exports of computers above a specific performance level to tier-3 countries such as China, India, Israel, Pakistan, and Russia require a license. Exports of high performance computers to tier-4 countries such as Iran, Iraq, and North Korea are essentially prohibited. To help inform congressional decision makers about changes in U.S. 
export controls on computers, the National Defense Authorization Act of 1998 requires that the President report to Congress the justification for changing the control threshold for exports of high performance computers to certain sensitive countries. The report must, at a minimum, (1) address the extent to which high performance computers with capabilities between the established level and the newly proposed level of performance are available from foreign countries, (2) address all potential uses of military significance to which high performance computers between the established level and the newly proposed level could be applied, and (3) assess the impact of such uses on U.S. national security interests. In addition, section 1402 of the National Defense Authorization Act of 2000 requires the President to annually assess the cumulative impact of licensed transfers of military-sensitive technologies to countries and entities of concern and possible countermeasures that may be necessary to overcome the use of such technologies. Section 1406 requires the President, in consultation with the Secretaries of Defense and Energy, to conduct a comprehensive review of the national security implications of exporting high performance computers to China with annual updates through 2004. In January 2000, the President delegated the responsibility for producing these reports to the Secretaries of Defense and Energy. As required by law, we reviewed prior justifications for changing the export control thresholds on high performance computers. We found that the changes were not adequately justified. For example, previous reports failed to address all uses of military significance to which high performance computers could be applied at the new thresholds, or the impact of such uses on national security, as required by law. In response to these deficiencies, we recommended that the Secretary of Defense report on the national security threat and proliferation impact of U.S. 
exports of high performance computers to countries of concern. The Department of Commerce stated that the December 2001 decision to raise the control threshold for high performance computer exports was based on thorough analysis. However, we found the justification did not adequately meet the three criteria required by law. First, the report stated that computers based on Intel Corporation’s Itanium processor and capable of performing at the 190,000 MTOPS level would be widely available in early 2002. This assertion was not based on any formal analyses and has proven to be inaccurate. Second, the report provided little analysis of all the potential military uses of these computers. Third, the report did not assess the impact of the uses of these computers on U.S. national security. Although the report asserts that high performance computers would be of limited value to countries of concern not having the demonstrated knowledge and experience in using these computers, the report did not discuss the national security implications of exporting computers to countries of concern, such as China and Russia, that have a demonstrated ability to use them. Further, several laws and a Defense Department directive have mandated other studies that could be used to better understand the national security implications of the export of high performance computers and other technologies; however, the Department of Defense has not completed such studies. The December 2001 report inadequately addressed the first criterion of the National Defense Authorization Act of 1998 in its discussion of the extent to which high performance computers with capabilities between the established level and proposed level of performance are available from other countries. The executive branch’s report stated that the decision to raise the licensing threshold level to 190,000 MTOPS was based on the wide availability by early 2002 of new computer servers containing 32 Intel Corporation Itanium processors. 
Such servers approach a composite theoretical performance of 190,000 MTOPS. Contrary to assertions made in the report, however, Itanium-based computers with performance capabilities in the 190,000 MTOPS range are not widely available. We found that the report’s finding of availability was not based on an independent analysis but rather on information provided by industry. According to Defense officials responsible for producing the report, industry representatives told them that (1) the market would be flooded with 32-way, Itanium-based servers in early 2002, (2) the People’s Republic of China is the long-term market of importance, and (3) U.S. industry is concerned that, if the threshold is not raised, foreign competitors will capture the market. Although not required by law, Commerce could have independently verified industry’s assertions as to the availability of the servers by conducting foreign availability assessments. Foreign availability assessments identify foreign sources of items subject to U.S. national security export controls, such as high performance computers, and are the principal mechanism recognized in the U.S. Export Administration Regulations for determining the availability of controlled items. These assessments determine whether items of comparable quality are available in quantities from non-U.S. sources that would render U.S. export controls on the items ineffective. Commerce officials stated that no foreign availability study was conducted because industry had made its case informally. Instead of conducting a study to establish that these servers would be widely available by early 2002, Commerce stated that it conducted interagency meetings and discussions with industry as well as an analysis of the worldwide availability of high performance computers. Commerce stated that it also reviewed the Internet sites of the computer manufacturers mentioned in the report. 
In commenting on a draft of our report, the Department of Commerce asserted that it completed a market analysis of the worldwide availability of high performance microprocessors and computer clustering capabilities, and held discussions with other executive branch agencies and foreign governments. However, the President’s report did not cite or include this market analysis, nor did the department provide additional information to document this completed analysis in response to our request. We reviewed the documentation that Commerce obtained from the Internet and other sources and found little additional evidence about the availability of 32-way, Itanium-based servers beyond the information contained in the Computer Coalition for Responsible Exports’ August 2001 letter requesting a change in the export control threshold. The information provided did not indicate that the 10 companies listed in the President’s report planned to introduce 32-way servers or that the servers would be widely available in early 2002. We also contacted the companies listed in the report and found that, as of May 2002, only one of the companies—Unisys Corporation—was producing a 32-way, Itanium-based server (see table 1). Information obtained from the companies listed in the President’s report contradicts the assertion that 32-way, Itanium-based servers would be widely available in early 2002. Representatives we interviewed stated that their companies would not introduce these servers in 2002 or had no plans to manufacture these servers due to the lack of software and a market for such powerful servers. An official from a leading information technology market research firm stated that Itanium-based technology is far too new to allow a reasonable determination of its impact on the server market. Furthermore, according to the research firm’s information, no 32-way, Itanium-based servers were shipped in the first quarter of 2002.
Finally, the report noted that a significant market exists for high-end servers of up to 32 processors. However, Commerce data indicate that the market for computers with performance capabilities in the 190,000 MTOPS range in countries of concern is small and that the loss of sales in these countries should not materially affect U.S. manufacturers. In 2001, Commerce received 16 export license applications for computers with performance capabilities at or above 85,000 MTOPS; all but one were approved. Six of the approved applications were for sales to China. Moreover, Japan—the other leading exporter of high performance computers—did not sell any of these systems to China, Russia, or India in 2001, according to the Department of Defense. As in previous reports used to justify changes in the control threshold, the December 2001 report did not meet the second criterion of the National Defense Authorization Act of 1998: to address all potential uses of military significance to which computers with performance capabilities between the old control threshold and the new threshold could be applied. The report stated that the U.S. government uses computers in virtually all military and national security applications, including the design, development, and production of weapon systems, military operations, cryptanalysis, and nuclear weapons design and simulation. Defense officials to whom we spoke stated that Defense does not maintain an inventory of all U.S. national security-related computer applications, that the value of such a list is questionable, and that it may be impossible to construct such a list. The President’s report provides little information about which military applications can be run on computers with capabilities between the old and new threshold. The report pointed out that the majority of U.S. military and national security applications are run on computers below 190,000 MTOPS.
Using information provided by Defense, we found that computers operating at or below 190,000 MTOPS meet 98 percent of Defense’s military computational requirements. Defense officials responsible for preparing the report said that the level of control selected—190,000 MTOPS—was driven by the market and what the administration believes it can control, not by the military and national security applications that could be run on high performance computers. The President’s report did not discuss the impact on U.S. national security of countries such as Russia and China obtaining high performance computing power up to the new control threshold, as required by law. Such a national security assessment has been a long-standing executive branch requirement. For example, section 1402 of the National Defense Authorization Act of 2000 requires the President to annually assess the cumulative impact of licensed transfers of military-sensitive technology to countries and entities of concern and to identify possible countermeasures that may be necessary to overcome the use of such technologies. In addition, section 1406 of the act requires assessments of the national security implications of exporting high performance computers to China with annual updates through 2004. Similarly, a 1985 Department of Defense directive requires annual assessments of the total effects of technology transfers. We found that Defense had not completed the studies required by the law or its directive. Moreover, Defense has not yet implemented our prior recommendation to report on the national security threat and proliferation impact of U.S. exports of high performance computers to countries of concern. Although the Departments of Defense and Commerce stated that they are already engaged in reviews of similar issues, the agencies could not furnish plans or other documentation on how they are implementing our recommendation.
Instead of addressing the national security implications associated with the export of high performance computers, the President’s report simply stated that high performance computers would be of little or no value to countries of concern not having the requisite knowledge and experience in using these computers to advance their military capabilities. However, the report did not discuss the usefulness of these computers to countries such as China and Russia that have demonstrated the ability to use high performance computers. The report’s assertion that countries of concern will not benefit from the acquisition of high performance computers also contradicts statements made in other reports published by the executive branch and statements made by Defense officials responsible for producing the President’s report, as indicated in the following examples. Reports published in 2000 that were used to justify previous increases in the export control threshold for high performance computers stated that Russia and China have the expertise necessary to use these computers for national security applications such as the construction of submarines, advanced aircraft, composite materials, or a variety of other devices. A 2001 report by the Department of Energy’s National Nuclear Security Administration concluded that the availability of overall computing power to a nuclear weapons design program is critical. Acquisition of computers with higher performance levels allows a nuclear weapons program to conduct studies faster and enables studies that cannot be conducted on systems of lower performance, thus shortening the time for design and development to full-scale testing. The report further concluded that computers with an effective performance of 10,000 MTOPS or greater would be of significant use to China’s designers in examining likely gaps in their nuclear weapons programs.
A 2001 executive branch assessment concluded that the increased use of high performance computers in the weapons of mass destruction programs of countries of concern could severely complicate U.S. efforts to monitor and assess such programs. The use of these computers can reduce and even eliminate many traditional observable weapons production activities such as large manufacturing operations and live weapons tests. According to the Defense officials responsible for preparing the December 2001 report, the level of computing power used to solve a particular problem is based on the level of computing power available. If more powerful computers are available, they are used. The greater the power of the available computer, the faster the problem can be solved. Consequently, the computers exported under the new threshold will allow countries of concern to solve more complex problems in weapons systems design more quickly. Although not required to do so by law, the President’s December 2001 report also did not address several key issues related to the decision to raise the threshold. These issues include the ability of countries of concern to construct high performance computers on their own, U.S. government difficulties in monitoring the end-use of computers exported to countries of concern, the use of MTOPS as a measure of individual computer performance, and the impact of establishing a new licensing threshold outside the Wassenaar Arrangement process. The report did not acknowledge the difficulty that some countries of concern have encountered in clustering smaller machines together to achieve greater computing power. However, as we have reported before, it may be more difficult to operate custom-built clustered systems than to build them, according to experts.
For example, without vendor-supplied software to automate key functions on a clustered system, everything must be done manually, making computing labor intensive and less reliable than if it were performed on a vendor-manufactured system. With the higher thresholds, countries of concern will not have to rely on more inefficient clustered systems to obtain greater computing capabilities. The report did not address the difficulty that the U.S. government has had in effectively monitoring the high performance computers that are exported to countries of concern. Monitoring exported equipment for proper use is a key element of the U.S. export licensing process. Approved export licenses for high performance computers typically stipulate conditions, such as where the computer must be located and how it should be used. The conditions are designed to deter the end user from using the computer inappropriately or from transferring the computer to another location. Monitoring of these conditions is to be accomplished through required end-use checks conducted by U.S. government personnel. In our prior report, we found that U.S. government personnel in China tasked with this job have been unable to conduct many checks. In testimony before the U.S.-China Security Review Commission on January 17, 2002, Commerce’s Assistant Secretary for Export Enforcement stated that the Chinese government dictates the schedule for conducting end-use checks. As a result, more than 700 outstanding checks remain to be completed, according to Commerce. The inadequacies of the President’s report are compounded by the continued use of MTOPS to determine the performance capabilities of computers. Although industry and government no longer consider MTOPS a valid measure of computer performance, the executive branch continues to use it. 
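The efficiency penalty of custom-built clusters described above can be sketched numerically. In the sketch below, a cluster's effective power is its theoretical aggregate rating discounted by an efficiency factor standing in for the labor-intensive manual coordination of nodes; the node count, per-node MTOPS rating, and penalty factor are illustrative assumptions, not figures from this report.

```python
def cluster_mtops(nodes, per_node_mtops, efficiency):
    """Effective MTOPS of a custom-built cluster.

    efficiency < 1.0 stands in for the overhead of coordinating nodes
    manually, without vendor-supplied clustering software.
    """
    return nodes * per_node_mtops * efficiency

# Hypothetical configuration: 64 nodes rated at 3,000 MTOPS each.
theoretical = cluster_mtops(64, 3_000, efficiency=1.0)  # aggregate rating on paper
effective = cluster_mtops(64, 3_000, efficiency=0.5)    # far less in practice
print(theoretical, effective)
```

The point of the comparison is the one the report makes: raising the licensing threshold lets countries of concern buy vendor-manufactured systems outright instead of paying this efficiency penalty.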
In our 2000 report on high performance computers, we recommended that executive branch agencies comprehensively assess ways to address the shortcomings of computer export controls, including the development of new performance measures. The President’s December 2001 report stated that the executive branch is conducting a comprehensive review of export controls on computer hardware. According to the report, this interagency review will, among other things, attempt to identify a controllable class of high-end computer systems of greater military sensitivity and alternative metrics for controlling such systems. However, Defense officials stated that the study has no deadline and no formal terms of reference. The report did not acknowledge the multilateral process established under the Wassenaar Arrangement—a forum of 33 countries established in 1996 to reach multilateral agreements on which dual-use goods merit special scrutiny and reporting. Changes to control thresholds on dual-use goods are coordinated through the Wassenaar Arrangement. The arrangement uses a consensus-based approach to establish control thresholds on these goods. The United States unilaterally raised the MTOPS licensing threshold to 190,000 without first obtaining the consensus of other Wassenaar Arrangement members. Due to actions taken by the United States, the U.S. licensing threshold is now 190,000 MTOPS, while the control thresholds of other Wassenaar member states remain at 28,000 MTOPS. Consequently, U.S. exporters have a competitive advantage over their international competitors because U.S. exporters are not required to obtain an export license for a wider range of computers. According to State Department officials, the unilateral U.S. action may complicate future efforts to reach consensus in the Wassenaar forum on other important export control issues.
The report justifying the decision to decontrol high performance computers was not based on a thorough analysis and did not fully address the requirements of the National Defense Authorization Act of 1998. Since the report’s conclusions are based on inaccurate information provided by the computer industry and an inadequate assessment of national security issues, the decision to raise the export control threshold is analytically weak and appears to be premature, given market conditions. By providing greater access to more powerful computers through the removal of any export-licensing requirement, the United States could allow countries of concern to pursue computer applications having military uses with a greater degree of rigor and reliability. A more thorough analysis of the foreign availability and the national security impact of transferring technology to countries of concern would have provided a better analytical basis for making changes in the control threshold. Given the level of high performance computing power that the United States approves for export, such studies of the cumulative effect of computer and related technology exports will be increasingly important in determining the impact of such exports on U.S. national security and in making future decisions about adjusting export control thresholds. In our draft report, we recommended that the Departments of Commerce, Defense, and State comply with existing statutes and complete a thorough assessment of the foreign availability, military significance, and the national security impact of changes to high performance computer controls. Prior GAO reports have made similar recommendations. Since the departments have not responded to our earlier recommendations on this issue or clearly indicated whether they agreed with the recommendations made in our draft report, we have included a Matter for Congressional Consideration. 
To help ensure that a thorough assessment of these issues is completed, Congress may wish to consider requiring that the executive branch fully comply with the provisions of the National Defense Authorization Acts of 1998 and 2000 before the executive branch alters or eliminates the export control thresholds for high performance computers. We received written comments on a draft of this report from the Departments of Commerce, Defense, and State, which are reprinted in appendixes I, II, and III. The Commerce Department disagreed with our findings and conclusions and said that the executive branch conducted a thorough review of U.S. export controls on high performance computers prior to the President’s January 2002 decision to raise the licensing thresholds. Commerce stated that this review included significant input from all relevant agencies, consultations with other Wassenaar Arrangement partners, as well as an analysis of the worldwide availability of high performance computers and computer clustering. Commerce also said the United States continues to seek a means to control computers of the greatest strategic importance. The Department of Defense said it is conducting a study of computer export controls consistent with our recommendations and the requirements of law. The Department of State said it agreed that several shortcomings exist in the executive branch’s justification to raise the licensing thresholds for high performance computers. While agreeing that there were some gaps in the study, State said it did not believe that these shortcomings invalidated a key finding that high performance computers can no longer be controlled effectively, due to advances in clustering computers together to achieve higher capabilities. We have added information to the report to more fully describe the information that Commerce gathered from industry. However, we disagree that the administration conducted a thorough review of U.S. 
export controls prior to the President’s January 2002 decision to raise the licensing thresholds. As noted in our report, the President’s justification focused on only one of three elements required by law—the availability of high performance computers. Additionally, the availability assessment was not adequate since only 1 of 10 companies capable of producing high performance computers planned to market such computers in 2002. As noted in Commerce Department data, the current market for computers at the 190,000 MTOPS level is relatively small and is not developing as quickly as anticipated. Accordingly, the disparity between market conditions and industry’s assertions about the widespread availability of such computers should have prompted Commerce to conduct an independent foreign availability assessment as allowed by the Export Administration Regulations. However, Commerce did not conduct this important analysis; senior Commerce officials informed GAO that the department did not have the resources to complete such assessments. The President’s report did not fully address the two remaining elements required by law—identifying all potential uses of military significance and the national security implications of high performance computer exports. As noted in our report, Defense Department information shows that computers operating at or below 190,000 MTOPS meet 98 percent of Defense’s military computational requirements. Therefore, the President’s justification to raise the MTOPS licensing threshold should have included an assessment of the effects on national security. The State Department’s comments clearly articulated the executive branch’s position on high performance computers—“high performance computers can no longer be controlled effectively” because high performance computing capacity is widely available.
While our report found that State’s assertion on availability is not supported by current market conditions, State’s comments demonstrate that perceived market conditions and related trends in computer clustering served as the primary basis for the decision to raise the control threshold for high performance computers. Regarding State’s comments on computer clustering, we note that State’s position contrasts with an October 2001 Department of Defense analysis that concluded that a clustered system does not provide capabilities comparable to those of a stand-alone high performance computer. The State and Commerce Departments cited no analysis as to how these powerful computers could enhance the military capabilities of countries of concern or affect U.S. national security interests. These important analyses are required by law but not addressed in the President’s report. To assess the President’s justifications for raising the export control threshold from 85,000 MTOPS to 190,000 MTOPS, we reviewed the statutory requirements related to the President’s justification and the regulations that pertain to the export of high performance computers. Further, we reviewed documentation used as the basis for the report’s assertions. The documentation included the letter and associated attachments addressed to Commerce from the Computer Coalition for Responsible Exports that prompted the change in the threshold. We also examined information available on the Internet about the computer server products offered by the 10 manufacturers mentioned in the President’s report and contacted the manufacturers to obtain additional information. The information obtained from the manufacturers was supplemented with information obtained from a leading information technology industry research organization, including reports pertaining to the availability of Intel Itanium-based servers. Finally, we interviewed Commerce, Defense, and State officials responsible for producing the report.
The National Security Council, which plays a key role in coordinating the interagency process for changing export controls on high performance computers, declined to discuss the President’s report with us. We performed our work from February 2002 through July 2002 in accordance with generally accepted government auditing standards. We are sending this report to interested congressional committees and the Secretaries of Commerce, Defense, and State. We will also make copies available to other interested parties on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8979 if you or your staff have any questions concerning this report. Another GAO contact and staff acknowledgments are listed in appendix IV. The following are GAO’s comments on the Department of Commerce’s letter dated July 16, 2002. 1. We disagree that the Commerce Department conducted a thorough review of U.S. export controls prior to the President’s January 2002 decision to raise the licensing thresholds. As noted in our report, the President’s justification focused on only one of three elements required by law—the availability of high performance computers. The justification did not adequately identify uses of military significance or the national security impact of changing the thresholds. 2. We agree that this raises a legal issue, which we mentioned in our testimony on high performance computers on March 15, 2001. Once a new measure is decided upon, the executive branch could work with Congress to allow use of other measures. Section 221 of H.R. 2581 would repeal the National Defense Authorization Act provisions dealing with export controls on high performance computers. These controls are expressed in MTOPS. 3. We agree that countries of concern can cluster or link together lower performance computers to achieve higher computing capabilities.
However, clustering still comes at a cost in terms of speed and difficulties in operating the clustered systems. Raising the control threshold to 190,000 MTOPS effectively eliminates these costs and allows countries of concern to easily purchase high performance computers. 4. Defense Department officials stated that high performance computers performing at or below 190,000 MTOPS meet 98 percent of the Department of Defense’s computational requirements. Therefore, it is difficult to understand Commerce’s assertion that the United States continues to seek a means to control computers of the greatest strategic importance. 5. This comment acknowledges that Commerce used market conditions as the sole criterion for changing the control thresholds for high performance computers. The act also requires an assessment of how these powerful computers could enhance the military capabilities of countries of concern or affect U.S. national security interests. These topics were not addressed in the President’s report. 6. We disagree. The practical effect of raising the U.S. license exception level to 190,000 MTOPS is to raise the control threshold to this level since computers below this level (190,000 MTOPS) do not require an export license. Further, according to Commerce officials, not all Wassenaar members have license exception provisions in their regulations. Consequently, a disparity exists between U.S. licensing requirements and the control thresholds used by other Wassenaar member countries, as we noted in our report. Finally, according to State Department officials and official documents we reviewed, other Wassenaar members complained that the United States unilaterally increased its export control threshold by raising the licensing exception level to 190,000 MTOPS. 7. 
Commerce and Defense officials responsible for preparing the President’s December 2001 report confirmed that the effort to formally change the licensing threshold and prepare the justification was prompted by the letter from industry. 8. Commerce data indicate that there are more than 700 outstanding post-shipment verifications that have not been conducted in China. 9. When implemented, the Department of Commerce’s effort to “red flag” persons for which it has not been able to conduct prelicense checks or post-shipment verifications may prove to be a useful first step in improving its ability to counter problems associated with conducting checks and verifications. The following are GAO’s comments on the letter from the Department of State dated July 19, 2002. 1. We are encouraged that executive branch agencies continue to explore alternatives to the current MTOPS metric. We believe the results of this analysis should be shared with Congress. 2. We agree that some countries subject to the existing controls may be among the most adept at developing clustering software and technology. However, this point was not included in the President’s report. The report simply stated that the impact of clustering will be assessed in the course of the executive branch’s review of computer export controls. 3. While computer clustering complicates efforts to maintain effective export controls, this point was not used as the basis for raising the export control thresholds for high performance computers. Also, the executive branch continues to debate the extent to which clustering has rendered the current export control system ineffective. An October 2001 Defense study found that clustered systems do not match the overall performance capabilities of the stand-alone systems supplied by U.S. vendors. This study concluded that foreign countries’ use of clustered systems should not be used as a justification for decontrolling all classes of high performance computers.
In addition to the individual named above, David M. Bruno, Claude T. Adrien, and Lynn Cothern made key contributions to this report.
For national security and foreign policy reasons, U.S. export control policy seeks to balance economic interests in promoting high technology exports with national security interests in maintaining a military advantage in high performance computers over potential adversaries. In January 2002, the President announced that the licensing threshold for computer exports to countries such as China, India, and Russia would increase from 85,000 millions of theoretical operations per second (MTOPS) to 190,000 MTOPS. The report justifying the changes in control thresholds for high performance computers focused on the availability of such computers. However, the justification did not fully address the requirements of the National Defense Authorization Act of 1998. The December 2001 report also did not address several key issues related to the decision to raise the threshold: (1) the unrestricted export of computers with performance capabilities between the old and new thresholds will allow countries of concern to obtain computers that they have had difficulty constructing on their own, (2) the United States is unable to monitor the end-uses of many of the computers it exports, and (3) the report does not acknowledge the multilateral process used to make prior changes in high performance computer thresholds.
While the full extent of MTBE contamination is unknown, most states reported in the EPA-sponsored survey that they are finding the contaminant in groundwater from releases at tank sites, and some are beginning to find it in their drinking water sources. The extent to which the contaminant poses a health risk is uncertain, however, in part because EPA does not yet have the data necessary to determine MTBE’s health effects. Detecting MTBE from a release typically does not influence the type of cleanup method selected, but could increase the time and cost of the cleanup, according to a number of states. Portions of 17 states and the District of Columbia currently use gasoline potentially containing the additive MTBE to limit air pollution (see figure 1). However, MTBE is being detected nationwide because, among other things, it had been used as an octane enhancer in gasoline in the past and because the pipes and trucks used to carry gasoline throughout the nation have been cross-contaminated with the substance. Forty-four states reported in the EPA-sponsored survey that they sample groundwater at leaking tank sites and test it for MTBE. Furthermore, 35 states reported that they find MTBE in groundwater at least 20 percent of the time they sample for it, and 24 states said that they find it at least 60 percent of the time. States are not only finding MTBE at tank sites with reported releases—half of the states reported finding it at tank sites even when there was no documented release, although they did not know the number of these cases. About half of the states also reported finding MTBE that they could not attribute to a leaking tank and suspected that it came from other sources, such as above-ground tanks used to store fuel.
The extent of MTBE contamination may be understated because some tank releases go undetected and because only 19 states said that they are taking any extra steps to make sure that MTBE is not migrating further from a tank site than other contaminants when a release has been detected. MTBE is less likely to cling to soil than other gasoline components and dissolves more easily in water, allowing it to travel faster, farther, and sometimes deeper. Therefore, parties might have to use more test wells around a leaking tank to determine if and where MTBE is present. If states do not conduct the extra tests, they may not detect the MTBE. Some of the states that have identified MTBE contamination have also found that it reached drinking water sources. More states may not have reported finding MTBE in part because only 24 states in the EPA-sponsored survey said that their drinking water program offices routinely analyzed drinking water sources for MTBE, while another 24 said that their offices were not conducting these analyses. Although a number of states were not sure how many public or private drinking water wells had been contaminated by MTBE, 11 states said that at least 10 public wells had been contaminated at the time of the survey, and 15 states reported that 10 private wells had been closed. The Maryland Department of the Environment reported that MTBE was found in low concentrations in about 100 of more than 1,200 water systems tested. In contrast, some communities in California, Kansas, and Maine have had more extensive problems with contaminated groundwater. For example, Santa Monica, California, closed seven wells supplying 50 percent of the city’s water. At the national level, the U.S. Geological Survey (USGS) and EPA have conducted some water-monitoring efforts, but have yet to find high concentrations of MTBE in many drinking water sources. According to a USGS study, MTBE was detected in generally low concentrations in 14 percent of surface water sources.
Another USGS study points out, however, that MTBE was 10 times more likely to be found in areas that use it as a fuel additive to reduce pollution. A third USGS study, done in cooperation with EPA and issued in 2001, examined monitoring data from over 2,000 randomly selected community water systems in the northeast and mid-Atlantic regions and reported that MTBE was detected in about 9 percent of the systems that analyzed samples for MTBE. Finally, EPA has completed the first year of a 3-year effort—under the recently implemented Unregulated Contaminant Monitoring Rule—to have all large water systems (serving populations of 10,000 or more), as well as selected small public water systems (serving populations of 3,000 or less), test their water for MTBE. Of the one-third of the systems required to test in the first year, 1 of the 131 large systems and 3 of the 283 small systems detected the substance. An interagency assessment of potential health risks associated with fuel additives to gasoline, primarily MTBE, concluded that while available data did not fully determine risks, MTBE should be regarded as a potential carcinogenic risk to humans. However, the extent to which MTBE may be present in drinking water at concentrations high enough to jeopardize public health is unknown. Because MTBE has a bad taste and odor at relatively low concentrations, people may not be able to tolerate drinking contaminated water in large enough quantities to pose a health risk. On the other hand, some people may become desensitized to the taste and smell and could end up drinking MTBE for years in their well water, according to the EPA program manager. EPA has efforts underway to fill in some of the data gaps on the health effects of MTBE and its occurrence in drinking water supplies. Additional research and water quality monitoring must be completed before EPA can determine whether a water quality standard—an enforceable limit on the concentration of MTBE allowed in drinking water—is warranted. 
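The first-year monitoring results cited above imply very low detection rates among the systems tested. A quick back-of-the-envelope check, using only the figures stated in the text (1 of the 131 large systems and 3 of the 283 small systems):

```python
# Detection rates implied by the first-year Unregulated Contaminant
# Monitoring Rule results cited in the text.
def detection_rate(detections: int, systems_tested: int) -> float:
    """Return the share of tested systems that detected MTBE."""
    return detections / systems_tested

large = detection_rate(1, 131)   # large systems (serving 10,000 or more)
small = detection_rate(3, 283)   # selected small systems

print(f"large systems: {large:.1%}")   # about 0.8%
print(f"small systems: {small:.1%}")   # about 1.1%
```

Both rates are around 1 percent, consistent with the report's observation that national monitoring efforts have yet to find high concentrations of MTBE in many drinking water sources.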
EPA has issued an advisory suggesting that drinking water should not contain MTBE in concentrations greater than 20 to 40 parts per billion, based on taste and odor concerns. EPA is considering taking further steps to regulate MTBE, but notes that establishing a federally enforceable standard could take about 10 years. While the potential health risks of MTBE are uncertain, 14 states—9 of which are not required to use a fuel additive to limit air pollution in certain areas—have partially or completely banned the use of MTBE within their boundaries (see figure 2). In addition, seven states reported in the December 2000 EPA-sponsored survey that they had established their own health-based primary drinking water standard for MTBE, as shown in figure 3. Six of these states currently use fuel additives to limit air pollution and the seventh state voluntarily used such additives until 1999. Another five states reported establishing a secondary standard to limit the allowable amount of MTBE in drinking water. These standards vary considerably, however, with concentrations ranging from 5 to 70 parts per billion. According to the EPA-sponsored survey, 37 states said that finding gasoline, or its components of concern, in soil or groundwater at a tank site is the primary driver of cleanup activities, not the presence of MTBE. In other words, the methods used to clean up gasoline can also be used to address MTBE contamination. These proven cleanup technologies include pumping and treating groundwater at its source, treating the water at its point of use by running it through a filter, or using a process known as air sparging (injecting air into the contaminated area to volatilize and extract MTBE). Letting the contaminant naturally break down over time—known as natural attenuation—may not be as effective as it is with other components of gasoline because MTBE persists longer in soil and groundwater. However, addressing MTBE could add time and costs to cleanups. 
According to the EPA-sponsored survey, 16 states reported cost increases as a result of MTBE cleanup, most of less than 20 percent; 5 states reported that their costs had doubled. States spent, on average, about $88,000 addressing releases at each tank site in fiscal year 2001. Nineteen states indicated that it could cost more to test for MTBE because they take additional steps to ensure that this contaminant is not migrating beyond other contaminants in a release. Several states reported that their laboratories charged $10 to $50 more per sample to analyze for MTBE. In addition, many of the 16 states that cited higher cleanup costs for MTBE attributed these increases to such factors as longer plumes and increased cleanup time. Finally, the discovery of MTBE can increase costs because filters used to remove MTBE from water have to be changed more frequently. States reported to EPA that as of the end of 2001, they had completed cleanups of 64 percent (267,969) of the 416,702 known releases at tank sites and had begun some type of cleanup action for another 26 percent (109,486), as figure 4 illustrates. Because states typically set priorities for their cleanups by first addressing those releases that pose the most risk, states may have already begun to clean up some of the worst releases to date. However, EPA tank program managers cautioned that some of the many cleanups that are underway may still be in their early stages because states have varying criteria for “underway.” For example, California reports a cleanup is underway as soon as a release is reported, even if no work has begun. In addition, states still have to address the remaining 39,247 known releases (9 percent) where cleanup is not underway, either by ensuring that cleanup begins or by determining that it is not needed because the releases do not pose a risk. Figure 5 illustrates the remaining cleanup workload for known releases in each state and the District of Columbia. 
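The workload figures reported above are internally consistent; a short sketch confirms that the three categories sum to the total and reproduce the reported shares (all numbers taken directly from the text):

```python
# Known-release cleanup workload as reported by states to EPA at the
# end of 2001 (figures from the text).
TOTAL = 416_702
completed = 267_969   # cleanups completed (reported as 64 percent)
underway = 109_486    # some cleanup action begun (reported as 26 percent)
remaining = 39_247    # cleanup not yet underway (reported as 9 percent)

# The three categories account for every known release.
assert completed + underway + remaining == TOTAL

for label, count in [("completed", completed),
                     ("underway", underway),
                     ("remaining", remaining)]:
    print(f"{label}: {count / TOTAL:.0%}")
# prints 64%, 26%, and 9%, matching the report
```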
As the figure shows, while states have made progress, seven states still have more than 5,000 releases that they have not fully addressed. Most of the 13 states we contacted cited a lack of staff as a barrier to achieving more cleanups. For example, the May 2001 Vermont survey of state funding programs indicated that, on average across the states, each staff person was responsible for overseeing about 130 tank sites during that year. In addition to this known workload, states most likely will continue to face a potentially large but unknown future cleanup workload for a number of reasons: In a June 2000 report to the Congress, EPA estimated that as many as 200,000 tanks nationwide may be unregistered, abandoned, or both, and have not been assessed for leaks. Furthermore, even though many owners chose to close their tanks rather than upgrade them with leak detection and prevention equipment as federally required, tens of thousands of tanks nationwide are still empty and inactive, and have not been permanently closed, as we previously reported. Consequently, any leaks from these tanks may not have been identified. We also reported that an estimated 200,000 or more active tanks were not being properly operated or maintained, increasing the chance of a spill or leak. For example, 15 states reported that leak detection equipment was frequently turned off or improperly maintained. In addition, we reported that many states do not inspect their tanks frequently enough to ensure that they are not leaking and that known releases are reported. Only 19 states were physically inspecting all of their tanks at least once every 3 years—the minimum EPA considers necessary for effective tank monitoring. In addition, 22 states were not inspecting all of their tanks on any regular basis. 
While the number of leaks should decrease in the future—because all active tanks should now have leak detection and prevention equipment—we previously reported that 14 states traced newly discovered leaks to upgraded tanks and 20 states did not know whether their upgraded tanks leaked. Finally, 10 states reported in the EPA-sponsored survey that they had reopened a small number of completed cleanups because MTBE had been subsequently detected. If more states follow suit, the future cleanup workload will increase, although the size of this workload is unknown. In addition, states may be responsible for the costs of these reopened cleanups because tank owners and operators are not required to maintain financial responsibility for tanks that were properly cleaned up or closed. States have relied primarily on their own funding programs and private parties to pay for cleanups, using the relatively small federal trust fund grants they receive for staff, program administration, and to a lesser extent, cleanups. States’ reliance on private and federal funding could increase in the future if they end their funding programs and begin to address the problem of abandoned tanks with no financially viable owner. In creating the Underground Storage Tank program, the Congress expected tank owners and operators to take financial responsibility for cleaning up contamination from their tanks, correcting environmental damage, and compensating third parties for any injuries. Tank owners and operators were to demonstrate that they had the financial resources to cover potential cleanup liabilities. Initially, private insurers were hesitant to take on the risks of providing liability coverage to owners and operators of underground storage tank systems, so many states created their own financial assurance funds. 
These state funds could be used to cover the financial responsibilities of owners and operators for site cleanup as long as the state funds met the federal financial responsibility requirements. Forty-seven states established such programs, funding them most often through a gasoline tax, an annual tank fee, or both, rather than through state appropriations. The remaining three states relied on owners and operators to locate suitable insurance, now more readily available, or other financial resources. Under many state programs, owners or operators pay for the cleanup and seek reimbursement for a portion of the cleanup costs from the state. Six of the 13 states we contacted cap the amount of reimbursements and expect tank owners and operators to be financially liable for the remaining costs. In the May 2001 Vermont survey of state funding programs, states reported spending a cumulative $6.2 billion from their funds since their programs began (13 states did not report their costs). The amount of private funds spent on cleanups is unknown. At the time of the survey, 36 states reported having adequate funding to cover their current costs, but 11 other states said that they were about $625 million short of the funds necessary to cover known claims. Program managers in five of the 13 states we contacted said that their state funds were stable. In addition, nine states reported that eligibility for their programs had ended—meaning they would no longer accept any reimbursement claims for new releases—and another seven states expected eligibility to end by 2026. Furthermore, the program fees used to replenish state programs had expired in 1 state and were expected to expire in another 12 states within the next decade. As a result of these provisions, tank owners and operators would be responsible for cleanup costs with no state funding support. 
States have been using federal grants from the Leaking Underground Storage Tank Trust Fund primarily to pay for staff to oversee cleanups and pursue owners and operators so that they clean up their sites, according to the EPA program manager. States cannot use these federal funds to clean up releases when an owner or operator can pay. States spent $662.5 million in federal trust fund dollars from fiscal year 1987 through fiscal year 2001, roughly 10 percent of the expenditures from states’ funds during the same period. States used $19.5 million, or about a third, of the $58.7 million they received in fiscal year 2001 grants on cleanup (see figure 6). Of the 13 states we contacted, 7 said that their programs rely on the federal grants. In contrast, a program manager in Florida said that the state’s program does not depend on federal grants because it is a small amount of money compared with the amount coming from the state fund. Some states use their federal funds for staffing costs. However, a Maryland program official pointed out that the size of the annual federal grants to states has not kept pace with the salary and other costs they must cover for staff. An Indiana program official attributed a backlog of 4,000 cleanups at one point in the state’s program to a lack of federal funding that could be used to pay for additional staff. States may be using their federal trust fund grants to pay for staff because the use of these federal funds is more restricted than that of state funds, which can be used to reimburse tank owners for their cleanup costs, among other things. Six states have used an additional funding source that receives federal support to cover some cleanup costs, namely, their Clean Water State Revolving Funds. States get federal seed money to initiate and maintain this type of fund. Eligible parties can apply for loans under the fund and have used them to cover a variety of leak prevention and cleanup projects. 
According to the EPA, the six states using this vehicle have made a total of $84 million in loans for tank cleanups through June 2000. Program managers in 9 of the 13 states we contacted said that they did not expect to use their revolving loan fund for tank cleanups. In addition to the federal grants and loan funds, some states may look to the federal government in the future to help them clean up those abandoned tanks that pose health risks when financially viable parties cannot be identified to pay for cleanups. States admit that they do not often identify releases until they are closing or removing tanks, meaning that EPA and the states might inadvertently be underestimating the risks and cleanup workload that abandoned tanks pose. States may seek additional federal assistance to address abandoned tanks if state funding programs expire or are depleted. As of January 2002, states can access one new source of federal funding for abandoned tanks, made possible by the Small Business Liability Relief and Brownfields Revitalization Act. Under the act, the Congress authorized up to $50 million annually to clean up properties that may be contaminated by a petroleum release, including abandoned tanks. To respond to your questions, we primarily analyzed data (1) that states reported to EPA on the status of tank releases, (2) from the December 2000 report on the EPA-sponsored survey of state tank programs, and (3) from the May 2001 Vermont survey of state cleanup funding programs. In addition, we contacted 13 state tank program managers to discuss their cleanup workload, their concerns with MTBE, and their approach for funding cleanups. We selected these states because they had addressed the largest number of releases, had the largest backlog, or both. We also met with EPA tank program managers to discuss cleanup efforts. We performed our work from April to May 2002 in accordance with generally accepted government auditing standards.
To help limit air pollution, about a third of the states use gasoline that contains methyl tertiary butyl ether (MTBE), which burns cleaner. However, MTBE has migrated into wells and groundwater from leaking underground tanks used to store gasoline. The Environmental Protection Agency (EPA) carries out its responsibility through the Underground Storage Tank Program, working with the states to ensure that tanks do not leak and, if they do, that the contamination is cleaned up. To help states cover the program costs, Congress annually provides grants from a trust fund it created in 1986. Most of the 50 states have reported finding MTBE when they discover gasoline contamination at their tank sites and, increasingly, in their groundwater, surface water, and drinking water. States have made progress in addressing the releases they have discovered, including MTBE contamination, but face a continuing and substantial cleanup workload. States typically depend on tank owners or operators to pay some of the cleanup costs, cover the remainder with their own funding programs, and use relatively small federal trust fund grants to pay staff to oversee cleanups and administer their programs.
The U.S. government’s control over the export of defense and dual-use items is intended to ensure that U.S. interests are protected in accordance with the Arms Export Control Act and the Export Administration Act. Responsibility for these controls is divided primarily between two departments—State for defense items and Commerce for dual-use items (see table 1)—with support for enforcement activities primarily from Commerce, through its Bureau of Industry and Security’s Office of Export Enforcement (OEE); Department of Homeland Security, through its Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE); and Justice, through the FBI and the U.S. Attorneys Office. State and Commerce require exporters to identify items that are on the departments’ control lists and to obtain license authorization from the appropriate department to export these items, unless an exemption applies. Exemptions are permitted under various circumstances, such as allowing for the export of certain items to Canada without a license. Many dual-use items are exempt from licensing requirements. While items can be exempt from licensing requirements, they are still subject to U.S. export control laws. Because exporters are responsible for complying with export control laws and regulations, regulatory and investigative enforcement agencies conduct outreach to educate exporters on these laws and regulations. When shipping controlled items, exporters are required to electronically notify CBP officials at the port where the item will be exported, including information on the quantity and value of the shipment, the issued export license number, or an indication that the item is exempt from licensing requirements. Export enforcement aims to ensure U.S.-controlled items do not fall into the wrong hands and to limit the possibility that illegal exports will erode U.S. military advantage. 
Export enforcement involves inspecting items to be shipped, investigating potential violations of export control laws, and punishing export control violators. When inspectors, investigators, and prosecutors have questions about whether an item is controlled and requires a license, they request a license determination. CBP and ICE request license determinations through ICE’s Exodus Command Center, which refers the request to State and Commerce; OEE requests determinations directly from Commerce licensing officers. Some FBI agents request license determinations through the Exodus Command Center, while others make such requests directly to State or Commerce. According to Department of Justice data, in fiscal year 2005 more than 40 individuals or companies were convicted of over 100 criminal violations of export control laws. State reported over $35 million and Commerce reported $6.8 million in administrative fines and penalties for fiscal year 2005. See appendix II for a list of selected export control cases. For more than a decade, we have reported on a number of weaknesses and vulnerabilities in the U.S. export control system and made numerous recommendations, several of which have not been implemented. For example, in September 2002, we reported that Commerce improperly classified some State-controlled items as Commerce-controlled, increasing the risk that defense items would be exported without the proper level of review and control to protect national interests. In June 2006, we reported that this condition remains unchanged and that Commerce has not taken the corrective actions that we recommended in 2002. We have also reported on long-standing problems in enforcement, including poor cooperation among the investigative agencies. Enforcing U.S. export control laws and regulations is inherently complex. 
Multiple agencies are involved in enforcement and carry out various activities, including inspecting shipments, investigating potential export control violations, and taking criminal or administrative punitive actions against violators of export control laws and regulations. Authorities for export control enforcement are provided through a complex set of laws and regulations. These authorities and some overlapping jurisdiction for conducting enforcement activities add to the complexity. Enforcement—which includes inspections, investigations, and punitive actions against violators of export control laws—is largely conducted by various agencies within Commerce, Homeland Security, Justice, and State, depending on the facts and circumstances of the case. These agencies’ key enforcement responsibilities are shown in table 2. Inspections of items scheduled for export are largely the responsibility of CBP officers at U.S. air, sea, and land ports, as part of their border enforcement responsibilities. To help ensure that these items comply with U.S. export control laws and regulations, CBP officers check items against applicable licenses prior to shipment, selectively conduct physical examinations of cargo at the port and in warehouses, review shipping documents, detain questionable shipments, and seize items being exported illegally. As part of their responsibilities, CBP officers are required by State to decrement (reduce) the shipment’s quantity and dollar value from the total quantity and dollar value authorized by the exporter’s license. This process helps to ensure that the shipment does not exceed what is authorized and that the license has not expired. However, Commerce does not require CBP officers to decrement Commerce licenses. Commerce officials said they have shipping tolerances that allow exporters to ship controlled items exceeding the quantity and value approved in a license, but these tolerances vary by controlled item. 
CBP officers do not currently have a formal means for determining if exporters have exceeded authorized license quantities and values for dual-use items within any shipment tolerances permitted for that controlled item. As a result, they cannot ensure accountability on the part of exporters or that Commerce regulations have been properly followed. CBP has an automated export system, which is used for decrementing State licenses. This system has built-in tolerances to allow the shipment to exceed the total value of a State license by 10 percent, as permitted by regulations. Investigations of potential violations of export control laws for dual-use items are conducted by agents from OEE, ICE, and FBI. Investigations of potential export violations involving defense items are conducted by ICE and FBI agents. FBI has authority to investigate any criminal violations of law in certain foreign counterintelligence areas. The investigative agencies have varying tools such as undercover operations and overseas investigations for investigating potential violations and establishing cases for potential criminal or administrative punitive actions. Punitive actions, which are either criminal or administrative, are taken against violators of export control laws and regulations. Criminal violations are those cases where the evidence shows that the exporter willfully and knowingly violated export control laws. U.S. Attorneys Offices prosecute criminal cases in consultation with Justice’s National Security Division. These cases can result in imprisonment, fines, forfeitures, and other penalties. Punitive actions for administrative violations can include fines, suspension of an export license, or denial or debarment from exporting, and are imposed primarily by State or Commerce, depending on whether the violation involves the export of a defense or a dual-use item. In some cases, both criminal and administrative penalties can be levied against an export control violator. 
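The decrementing process described above can be illustrated with a minimal sketch. The `License` structure and its field names are hypothetical illustrations, not a description of CBP's actual automated export system; the only figure taken from the text is the 10 percent value tolerance built in for State licenses:

```python
from dataclasses import dataclass

# Hypothetical illustration of license decrementing with a built-in
# value tolerance, loosely modeled on the State-license process
# described in the text. Structure and field names are assumptions.
@dataclass
class License:
    authorized_value: float       # total dollar value the license authorizes
    shipped_value: float = 0.0    # cumulative value decremented so far
    tolerance: float = 0.10       # shipments may exceed value by 10 percent

    def decrement(self, shipment_value: float) -> bool:
        """Record a shipment; reject it if the cumulative shipped value
        would exceed the authorized value plus the tolerance."""
        ceiling = self.authorized_value * (1 + self.tolerance)
        if self.shipped_value + shipment_value > ceiling:
            return False  # shipment exceeds what the license allows
        self.shipped_value += shipment_value
        return True

lic = License(authorized_value=100_000)
print(lic.decrement(90_000))   # True: within the authorized value
print(lic.decrement(15_000))   # True: 105,000 is within the 10% tolerance
print(lic.decrement(10_000))   # False: 115,000 exceeds the ceiling
```

The sketch also shows why the absence of a comparable mechanism for Commerce licenses matters: without a recorded running total, there is no way to tell when an exporter has exceeded a dual-use license's authorized quantities and values.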
The export control and investigative enforcement agencies also conduct outreach activities, primarily educating exporters on U.S. export control laws and regulations. For example, in fiscal year 2005, ICE agents conducted more than 1,500 industry outreach visits around the country. Outreach activities can include seminars and programs, specialized training, publications, advice lines, Web sites, and individual meetings with industry, academia, and other government agencies. These activities can result in companies self-disclosing violations, tips and reports of potential violations by others, and cooperation in investigations and intelligence gathering. Authorities for export control enforcement are provided through a complex set of laws and regulations. For defense items, authorities are granted under the Arms Export Control Act, the Department of Justice Appropriations Act of 1965, the USA Patriot Improvement and Reauthorization Act, and the Foreign Wars, War Materials and Neutrality Act. These statutes and the regulations stemming from them give concurrent jurisdiction for investigations to ICE and FBI (see fig. 1). For dual-use items, authorities are granted under the Export Administration Act, the International Emergency Economic Powers Act, the Department of Justice Appropriations Act of 1965, the USA Patriot Improvement and Reauthorization Act, and the Foreign Wars, War Materials and Neutrality Act. These laws and their implementing regulations give investigative authority for dual-use items to OEE as well as to ICE and FBI, which also have investigative authority for defense items (see fig. 2). Several key challenges exist in enforcing export control laws—challenges that potentially reduce the effectiveness of enforcement activities. First, overlapping jurisdiction for investigating potential export control violations and instances where coordination among the investigative agencies has not been effective have had an impact on some cases. 
Second, license determinations—which confirm whether an item is controlled by State or Commerce, and thereby help confirm whether a violation has occurred—are key to ensuring the pursuit of enforcement activities and are dependent on complete and specific information available at the time. Third, prosecuting export control cases can be difficult, since securing sufficient evidence to prove the exporter intentionally violated export control laws can represent unique challenges in some cases. Finally, multiple and sometimes competing priorities have made it difficult for enforcement agencies to maximize finite resources in carrying out export control enforcement responsibilities. While ICE, OEE, and FBI have jointly coordinated on investigations, coordination can be challenging, particularly in terms of agreeing on how to proceed with a case. Formal agreements for coordinating investigations do not exist among all the investigative agencies. The extent to which agencies coordinate and cooperate on investigations is largely dependent on individual work relationships. Agencies have sometimes not agreed on how to proceed on cases, particularly those involving foreign counterintelligence. For example, FBI and OEE agents disagreed as to whether certain dual-use items planned for export warranted an investigation. Specifically, without coordinating with OEE and ICE, FBI pursued the investigation, arrested the exporter, and held the shipment of items, valued at $500,000. Ultimately, criminal charges were not pursued because the items did not require a license. With respect to foreign counterintelligence cases involving export controls, investigators have not always been certain about their respective roles on these cases. Formal agreements for coordination do not exist among all the investigative agencies. Specifically, ICE and FBI do not have a formal agreement to coordinate cases involving export control violations. 
The formal agreements that do exist have not been updated in recent years. In 1983, Commerce entered into an agreement with the FBI dealing with certain headquarters-level coordination functions. In addition, a 1993 agreement between Customs and Commerce outlines the investigative responsibilities of each agency, but it does not reflect departmental changes that occurred as a result of the establishment of Homeland Security in March 2003. This agreement also directs these agencies to enter into a joint investigation when it is determined that more than one agency is working on the same target for the same or related violations. However, it can be difficult to determine whether these conditions exist because these agencies do not always have full access to information on ongoing investigations. According to several agents we spoke with, sharing information on ongoing investigations in general can be challenging because of the agencies’ varying and incompatible databases, the sensitivity of certain case information, and the agencies’ varying protocols for classifying information. The extent to which agencies coordinate their investigative efforts in the field can depend on individual work relationships and informal mechanisms that facilitate communication. Some field locations have established joint task forces to discuss investigative cases. For example, OEE, ICE, and FBI agents in one field location told us that they routinely collaborate on investigations as part of a joint task force that meets monthly. Agents in another location recently established a task force to locally coordinate export control investigations. In addition, some agencies have agents on detail to other investigative agencies. For example, in one field location, an ICE agent is detailed to FBI to coordinate cases and share export control information. 
FBI officials told us the detail has been useful because the ICE agent can readily provide FBI access to certain Homeland Security data, which saves critical investigative time for the FBI agents. At another field location, an OEE agent has been on detail at ICE for 7 years, which has facilitated information sharing and joint cases between the two agencies. According to several agents with whom we spoke, personalities can be a key factor in how well agents from different agencies work together on investigations. For example, an OEE agent in charge of one field location told us that the field agents work effectively on cases with ICE agents in one field location, but not with ICE agents in another field location because of disagreements dating back 15 years about how to proceed with investigations. Confirming whether a defense or dual-use item is controlled and requires a license, known as a license determination, is integral to enforcement agencies’ ability to seize items, pursue investigations, or seek prosecutions. However, confirmation can sometimes be difficult. Many inspectors and investigators told us that the time it takes to make determinations, as well as occasional changes to previously made determinations, can affect some of their enforcement activities. According to Commerce and State officials, they depend on complete, specific, and pertinent information from the inspectors and investigators to make timely and correct determinations so that appropriate enforcement actions can be pursued. Moreover, new or additional information may become available as an investigation proceeds, which can affect a license determination. Some inspectors and investigators—including OEE field agents who request license determinations directly from Commerce—stated that obtaining license determination decisions can be time consuming and has taken as much as several months. 
In several instances, State and Commerce licensing officers needed more information about the item before making a license determination, which added to the time it took to respond. In addition, State officials said they often request technical support from the Department of Defense when making determinations for defense items, which can add to the time it takes to make a license determination. We found that responses to requests for license determinations ranged from 1 day to 8 months during fiscal year 2005. While State established in September 2004 a goal of 30 days for processing license determinations, it revised this time frame to 60 days in April 2005 because of resource limitations. Commerce recently established a 35-day time frame to make a license determination requested by OEE agents. However, Commerce, in conjunction with the Exodus Command Center, has not established goals or a targeted time frame for responding to license determination requests. Goals help establish transparency and accountability in the process. While some inspectors and investigators told us that their enforcement actions have been affected by unclear determinations or changes to previously made license determinations, Commerce and State officials said that determinations are dependent on such factors as the completeness and specificity of the information presented to them at the time of the request. In one instance, CBP officers were not given a clear determination as to whether the item was controlled, leaving officers to decide how to proceed. In other instances, investigators dropped their cases or pursued other charges based on changes made to the determination or inconsistent information provided to the exporter. For example, OEE agents executed search warrants based on a license determination that the equipment was controlled for missile technology and antiterrorism purposes. 
Subsequently, Commerce determined that no license was required for this equipment, and the case was therefore closed. In another example, licensing officers provided OEE agents with a license determination that differed from the commodity classification provided to the exporter. As a result of the inconsistency between the license determination and the classification, Commerce pursued a lesser charge against the exporter. In addition, in June 2005, ICE led a joint investigation of a Chinese national for allegedly exporting critical U.S. technology to China, and on the basis of an initial license determination review by State that the item was controlled, ICE obtained search and arrest warrants. However, 9 months later, ICE agents requested a subsequent license determination to confirm that the item was controlled. That determination found that the item was not subject to State or Commerce export control, and the case was consequently dropped. Both State and Commerce headquarters officials stated that their ability to make license determinations is dependent upon several factors, including the completeness and accuracy of the information provided by the inspectors and investigators at the time of the request. These determinations can be subject to change as new or additional pertinent information becomes available as the case proceeds. Commerce and ICE have recently taken actions to address problems in the license determination process. In June 2006, Commerce established new procedures on how to request and process license determinations internally and is currently revising and providing training for its licensing officers and OEE agents. In August 2006, ICE's Exodus Command Center implemented a new system, known as the Exodus Accountability Referral System, to track license determination requests, provide enforcement agencies access to the status of their requests, and provide performance statistics to field agents, inspectors, and regulatory agencies. 
These actions recognize some of the problems with license determinations. However, it is too early to determine their impact on export enforcement activities. When developing a case for criminal prosecution, Assistant U.S. Attorneys (AUSAs) must obtain sufficient evidence of the exporter's intent to violate export control laws. Gathering evidence of intent is particularly difficult in export control cases, especially when the item being exported is exempted from licensing or the case requires foreign cooperation. For dual-use violations, Commerce officials said that the lapsed status of the Export Administration Act has made prosecuting cases cumbersome. When pursuing administrative cases, State, unlike Commerce, has limited access to attorneys and an Administrative Law Judge, making it challenging to pursue the full range of administrative actions against export control violators. Several AUSAs who prosecute many different types of cases told us that it can be challenging to secure sufficient evidence that an exporter intentionally violated export control laws. In particular, securing such evidence can be especially difficult when the items to be exported are exempted from licensing requirements. We previously reported similar concerns of officials from Customs (now within Homeland Security) and Justice about investigating and prosecuting violations when exemptions apply, noting that it is particularly difficult to obtain evidence of criminal intent since the government does not have license applications and related documents that can be used as proof that the violation was committed intentionally. Investigations and prosecutions that involve items and individuals in foreign locations can further complicate evidence-gathering efforts. According to ICE officials, a foreign government may or may not cooperate in an overseas export control investigation or arrest, and foreign and U.S. laws on export controls may differ as to what constitutes a violation. 
One OEE field office estimated that over half of its cases involve foreign persons or entities. According to Commerce officials, enforcement of dual-use export controls under the expired Export Administration Act is a key challenge for them because it adds an element of complexity to cases and can encumber prosecutions. These officials said they have encountered difficulties convincing AUSAs to accept cases to prosecute under a set of regulations promulgated under a lapsed statute and kept in force by emergency legislation. To counter these difficulties, Commerce, Homeland Security, and Justice officials said they support the renewal of the Export Administration Act. Commerce stated that renewal of this act would provide enforcement tools to OEE for conducting investigations and increase penalty provisions for violators. For administrative actions, export control regulations allow both State and Commerce to pursue administrative cases before an Administrative Law Judge, but State has never exercised this authority. Commerce officials stated that they bring cases before an Administrative Law Judge when an alleged export violator disputes the charges or objects to the administrative settlement actions proposed by Commerce. Commerce has a formal agreement with the Coast Guard Office of Administrative Law Judges, which is renewed annually, to hear its cases, and Commerce's attorneys bring about one to three administrative cases before an Administrative Law Judge each year. State has never brought a case to an Administrative Law Judge and does not have attorneys with the experience needed to pursue such export control cases or a standing agreement with any agency to provide an Administrative Law Judge. In cases where an agreed settlement with the violating company appears unlikely and a formal hearing is needed, State would have to seek services from attorneys in the private sector or from other departments to help represent the government's interests. 
To obtain access to an Administrative Law Judge to hear a case, State officials told us they would first need to ask the Office of Personnel Management to appoint a judge on a temporary basis. State would then need to establish an interagency memorandum of understanding with that agency covering payment and other arrangements. Without a formal agreement to access an Administrative Law Judge and ready access to attorneys to pursue such cases, State officials told us that it is challenging to proceed with administrative cases. State officials indicated that they are exploring various options for gaining access to attorneys with relevant experience to handle such cases, including seeking assistance from other departments on a temporary basis. However, State's options appear to rely on ad hoc interagency arrangements and would not build any internal expertise for handling such cases in the future. Each enforcement agency's priorities, and the resources allocated to those priorities, are influenced by the mission of the department in which the agency resides. At times, agencies have competing priorities, making it difficult to effectively leverage finite enforcement personnel. Limited training on export controls has further challenged agencies to use their enforcement personnel effectively. Some agencies have recently taken actions to target more resources to export enforcement activities. However, it may be too early to determine the impact these actions will have in the long term. In addition, priorities could shift and necessitate the reassignment of staff. The investigative agencies have been particularly challenged to leverage their resources effectively. Commerce's overall mission is to promote U.S. economic development and technological advancements. 
OEE resides within Commerce's export control agency, and its priorities emphasize investigating potential violations of dual-use exports related to weapons of mass destruction, terrorism, and unauthorized military end use. In carrying out these priorities, some of OEE's nine field offices, each responsible for conducting investigations in multiple states (ranging from 3 to 11), have had difficulty pursuing investigative leads outside their home state. Some OEE field agents told us that not having a physical presence in the other states adversely affects their ability to generate investigative leads, and that their caseload is largely within their home state. Homeland Security's mission is to create a unified national effort to secure the country while permitting the lawful flow of immigrants, visitors, and trade. ICE is the largest investigative branch within Homeland Security. In addition to investigating potential defense and dual-use export violations, ICE investigates drug smuggling, human trafficking and smuggling, financial crimes, commercial fraud, document fraud, money laundering, child exploitation, and immigration fraud. ICE has recently taken action to expand its existing investigation workforce devoted to export control. As of September 2006, ICE data showed that total arrests, indictments, and convictions had surpassed the totals in each fiscal year since ICE's creation in 2003. Justice's overall mission is to enforce U.S. laws, and FBI's mission is to protect the United States against terrorist and foreign intelligence threats and to enforce criminal laws. As the lead counterintelligence agency in the United States, FBI investigates potential dual-use and defense export violations that have a nexus with foreign counterintelligence. FBI has 456 domestic offices. Fifty-six offices are required to have at least one team of agents devoted to counterintelligence. 
These teams cover all 50 states, and some agents are located within the 456 domestic offices. FBI agents are also responsible for conducting other investigations involving espionage and counterproliferation. CBP, the sole border inspection agency, has also been challenged to leverage its resources. One of CBP's primary responsibilities is to detect and prevent terrorists and terrorist weapons from entering U.S. ports, and it devotes most of its resources to inspecting items and persons entering the country. For items leaving the United States, CBP uses an automated targeting system to identify exports for examination by its officers. The workload and the number of officers assigned to inspect exported cargo can fluctuate daily. For example, at one of the nation's busiest seaports, the CBP Port Director stated that there can be five officers assigned to inspecting exports one day and none the next. Export enforcement efforts are further challenged by the limited time officers have to review shipment documentation. State regulations require 24 hours' advance notification before shipment by ship or rail and 8 hours' advance notification for shipment by plane or truck. However, Commerce regulations specify no advance notification time frames beyond the Census Bureau's requirement of notification prior to departure. Moreover, some officers spend part of their limited time searching for items on planes or in shipping containers because documents, such as air waybills, cannot be located or information on items to be exported is incomplete. CBP officials stated that they have internal initiatives under way to address resources devoted to export control inspections. U.S. Attorneys' offices have many competing priorities, including prosecuting cases involving terrorism, counterterrorism, and government contractor fraud. Each of the U.S. Attorneys' offices has attorneys who can work on cases involving potential export control violations. 
However, several investigators noted that the level of interest in and knowledge of export control laws varies among AUSAs. According to several enforcement agency officials, they would like more advanced training on export controls, which could help them use their time more efficiently and thereby better leverage finite resources, but such training is limited. While some specialized training has been provided to officers in the field, CBP has reduced the number of training courses directly relating to export controls for the last quarter of fiscal year 2006, primarily because of budget constraints. CBP officials said they are considering restructuring the training curriculum. ICE and FBI investigators also said that they would like more opportunities for advanced training on export controls. Although ICE headquarters had not funded its advanced strategic export controls course at the Federal Law Enforcement Training Center for the previous 2 years, it reinstated the course in May 2006 and has subsequently trained over 100 agents. ICE officials also noted that training on weapons of mass destruction was provided to over 2,000 agents and analysts during fiscal years 2005 and 2006. Commerce plans additional training for OEE agents in fiscal year 2007. Justice, recognizing a need for training on export controls for its attorneys, provided a training conference in May 2006 for AUSAs, with presentations from Justice, Commerce, State, and the intelligence community. Commerce, State, and Justice have also recently sponsored training conferences for enforcement agencies covering topics such as export control laws and regulations, license determinations, and proving criminal intent. Criminal indictments and convictions are key to informing the export control process and licensing decisions. 
While Justice and the other enforcement agencies have databases to capture information relating to their own export enforcement activities (see table 3), outcomes of criminal cases are not systematically shared with State and Commerce. State and Commerce officials stated that information on the outcomes of criminal cases, including indictments and convictions, is important to the export licensing process, particularly since indicted or convicted exporters may be barred from participating in the process. The Arms Export Control Act requires that appropriate mechanisms be developed to identify persons who are the subject of an indictment or have been convicted of an export control violation. Specifically, if an exporter is the subject of an indictment or has been convicted under various statutes, including the Export Administration Act, State may deny the license application. Further, Commerce can deny export privileges to an exporter who has been criminally convicted of violating the Export Administration Act or Arms Export Control Act. According to both State and Commerce officials, information on indictments and convictions is gathered through an informal process. For example, an ICE agent, who serves as a liaison with State and is colocated with State's export control officials, compiles criminal statistics from ICE field offices in a monthly report that is shared with State compliance officials. Information on criminal export control prosecution outcomes could help inform the export control process by providing a more complete picture of the individual or company seeking an export license and by revealing trends in illegal export activities. Agencies responsible for enforcement have to operate within the construct of a complex export control system, which offers its own set of challenges from the outset. 
Further compounding this situation is the failure to coordinate some investigations and address a host of other challenges that can lead to a range of unintended outcomes, such as the termination of investigative cases. At a minimum, limited resources available for enforcement efforts may not be used effectively. Consequently, there is a need to ensure that enforcement agencies maximize finite resources and efforts to apprehend and punish individuals and companies who illegally export sensitive items that may be used to subvert U.S. interests. To enhance coordination in the current system, we recommend that the Secretary of Commerce direct the Under Secretary for Industry and Security, the Secretary of Homeland Security direct the Assistant Secretary of Homeland Security for U.S. Immigration and Customs Enforcement, and the Attorney General direct the Director of the FBI in conjunction with the Assistant Attorney General in charge of the National Security Division to take the following two actions: establish a task force to evaluate options to improve coordination and cooperation among export enforcement investigative agencies, such as creating new or updating existing operating agreements between and among these agencies, identifying and replicating best practices for routinely collaborating on or leading investigations, and establishing a mechanism for clarifying roles and responsibilities for individual export control cases involving foreign counterintelligence, and report the status of task force actions to Congress. To ensure discipline and improve information needed for license determinations, we recommend that the Secretary of Homeland Security direct the Assistant Secretary of Homeland Security for U.S. Immigration and Customs Enforcement and the Secretary of Commerce direct the Under Secretary for Industry and Security to establish goals for processing license determinations. 
We also recommend that the Secretary of Homeland Security direct the Assistant Secretary of Homeland Security for U.S. Immigration and Customs Enforcement, the Secretary of Commerce direct the Under Secretary for Industry and Security, and the Secretary of State direct the Deputy Assistant Secretary for Defense Trade Controls to coordinate with licensing officers, inspectors, investigators, and prosecutors to determine what additional training or guidance is needed on license determinations, including the type of information needed to make license determinations. To ensure systematic reconciliation of shipments with Commerce licenses, we recommend that the Secretary of Commerce direct the Under Secretary for Industry and Security, in consultation with the Commissioner of Homeland Security's U.S. Customs and Border Protection, to determine the feasibility of establishing a requirement for CBP to decrement Commerce licenses and an action plan for doing so. To ensure that State and Commerce have complete information on enforcement actions, we recommend that the Attorney General direct the Director of the Executive Office for U.S. Attorneys, in consultation with the Assistant Attorney General in charge of the National Security Division, to establish formal procedures for conveying criminal export enforcement results to State's Directorate of Defense Trade Controls and Commerce's Bureau of Industry and Security. The Departments of Commerce, Homeland Security, and State provided comments on a draft of this report. Justice and Defense did not provide formal comments. Commerce, Homeland Security, Justice, and State also provided technical comments, which we incorporated in this report as appropriate. Overall, the departments providing comments agreed with the need for coordination, but in some instances noted differences in possible approaches. They also indicated that certain actions were already under way to address some of our recommendations. 
We modified one recommendation accordingly. In commenting on our first recommendation, to establish a task force to improve coordination and cooperation among export enforcement investigative agencies and to report the status of task force actions to the Congress, Commerce stated that it was already taking action to improve coordination through various work groups and acknowledged that it would continue to seek ways to improve coordination. Commerce also commented that the draft report does not provide the data and analysis to support a finding that there is a lack of coordination. We disagree. We spoke with numerous agents in the field who cited coordination as a challenge. The examples we provided were illustrations of some of the types of coordination challenges that existed. Our evidence indicates that coordination is a challenge given that three agencies with differing approaches have concurrent jurisdiction to investigate potential violations of export control laws. At times, these agencies have competing priorities, making it difficult to leverage finite enforcement personnel for often complex cases. Homeland Security agreed in principle with our first recommendation, but believed that establishing an Export Enforcement Coordination Center within ICE would address coordination concerns in the most immediate and comprehensive manner. Homeland Security's solution is one option for improved coordination. However, it would need to work with the other enforcement agencies to determine the viability of this option. Our recommendation for a joint task force is the means by which to do so. In its technical comments related to coordination, Justice commented that FBI looks forward to working closely with other export enforcement agencies. 
In its comments on our second recommendation—to establish goals for the processing of license determinations and coordinate with other enforcement officials to determine what additional training or guidance is needed on license determinations—Commerce noted it was already taking action to improve license determination efforts through developing procedures and leading and participating in training conferences on export enforcement. However, these actions do not fully address our recommendation on establishing goals. Specifically, Commerce has not established formal license determination response times in conjunction with the Exodus Command Center, which is a key means by which license determination requests are processed. Homeland Security agreed to support goal setting by providing input from a law enforcement perspective. In its comments on our draft report, State indicated that it had already established goals for processing license determinations in conjunction with the Exodus Command Center. As a result, we revised our recommendation to direct that Commerce and Homeland Security establish goals for processing license determinations. State concurred with our recommendation to determine what additional training or guidance is needed on license determinations. Specifically, State has agreed with Homeland Security to update and clarify its guidance on license determinations. State further noted that consulting with FBI and ICE regarding additional training for coordinating State’s support to their criminal investigations would build upon its past and ongoing work in this area. Regarding our third recommendation—to determine the feasibility of having Homeland Security’s Customs and Border Protection officers decrement Commerce export licenses—Commerce expressed some reservation. 
Specifically, Commerce stated that it has seen no data to indicate that the underlying issue is of sufficient enforcement concern and that automated systems would need to be developed within CBP to support this effort. We do not believe that Commerce should dismiss this recommendation without further analysis. We previously reported that Commerce has not conducted comprehensive analyses of items that have been exported and therefore is not in a position to know whether the issue is an enforcement concern. In addition, while resources devoted to outbound enforcement are limited within CBP, it has an automated export system, which is used for decrementing State licenses. This system allows CBP officers to hold exporters accountable and to ensure that State regulations have been properly followed. Homeland Security commented that CBP officials are prepared to act when contacted by Commerce regarding our recommendation. With respect to our last recommendation, that Justice establish formal procedures for conveying export enforcement results to State and Commerce, Commerce agreed, stating that it supports efforts to improve coordination and communication. Justice indicated support for sharing such information. State also supported this recommendation and noted that it welcomed any additional information that Justice can provide regarding the outcomes of criminal cases involving export control and related violations to help State carry out its regulatory responsibilities. Formal written comments provided by Commerce, Homeland Security, and State are reprinted in appendixes III, IV, and V, respectively. We are sending copies of this report to interested congressional committees, as well as the Secretaries of Commerce, Defense, Homeland Security, and State; the Attorney General; the Director, Office of Management and Budget; and the Assistant to the President for National Security Affairs. 
In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 or calvaresibarra@gao.gov if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Others making key contributions to this report are listed in appendix VI. To describe the roles, responsibilities, and authorities of the agencies responsible for export control enforcement of defense and dual-use items, we interviewed cognizant officials, examined relevant documents, and analyzed export control statutes. We interviewed officials about their enforcement roles and responsibilities at the headquarters of the Departments of Commerce, Homeland Security, Justice, and State. We also discussed with Department of Defense officials their role in providing investigative support to agencies responsible for export control enforcement. We developed and used a set of structured questions to interview over 115 inspectors, investigators, and prosecutors in selected locations and observed export enforcement operations at those locations that had air, land, or seaports. We selected sites to visit based on various factors, including geographic areas where all enforcement agencies were represented and areas with a mix of defense and high-tech companies; field offices with a range of investigative tools available to agents; and the experience levels that inspectors, agents, and prosecutors had in enforcing export control laws and regulations. On the basis of these factors, we visited Irvine, Long Beach, Los Angeles, Oakland, Otay Mesa, San Diego, San Francisco, and San Jose, California; Fort Lauderdale and Miami, Florida; Boston, Massachusetts; Newark and Trenton, New Jersey; and New York, New York. 
Additionally, we examined agent operating manuals, inspector handbooks, and a Federal Register notice, which further define the roles and responsibilities of enforcement agencies. To describe export enforcement authorities, we analyzed various statutes and identified the varying enforcement requirements promulgated through implementing regulations. Our structured interviews with officials also enabled us to identify challenges agencies faced in enforcing export control laws and regulations. We documented examples of coordination challenges among the investigative agencies and obtained and summarized information on significant export control cases identified by these agencies. We also examined existing memorandums of understanding and agency guidance on coordinating investigations. To document the impact that license determinations have on enforcement activities, we obtained examples of license determinations through Commerce’s Office of Export Enforcement and conducted a case file review of license determinations for fiscal year 2005 at Homeland Security’s Exodus Command Center based on available information on-site. We selected a cross-section of license determination files to review that included requests for defense and dual-use items and that varied in response times. We discussed with Commerce and Homeland Security officials efforts to improve the license determination process. Through our interviews, we also identified challenges with criminal prosecutions and confirmed with headquarters the difficulties in obtaining sufficient evidence. Finally, we identified challenges with agencies’ priorities and human resources and obtained information on staffing levels and priorities for assigning resources. We also reviewed agency training materials to identify export control training available to enforcement personnel. 
To assess whether information on criminal enforcement outcomes is provided to export control agencies, we identified export control enforcement information maintained at the various agencies, such as criminal convictions and indictments for violations of export control laws. We also spoke with State licensing and policy officials and Commerce officials to assess whether they received and used this information to inform licensing or other decisions for defense or dual-use items. We also spoke with Justice officials to determine whether the department systematically provided criminal export control prosecution outcome information to State and Commerce export control agencies. For fiscal year 2005, investigative agencies identified several examples of export control enforcement cases, as shown in table 4.

The following are GAO's comments on the Department of Commerce letter dated October 3, 2006.
1. While OEE and ICE had contact with FBI before it pursued this case, the agencies did not act together in a concerted way to determine the best way to proceed with this case.
2. Commerce believed that its actions prevented the commission of an export violation. However, its lack of coordination in taking this action undermined enforcement activities.

In addition to the contact named above, Anne-Marie Lasowski, Assistant Director; Matthew Cook; Lisa Gardner; Arthur James, Jr.; Karen Sloan; Lillian Slodkowski; Suzanne Sterling; and Karen Thornton made key contributions to this report.

Defense Technologies: DOD's Critical Technologies List Rarely Informs Export Control and Other Policy Decisions. GAO-06-793. Washington, D.C.: July 28, 2006.
Export Controls: Improvements to Commerce's Dual-Use System Needed to Ensure Protection of U.S. Interests in the Post-9/11 Environment. GAO-06-638. Washington, D.C.: June 26, 2006.
Defense Trade: Arms Export Control Vulnerabilities and Inefficiencies in the Post-9/11 Security Environment. GAO-05-468R. Washington, D.C.: April 7, 2005.
Defense Trade: Arms Export Control System in the Post-9/11 Environment. GAO-05-234. Washington, D.C.: February 16, 2005.
Foreign Military Sales: DOD Needs to Take Additional Actions to Prevent Unauthorized Shipments of Spare Parts. GAO-05-17. Washington, D.C.: November 9, 2004.
Nonproliferation: Improvements Needed to Better Control Technology Exports for Cruise Missiles and Unmanned Aerial Vehicles. GAO-04-175. Washington, D.C.: January 23, 2004.
Export Controls: Post-Shipment Verification Provides Limited Assurance That Dual-Use Items Are Being Properly Used. GAO-04-357. Washington, D.C.: January 12, 2004.
Nonproliferation: Strategy Needed to Strengthen Multilateral Export Control Regimes. GAO-03-43. Washington, D.C.: October 25, 2002.
Export Controls: Processes for Determining Proper Control of Defense-Related Items Need Improvement. GAO-02-996. Washington, D.C.: September 20, 2002.
Export Controls: Department of Commerce Controls over Transfers of Technology to Foreign Nationals Need Improvement. GAO-02-972. Washington, D.C.: September 6, 2002.
Export Controls: More Thorough Analysis Needed to Justify Changes in High-Performance Computer Controls. GAO-02-892. Washington, D.C.: August 2, 2002.
Export Controls: Rapid Advances in China's Semiconductor Industry Underscore Need for Fundamental U.S. Policy Review. GAO-02-620. Washington, D.C.: April 19, 2002.
Defense Trade: Lessons to Be Learned from the Country Export Exemption. GAO-02-63. Washington, D.C.: March 29, 2002.
Export Controls: Issues to Consider in Authorizing a New Export Administration Act. GAO-02-468T. Washington, D.C.: February 28, 2002.
Export Controls: Actions Needed to Improve Enforcement. GAO/NSIAD-94-28. Washington, D.C.: December 30, 1993.
Each year, billions of dollars in dual-use items--items that have both commercial and military applications--as well as defense items are exported from the United States. To protect U.S. interests, the U.S. government controls the export of these items. A key function in the U.S. export control system is enforcement, which aims to prevent or deter the illegal export of controlled items. This report describes the roles, responsibilities, and authorities of export control enforcement agencies, identifies the challenges these agencies face, and determines if information on enforcement outcomes is provided to the export control agencies. GAO's findings are based on an examination of statutes, interagency agreements, and procedures; interviews with enforcement officials at selected field locations and headquarters; and an assessment of enforcement information. The enforcement of export control laws and regulations is inherently complex, involving multiple agencies with varying roles, responsibilities, and authorities. The agencies within the Departments of Commerce, Homeland Security, Justice, and State that are responsible for export control enforcement conduct a variety of activities, including inspecting items to be exported, investigating potential export control violations, and pursuing and imposing appropriate penalties and fines against violators. These agencies' enforcement authorities are granted through a complex set of laws and regulations, which give concurrent jurisdiction to multiple agencies to conduct investigations. Enforcement agencies face several challenges in enforcing export control laws and regulations. For example, agencies have had difficulty coordinating investigations and agreeing on how to proceed on cases. Coordination and cooperation often hinge on the relationships individual investigators across agencies have developed. 
Other challenges include obtaining timely and complete information to determine whether violations have occurred and whether enforcement actions should be pursued, as well as balancing multiple priorities with finite human resources. Each enforcement agency has a database to capture information on its enforcement activities. However, outcomes of criminal cases are not systematically shared with State and Commerce, the principal export control agencies. State and Commerce may deny license applications or export privileges of indicted or convicted export violators. Without information on the outcomes of criminal cases, export control agencies cannot gain a complete picture of an individual or a company seeking export licenses or discover trends in illegal export activities. This report is a publicly releasable version of a law enforcement sensitive report we issued on November 15, 2006. Therefore, some examples that involved law enforcement techniques or methods and that support our findings have been removed from this version.
In September 2003, the Bureau broke from its tradition of releasing its Income and Poverty Estimates on a Tuesday or Thursday at a news conference at the National Press Club (see table 1). The data were instead released at a news conference on a Friday at Bureau Headquarters in Suitland, Maryland. The Bureau provided the media and other attendees with bus service from the National Press Club to Suitland. Nevertheless, because the data showed that poverty levels had risen, some data users expressed concern that the change in day and location was an attempt to suppress unfavorable information by releasing it at a more remote location and before a weekend, when the public tends to pay less attention to the news. The Income and Poverty Estimates, like other kinds of federal statistical information, provide key measures of the health and well-being of our society. As a result, the data need to be accurate, timely, accessible, relevant, and objective. At the same time, according to NRC, the manner in which agencies release the data is also important, and needs to be free from even an appearance of bias and political manipulation. Failure to meet this goal could undermine public confidence in the information and erode an agency's credibility. That said, although various guidance and laws have been developed to safeguard the overall quality of federal data, few governmentwide provisions directly address the data dissemination process itself, and agencies have largely been left to develop their own practices. For example, while OMB's Statistical Policy Directive Number 3, "Compilation, Release, and Evaluation of Principal Federal Economic Indicators" provides detailed guidance on the dissemination of data, it only applies to 38 market-sensitive principal economic indicators.
Statistical Policy Directive Number 3 is highly regarded in the statistical community because it provides statistical agencies with comprehensive data dissemination guidance, requiring agencies to, among other actions, promptly release data according to an established schedule, and announce and fully explain any schedule changes in advance. Under the Information Quality Act, OMB was required to issue governmentwide guidelines that provide policy and procedural guidance to federal agencies for ensuring and maximizing the quality, objectivity, utility, and integrity of information disseminated by federal agencies. OMB's guidelines, issued in final form in February 2002, directed agencies covered by the act (statistical agencies and most others) to issue their own quality guidelines. OMB's guidelines imposed certain core responsibilities on agencies, including incorporating quality into their information dissemination practices. OMB noted that quality consists of several dimensions, including objectivity (which focuses on whether the disseminated information is accurate, reliable, and unbiased in presentation and substance). More generally, OMB helps ensure that the activities of the statistical agencies are in line with federal statistical policy by coordinating agency budget requests and interagency groups working on statistical issues, issuing statistical standards, and reviewing agency requests to collect information. This report is the latest of several studies we have issued on the quality of federal data. See Related GAO Reports at the end of this report for a list of selected products we have issued to date. While not all of the Bureau's data dissemination practices are documented, we were able to determine through discussions with Bureau officials and review of available documentation that the Bureau adhered to most of its long-standing data release practices.
In changing the date and location of the 2003 and subsequent releases of the Income and Poverty Estimates, the Bureau did depart from its tradition of releasing this information on a Tuesday or Thursday at a news conference at the National Press Club. That said, under the Bureau's documented data dissemination practices, (1) there is no requirement for the Bureau to release this information at a particular location on a given day, and (2) no particular official is designated the authority to choose the release date and location. Bureau officials stated that the date of the 2003 release was changed from September 23rd to September 26th for several reasons, including delays in producing a companion report on supplemental measures of expenditures, consumption, and poverty that was to be released at the same time. Also, the 2004 and 2005 estimates were released a month earlier than in prior years to coincide with the release of data from the American Community Survey. The documented practices for disseminating the Income and Poverty Estimates are contained in a memo that is 21 years old, so the Bureau is updating them to, among other things, reflect current technology. The Bureau has several sources of documented, agencywide practices for disseminating data to the public. For example, in accordance with OMB's guidelines for implementing the Information Quality Act, the Bureau developed its own set of quality guidelines that include provisions aimed at ensuring the objectivity and integrity of its data. The Bureau also has a series of data dissemination practices available on its Intranet site and it has issued four standards governing the dissemination of data products, including Dissemination of Census and Survey Data Products. We found that the only documented practices specific to the release of the Income and Poverty Estimates are contained in a 1985 memorandum that was included as one of several appendixes to the Bureau's Administrative Manual.
The manual provides Bureau policy on the release of data and guidance for divisions to follow in responding to requests for such information. The 1985 memorandum, which was signed by the Director of the Census Bureau at the time, includes eight broad steps covering the process for disseminating the Income and Poverty Estimates. The eight steps include the time period from approval of the report content up to and including distributing the report at the press conference.

1. The Associate Director for Demographic Fields approves the final content of the report prepared by the Population Division after review by the Statistical Methods Division.

2. The Public Information Office receives a copy of the final content to draft a press release. This draft release is approved within the Census Bureau, by the Public Affairs Specialist in the Under Secretary for Economic Affairs' office, and by the Commerce Department's newsroom.

3. The report is prepared for camera-ready form.

4. Camera-ready copy is sent to the printer.

5. When the completion time for this report is known, the Census Bureau establishes the release date and time with Commerce Department concurrence.

6. Approximately 48 hours before report release date and time, the Census Bureau briefs the Deputy Secretary for Economic Affairs on the principal findings.

7. The Census Bureau makes the report and accompanying press release available to the media on the established date at 9 a.m. for 10 a.m. release.

8. The Census Bureau distributes the report and press release to the Congress and the OMB at the same time as the media.

In releasing the 2003 Income and Poverty Estimates, the Bureau adhered to most of its data dissemination practices. The change in the date and location of the 2003 and subsequent releases of the Income and Poverty Estimates was a departure from the Bureau's tradition of releasing this information on a Tuesday or Thursday at a news conference at the National Press Club.
That said, under the Bureau's documented data dissemination practices there is no requirement for the Bureau to release this information at a particular location on a given day. Based on our review of available documentation and our interviews with officials involved with the Income and Poverty Estimates, the Bureau followed the steps in the 1985 memo in the 2003, 2004, and 2005 releases, with the exception of the release time as previously described. While the Bureau complied with its documented practices for the dissemination of the Income and Poverty Estimates, those practices lacked specificity. For example, clear and specific documentation does not exist for how and when the release date and location are to be determined for the Income and Poverty Estimates and who should make those decisions. In 2003, as discussed in greater detail below, the Director of the Census Bureau chose the location and the Associate Director for Communications chose the date. However, because this was not thoroughly documented (the 1985 memo only provides general guidance), it is unclear to the public who made these decisions and how they were made. Furthermore, Bureau officials told us that they did not retain any internal memos or e-mails that documented the decision to change the 2003 Income and Poverty Estimates release, which would have provided evidence to support the Bureau's narrative of the events leading up to the release. Based on our review of available Bureau documents and interviews with key Bureau officials, several factors led to the change in the timing of the release of the 2003 and subsequent Income and Poverty Estimates.
The Chief of the Census Bureau’s Housing and Household Economic Statistics Division at the time of the 2003 release of Income and Poverty Estimates, and other senior officials we spoke to, stated that the 2003 Income and Poverty Estimates release was different from years past because the Bureau decided earlier that year to issue the report at the same time as a multi-agency report on supplemental measures of expenditures, consumption, and poverty. This decision was made before the findings of the Income and Poverty Estimates report were known. Bureau officials stated that although the original target date for releasing both reports was September 23, 2003, complications with finalizing the supplemental measures report kept it from being ready for release on that day. According to Bureau officials and documents we reviewed, because the supplemental measures of expenditures, consumption, and poverty report involved several statistical agencies, there was a different clearance process than that used for the Income and Poverty Estimates report. As a result, while the Bureau had completed its review of the latter report, all the members of a steering committee still needed to review the report on supplemental measures. At the same time, based on our discussions with Bureau officials involved with the Income and Poverty Estimates report, as well as available documents, the Commerce Department’s Under Secretary for Economic Affairs wanted to release both reports simultaneously in an effort to broaden the public’s understanding of social well-being. The Under Secretary’s decision was consistent with the Bureau’s ongoing effort to provide alternative estimates of poverty, which itself stemmed from a 1995 report by the National Academy of Sciences that recommended revising how poverty is measured. 
Because of the additional time required to clear the supplemental measures report, Bureau officials responsible for the Income and Poverty Estimates asked for a later date to issue their report. Consequently, the Bureau’s Associate Director for Communications, with the Director’s consent, scheduled Friday, September 26, 2003, as the release date for the Income and Poverty Estimates, and both reports were issued on that date. Under the Bureau’s guidance for dealing with the media, Census Bureau analysts are to arrange their work schedules to be available for inquiries for 2 to 3 days after a data release. This is why, prior to 2003, the Bureau tended to release the Income and Poverty Estimates earlier in the week: it obviated the need for analysts to work on the weekend. Additionally, Bureau officials said that because of the Internet and cable television, the news cycle is no longer viewed as a cycle and has instead become a 24 hours a day, 7 days a week operation. Thus, many of the media’s inquiries occur the same day as the data are released. While it seldom does so, the Bureau has released other reports on Fridays, such as its 2001 health insurance report. For the 2004 and 2005 releases of Income and Poverty Estimates, the data were released in August at the same time as data from the American Community Survey. Bureau officials reported the Income and Poverty Estimates (which come from the Bureau’s Current Population Survey) are one of several sources of income and poverty information issued by the Bureau. Starting in 2003, the Bureau began releasing income and poverty information from the American Community Survey, which produces data independent from the Current Population Survey. Bureau officials reported that for methodological and other reasons, estimates from the Current Population Survey, in some cases, did not match estimates from the American Community Survey, causing confusing press coverage. 
In August 2004, when the Bureau released the two data sets at the same time, the press release that accompanied the estimates explained why the two sets of numbers might not match. (According to Bureau officials, the plans to move the release date from September 2004 to August 2004 were in place well before the actual release.) Going forward, the Bureau plans to continue its practice of releasing the American Community Survey data and the Income and Poverty Estimates simultaneously around the last Thursday in August. According to a senior official we interviewed in the Bureau's Public Information Office, the location of the 2003 Income and Poverty Estimates news conference was changed from the National Press Club in Washington, D.C., at the request of the Director of the Census Bureau, to help raise awareness of the Bureau's new headquarters building, which was under construction. The groundbreaking ceremony at the new site on the Bureau's campus in Suitland, Maryland, had taken place several weeks earlier, and a Bureau official reported the Director wanted the media to see the improvements the Bureau was making at its headquarters location, to foster goodwill, and to highlight Bureau officials' hope that the new building would help improve employee morale. The Bureau provided bus service for attendees from the original location at the National Press Club in downtown D.C. to Bureau headquarters in Suitland, a distance of around 8 miles. Additionally, according to the Bureau's Associate Director for Strategic Planning and Innovation, the location of the news conference is no longer as relevant as it once was because of changes in technology. The 2003 news conference was broadcast in real time via the Internet, and materials were made available on the Bureau's Web site.
The Associate Director for Strategic Planning and Innovation noted that because of these advances and accommodations, news media on-location attendance has declined over recent years. Yet, overall media participation has increased via the availability of Web casts, satellite-feed transmissions, and telephone-audio access. Consequently, the Suitland, Maryland, headquarters is now the primary location for this annual news conference. Because the Bureau did not maintain a written record of the release decision, a precise list of the personnel involved and time line of events is unavailable. However, according to the Bureau officials we interviewed, the following Bureau employees were involved in the process for releasing the Income and Poverty Estimates in 2003:

Director of the Census Bureau;
Deputy Director/Chief Operating Officer;
Chief of the Bureau's Housing and Household Economic Statistics Division;
Assistant Division Chief for Income, Poverty, and Health Statistics;
Associate Director for Demographic Programs, now serving as the Associate Director for Strategic Planning and Innovation;
Associate Director for Communications;
Staff from the Bureau's Housing and Household Economic Statistics Division;
Staff from the Bureau's Administrative and Customer Services Division; and
Chief and Deputy Chief of the Bureau's Public Information Office.

Bureau officials said that prerelease access to the Income and Poverty Estimates is tightly controlled because of the possible economic impact of the data. They stated its contents are shared with staff on a need-to-know basis, where only those individuals who are involved with drafting the report or the accompanying press release have access to the information. They noted further that key steps in preparing and releasing the report included the following:

1. Program staff from the Bureau's Housing and Household Economic Statistics Division drafted the Income and Poverty Estimates report.

2. A branch chief reviewed and approved the draft, followed by the Associate Division Chief, the Division Chief, and ultimately the Associate Director for Demographic Programs, who reports to the Bureau Director. These senior officials reviewed the report for such things as clarity and presentation.

3. When the content of the report was finalized, the Bureau's Public Information Office was sent a copy so it could draft a press release.

4. The final draft was sent to the Bureau's Administrative and Customer Service Division, which designed the tables and figures, edited the text, and prepared a camera-ready version of the report for printing.

According to Bureau officials, the Department of Commerce had only limited access to information from the Income and Poverty Estimates report before it was issued, and Commerce officials played no role in the decision-making process surrounding its release. For example, Commerce's Office of Public Affairs reviewed the press release that accompanied the report and thus had access to some of the numbers as well as the key findings in the report. However, the office did not have access to any of the tables that are placed on the Internet. (According to the Bureau, Commerce usually provides a "hook" for the news media. In 2003, the press release was issued Friday, September 26, and noted, on the first line, that the nation's poverty rate rose from 11.7 percent in 2001 to 12.1 percent in 2002.) Moreover, the Bureau considers the press release part of the report and holds it to the same standards for statistical quality as the report itself. Additionally, according to Bureau officials, the Division Chief and the Assistant Division Chief briefed the Director of the Census Bureau on the report about a week before the September 26, 2003, press conference. Commerce's Under Secretary for Economic Affairs was briefed a day or two before the press conference and the Under Secretary's staff were provided with the final report at that time.
(The report was also provided to the Council of Economic Advisers the afternoon before the press conference.) The then-Chief of the Census Bureau's Housing and Household Economic Statistics Division told us the Bureau is updating its practices for releasing the Income and Poverty Estimates. The official stated that the Bureau was prompted to revisit the 1985 memo for several reasons: the memo does not include all of the Bureau's long-standing data dissemination practices; some of the practices in the memo are obsolete given the age of the guidance; and the rise of the Internet and other technological advances has changed how data are released. The official added that the process for releasing Income and Poverty Estimates has become more formalized over time. Bureau officials began drafting these revisions after the 2004 release. In addition to updating the obsolete practices, Bureau officials stated they planned to document the current practice of combining the Income and Poverty Estimates release with the American Community Survey release. The Bureau plans to issue its updated practices prior to the next release of the Income and Poverty Estimates, expected in August 2006. Most of the 14 statistical agencies we reviewed reported general adherence to NRC's guidance, which is important for (1) the wide dissemination of data and (2) maintaining a strong position of independence, although there were some notable gaps. OMB, in concert with the statistical agencies, has developed draft guidance on the release and dissemination of statistical products that, according to OMB officials, parallels NRC's guidance. To the extent it is comparable to NRC's guidance and other widely accepted procedures for disseminating data, the proposed OMB directive could promote more consistent adherence to practices that promote broader dissemination of statistical data and enhance the data's credibility.
According to NRC, statistical agencies must have “vigorous and well- planned dissemination programs to get information into the hands of users who need it on a timely basis.” Attributes of a good dissemination program include using a variety of mechanisms to inform the widest possible audience about available data products and how to acquire them. Agencies should also have arrangements for archiving the information so that it is available for future use, as well as a publications policy that describes, among other things, the types of data products that will be made available, the frequency of their release, and the audiences they serve. NRC also notes that a statistical agency needs to be politically independent; that is, it “must be impartial and avoid even the appearance that its collection, analysis, and reporting processes might be manipulated for political purposes. . . .” Elements of this practice include having the authority for decisions associated with the scope, content, and publication of the data, as well as the authority for the selection and promotion of professional, operational, and technical staff. As shown in table 2, the data dissemination procedures of the 14 statistical agencies we reviewed included elements that were generally aligned with NRC’s guidance for the wide dissemination of data and maintaining a strong position of independence. Twelve or more of the agencies reported having data dissemination practices possessing four of the five elements related to the wide dissemination of data. All 14 agencies reported their data dissemination practices followed NRC’s guidance for (1) having multiple avenues for disseminating data, (2) releasing data in a variety of formats, and (3) having policies to guide what data should be preserved and how it should be archived. 
Similarly, 12 or more of the agencies’ dissemination practices had characteristics associated with five of the eight elements corresponding to maintaining a strong position of independence. These elements include (1) adherence to predetermined data release schedules, and (2) authority to make decisions over the scope, content, and frequency of the data compiled, analyzed, or published. A greater number of agencies’ data dissemination practices lacked certain elements important for maintaining a strong position of independence. An example of one of these elements is NRC’s guidance suggesting statistical agencies should have the “authority to release statistical information and accompanying materials (including press releases) without prior clearance by department policy officials” so there is “no opportunity for or perception of political manipulation of any of the information.” However, 10 of the 14 selected agencies reported varying degrees of clearance required by department officials. For example, at 2 agencies, the department rather than the statistical agency releases statistical information. Other agencies have the authority to release statistical information, except for press releases, without departmental clearance, although in some cases, the department’s clearance process is limited to reviewing the grammar, punctuation, and other editorial aspects. (Among the agencies in our review, 11 agencies use press releases; 1 of these 11 agencies first publishes data from all of its major programs via a press release; and the 3 remaining agencies reported they do not use press releases as a vehicle to disseminate data.) With other agencies the clearance process is more involved. For example, one agency said it summarizes the data for the press release making sure it is fair and complete, while officials at the departmental level might insert comments from the cabinet secretary into the release. 
Further, 6 of the 14 agencies lacked dissemination policies that promote the regular and frequent release of major findings from an agency's statistical program. As for the Bureau, officials reported that their agency generally adhered to NRC's recommended guidelines. A notable gap was that Bureau officials did not report adhering to the practice of announcing and explaining modifications to a customary release schedule in advance (7b in table 2). Bureau officials also lacked the authority to release statistical information and accompanying materials (including press releases) without prior clearance by department policy officials (8 in table 2). Also, while the Bureau's established publications policy describes the frequency of release of data collection programs, the Bureau reported the policy does not describe the types of reports to be made available, the data releases to be made available, or the audience to be served (4a-c in table 2). OMB has been working with the federal Interagency Council on Statistical Policy to develop guidance for the release and dissemination of statistical products. According to OMB officials, the guidance is intended to help ensure statistical products are policy-neutral, timely, and accurate. OMB officials told us their directive is similar to the NRC's recommended practices, as well as to OMB's Statistical Policy Directive Number 3, which applies only to the 38 market-sensitive principal economic indicators produced by the Departments of Agriculture, Commerce, Labor, and Treasury, as well as the Federal Reserve Board. However, OMB officials told us this new directive will not be as stringent as Statistical Policy Directive Number 3, because the data covered by the directive are released less frequently than the principal economic indicators, and the data are not considered to be market-sensitive. OMB expects to issue the directive for public comment in the spring of 2006.
To the extent OMB’s dissemination directive appropriately addresses the principles underlying NRC’s guidance and Statistical Policy Directive Number 3, the directive could enhance the quality and credibility of federal statistical data, in part, by replacing the patchwork of agency- specific guidance with a more transparent, commonly accepted, and consistently applied framework for disseminating data. For example, OMB’s directive could help promote more consistent adherence to key data release practices such as the wide dissemination of data and maintaining an agency’s independent position. As noted in the previous section, the dissemination procedures at several statistical agencies we examined lacked elements important for these practices, including (1) authority to release statistical information without prior clearance by department policy officials, (2) data dissemination policies that foster the frequent release of major findings from an agency’s statistical programs, and (3) an established publications policy that describes the types of reports and other data releases to be made available. As a result, their data products could be better protected, with the directive, from the appearance of, or actual political involvement. More specifically, OMB’s new directive could address how best to address the gaps that exist between agencies’ data dissemination practices on the one hand, and NRC’s guidance on the other. As OMB moves forward with its new directive, our interviews with OMB and statistical agency officials, as well as our past work on data quality guidance and internal control standards, identified the following questions that will be important for OMB’s dissemination directive to consider: Coverage: What will be covered by the directive?—principal statistical agencies only?—the statistical functions of all agencies?—or only statistical products? 
It will be important for OMB's directive to clearly define what it does and does not cover so that both statistical agencies and their parent organizations share the same understanding of their respective authorities, and help ensure dissemination procedures are consistently implemented. Certain roles, responsibilities, and processes need to be clarified as well. Indeed, officials at two statistical agencies we spoke with said there is ambiguity as to whether a statistical press release is a statistical product and, if so, whether statistical agencies can issue such releases without first getting them cleared at the departmental level. Additionally, OMB has issued a number of guidelines, directives, and standards on federal statistics. Are there any gaps and overlaps among them, and can they be better integrated? Documentation: To what extent, and how, should agencies document their data dissemination procedures and policies, and how often should they be reviewed and updated? The agencies we examined did not always document their processes for disseminating statistical data, relying instead on professional practice. However, as NRC points out: "Although a long-standing culture of data quality contributes to professional practice, an agency should also seek to develop and document standards through an explicit process." Moreover, documented guidance would lend more transparency to the data dissemination process, and thus provide a basis for agencies to explain their dissemination decisions to policy makers, news media, and the public. Indeed, an OMB official told us that Statistical Policy Directive Number 3 is a useful tool for explaining to high-level policy officials the procedures agencies must follow to maintain the integrity of the data, and why the officials may not access principal economic data before it is released to the public nor comment on it until after its release.
Documented guidance could also help ensure continuity in the face of employee turnover. The importance of documenting agencies’ data dissemination practices can be seen in the Bureau’s experience in releasing data from the 2000 Census on the homeless and others without conventional housing, when the Bureau was criticized for shifting its position on reporting components of this population. In our 2003 report, we noted that although the Bureau’s decision stemmed from its concerns over the reliability of the underlying data, the Bureau’s lack of documented, clear, transparent, and consistently applied guidance governing the release of data from the 2000 Census hampered the Bureau in explaining its actions. Had such guidance been in place, it could have helped the Bureau be more accountable and consistent in its dealings with the public, and helped to ensure that the Bureau’s decisions both were, and appeared to be, totally objective. Flexibility: How much leeway should agencies have in implementing OMB’s directive? Agency officials we spoke with noted the different missions of the various statistical agencies and cautioned against a one-size-fits-all approach. For example, it might not be practical to require all agencies to meet predetermined release dates because doing so could lead to additional workload burdens and staffing issues. Monitoring: How will OMB ensure agencies comply with its directive? Indeed, the effectiveness of the policies and procedures laid out in OMB’s directive will rest in large part on the extent to which agencies and their parent departments adhere to them. Related questions include whether there should be a regular assessment of agencies’ compliance and, if so, how often it should occur, and whether this should be done by OMB or by the agencies through a self-assessment. Posting Data: Should agencies’ dissemination policies include written guidance for releasing information via specific channels?
Indeed, although NRC’s guidance calls on agencies to disseminate data using a variety of outlets so that the information reaches as wide an audience as possible, should agencies also have a standard set of conduits where the public will know an agency’s data will always be available? Such conduits might include, among others, an agency’s Web site. Making all of an agency’s data products available from at least one central point of access could strengthen the agency’s credibility because the public would always know where to find them. A key lesson learned from the Bureau’s experience is the importance of fully documented, specific practices for maintaining the integrity of data products, and by extension, the credibility of the agencies that release them. Thus, as the Bureau updates its practices for releasing the Income and Poverty Estimates, it will be important for the Bureau to more thoroughly document its dissemination procedures so they are clear to the public. Further, OMB’s effort to develop governmentwide guidance on data dissemination is a positive step toward enhancing the credibility of federal statistical data, especially to the extent the directive mirrors NRC’s guidance and Statistical Policy Directive Number 3, as it would replace each statistical agency’s procedures with a more transparent, commonly accepted, and consistently applied framework for disseminating information. As OMB works to complete its directive, it will be important for it to pay particular attention to those elements dealing with the wide dissemination of data and maintaining a strong position of independence that, our survey of statistical agencies suggests, could be adhered to by a greater number of agencies. Likewise, OMB should also consider other aspects of agencies’ data dissemination efforts that could make its directive more comprehensive.
To help improve the Bureau’s data dissemination practices and thus enhance the agency’s actual and perceived position of independence, we recommend that the Secretary of Commerce direct the Bureau, as part of its efforts to update its practices for releasing the Income and Poverty Estimates, to fully document those key data dissemination practices. Further, to help improve governmentwide data dissemination practices that would further safeguard the integrity of federal statistical data, we recommend that the Director of OMB ensure his agency, in completing its draft directive on the release of federal statistical products, considers whether and how to address areas where our survey indicates there are gaps between NRC’s existing guidance and agencies’ practices. These areas include the extent to which agencies should have (1) full authority to release statistical information without prior clearance by their respective departments, (2) data dissemination policies that foster the frequent release of major findings from an agency’s statistical programs, and (3) an established publications policy describing the types of reports and other releases an agency has available.
We are also recommending that the Director of OMB direct his agency to include in its directive additional elements and characteristics important for agencies’ data dissemination practices, including (1) clear definitions of what is, and what is not, covered by the directive, (2) the extent to which agencies should document their data dissemination guidance and how often the guidance should be reviewed, (3) the amount of flexibility agencies have in implementing OMB’s guidance, (4) procedures for monitoring agencies’ adherence to the directive, and (5) the feasibility of requiring agencies to distribute data products through a standard set of channels as well as through other outlets as appropriate, so that the public will always know at least one source to which it can turn to obtain agency data. In written comments on a draft of this report, Commerce neither agreed nor disagreed with our recommendation for the Bureau to fully document its key data dissemination practices for releasing the Income and Poverty Estimates. However, Commerce reiterated the point we made in our report that the Bureau is updating its practices for releasing the Income and Poverty Estimates. Commerce noted that the updated document—which details the dissemination practices for the Income and Poverty Estimates—is under review. The Bureau plans to issue it prior to the next release of the Income and Poverty Estimates, expected in August 2006. Commerce also provided some technical corrections and suggestions where additional context might be needed, and we revised the report to reflect these comments as appropriate. Commerce’s comments are reprinted in their entirety in appendix II. The Director of OMB did not have comments on our recommendations. However, OMB officials provided suggestions for technical corrections, and we revised the report to reflect these suggestions as appropriate.
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of the report to interested congressional committees, the Director of the U.S. Census Bureau, and the Director of the Office of Management and Budget. Copies will be made available to others on request. This report will also be available at no charge on GAO’s home page at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or farrellb@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To address the extent to which the U.S. Bureau of the Census (Bureau) adhered to its dissemination practices for the release of the 2003 annual Income and Poverty Estimates and subsequent releases, we asked Bureau officials (in the Housing and Household Economic Statistics Division and the Bureau’s Public Information Office, among others) to identify the Bureau and Department of Commerce officials who participated in the data dissemination decisions, and interviewed the identified officials to determine their role in the decision-making process and whether they had prerelease access to the information. We compared their actions to the Bureau’s data dissemination practices. The dissemination process includes the steps from approval of the report content up to and including public distribution of the report. Some of these practices are documented in the Bureau’s Policy and Procedures Manual, while others are undocumented practices that we identified by interviewing cognizant Bureau officials. Because written records of key activities related to the release (e.g.
e-mails, meeting agendas, and notes) were either not retained or never created, much of our reconstruction of the release was based on interviews with the officials involved. We interviewed many of these officials both as a group (by department) and individually to obtain as complete a picture of the events as possible, and corroborated the information we received from the various parties involved. To assess the extent to which the Bureau and other federal statistical agencies followed data dissemination practices that the National Academy of Sciences’ National Research Council (NRC) recommended in its 2005 report, Principles and Practices for a Federal Statistical Agency, we surveyed officials at 14 federal statistical agencies. (NRC prepared the report to assist statistical agencies in making their products as sound as possible.) Specifically, we surveyed officials at the Bureau and 13 additional federal statistical agencies to collect information on the procedures they followed when releasing data. These 14 agencies comprise the Interagency Council on Statistical Policy, a body that coordinates federal statistical work and advises the Office of Management and Budget (OMB) on statistical matters. The 14 agencies are:

1. Bureau of Economic Analysis, U.S. Department of Commerce
2. Bureau of Justice Statistics, U.S. Department of Justice
3. Bureau of Labor Statistics, U.S. Department of Labor
4. Bureau of Transportation Statistics, U.S. Department of Transportation
5. Bureau of the Census, U.S. Department of Commerce
6. Economic Research Service, U.S. Department of Agriculture
7. Energy Information Administration, U.S. Department of Energy
8. National Agricultural Statistics Service, U.S. Department of Agriculture
9. National Center for Education Statistics, U.S. Department of Education
10. National Center for Health Statistics, U.S. Department of Health and Human Services
11. Office of Environmental Information, Environmental Protection Agency
12. Office of Research, Evaluation, and Statistics, Social Security Administration
13. Science Resources Statistics Division, National Science Foundation
14. Statistics of Income Division, Internal Revenue Service, U.S. Department of the Treasury

In surveying the agencies, we reviewed relevant documents such as agency policy manuals, and interviewed key officials including, depending on the agency, top management officials and chief statisticians, as well as management staff from program, communications, or public affairs offices. We compared the information they provided us to certain practices that the NRC has determined are important to federal statistical agencies in the successful conduct of their missions. Specifically, we focused on two NRC practices: (1) wide dissemination of data, and (2) a strong position of independence, because the 13 guidelines or elements associated with these two practices are particularly important for data dissemination. The first practice, the wide dissemination of data, is associated with the mechanics of making the information available to the public, including the media for releasing the information, as well as how it is formatted and archived. The elements of the second practice, a strong position of independence, are essential for maintaining the credibility of statistical agencies, as well as for providing an unimpeded flow of information to data users. To obtain a broader perspective on the governmentwide framework for helping to ensure data quality, we also interviewed OMB officials about OMB’s role in coordinating and overseeing the data dissemination activities, and reviewed appropriate OMB documents such as Statistical Policy Directives Number 2 and Number 3. We conducted our work between March 2005 and April 2006 in accordance with generally accepted government auditing standards.
In addition to the individual named above, Robert Goldenkoff, Assistant Director, as well as Timothy Wexler, April Thompson, Robert Parker, Jay Smale, Michael Volpe, Andrea Levine, and Amy Rosewarne made key contributions to this report.

Information Quality Act: National Agricultural Statistics Service Implements First Steps, but Documentation of Census of Agriculture Could Be Improved. GAO-05-644. Washington, D.C.: September 23, 2005.
Data Mining: Agencies Have Taken Key Steps to Protect Privacy in Selected Efforts, but Significant Compliance Issues Remain. GAO-05-866. Washington, D.C.: August 15, 2005.
Data Quality: Improvements to Count Correction Efforts Could Produce More Accurate Census Data. GAO-05-463. Washington, D.C.: June 20, 2005.
Data Quality: Census Bureau Needs to Accelerate Efforts to Develop and Implement Data Quality Review Standards. GAO-05-86. Washington, D.C.: November 17, 2004.
Decennial Census: Methods for Collecting and Reporting Hispanic Subgroup Data Need Refinement. GAO-03-228. Washington, D.C.: January 17, 2003.
Decennial Census: Methods for Collecting and Reporting Data on the Homeless and Others without Conventional Housing Need Refinement. GAO-03-227. Washington, D.C.: January 17, 2003.
2000 Census: Refinements to Full Count Review Program Could Improve Future Data Quality. GAO-02-562. Washington, D.C.: July 3, 2002.
In 2003, the Bureau of the Census (Bureau) changed the day and location of the release of its Income and Poverty Estimates. Some data users believed the change was an effort to suppress unfavorable news and questioned the Bureau's data dissemination practices. GAO was asked to assess whether (1) the Bureau adhered to its dissemination practices for the 2003 and later releases, and (2) the Bureau and 13 other federal statistical agencies follow data release practices recommended by the National Research Council (NRC). GAO reviewed the Bureau's dissemination process for the 2003 through 2005 Income and Poverty Estimates. While not all of the Bureau's data dissemination practices are documented, GAO was able to determine, through discussions with Bureau officials and review of available documentation, that the Bureau adhered to most of its long-standing data release practices. However, the Bureau did depart from the traditional day and location for the release of the Income and Poverty Estimates report in 2003 and subsequent years. According to the Bureau, the day of the 2003 release was changed because of a delay in producing a companion report, and the location was changed from Washington, D.C., to Suitland, Maryland, in part because the Director of the Census Bureau stated that he wanted to raise awareness that the construction of the Bureau's new headquarters had just started. Some of the Bureau's documented practices, such as guidance on who has authority to choose the release date and location, lacked specificity. Also, the Bureau's documented Income and Poverty practices are outdated, as they are contained in a 21-year-old memo. The Bureau is updating the memo to, among other things, reflect current technology. Most of the 14 statistical agencies in GAO's review generally adhered to NRC's guidance important for (1) the wide dissemination of data, and (2) maintaining a strong position of independence. Still, there were some notable gaps.
For example, 6 of the 14 agencies lacked dissemination policies (as recommended by NRC) that promote the regular and frequent release of major findings from an agency's statistical program. The Office of Management and Budget (OMB), in concert with other statistical agencies, is developing governmentwide guidance on the release and dissemination of statistical products that, according to OMB officials, parallels NRC's and other generally accepted release practices. OMB's guidance could foster more consistent adherence to practices that promote broader dissemination of statistical data and enhance its credibility, especially to the extent the guidance addresses gaps GAO found between agencies' data dissemination practices and NRC's guidance.
The security situation in Central America has continued to deteriorate in recent years as Mexican drug trafficking organizations, transnational gangs, and other criminal groups have expanded their activities, contributing to escalating levels of crime and violence. Violence is particularly high in the “northern triangle” countries of El Salvador, Guatemala, and Honduras, with homicide rates among the highest in the world. Efforts to counter illicit trafficking in Colombia and Mexico created an environment that became increasingly inhospitable to drug trafficking organizations, forcing criminal groups to displace operations into Central America, where they could exploit institutional weaknesses. Recognizing this situation, the United States has sought to develop collaborative security partnerships with Central American countries. As part of this effort, in 2010 the United States split off the Central America portion of the Mérida Initiative and established a new initiative named CARSI. According to State, CARSI is designed as a collaborative partnership between the United States and Central American partner countries. Its focus is on improving citizen security within the region, taking a broad approach to the issues of security beyond traditional counternarcotics activities. Figure 1 shows the CARSI partner countries in Central America. According to State, CARSI’s five primary goals are to create safe streets for citizens in the region; disrupt the movement of criminals and contraband to, within, and between the nations of Central America; support the development of strong, capable, and accountable Central American governments; re-establish effective state presence and security in communities at risk; and foster enhanced levels of security coordination and cooperation among nations in the region. Funding for CARSI activities has come from a combination of four U.S.
foreign assistance accounts—the INCLE account; the Economic Support Fund (ESF) account; the Nonproliferation, Anti-Terrorism, Demining, and Related Programs (NADR) account; and the Foreign Military Financing (FMF) account. General descriptions of how these accounts are used globally are provided below. The INCLE account is used to provide assistance to foreign countries and international organizations to assist them in developing and implementing policies and programs that maintain the rule of law and strengthen institutional law enforcement and judicial capabilities, including countering drug flows and combating transnational crime. The ESF account is used to assist foreign countries in meeting their political, economic, and security needs by funding a range of activities, including those designed to counter terrorism and extremist ideology, increase the role of the private sector in the economy, develop effective legal systems, build transparent and accountable governance, and empower citizens. The NADR account is used to fund contributions to certain organizations supporting nonproliferation, and provides assistance to foreign countries for nonproliferation, demining, antiterrorism, export control assistance, and other related activities. The FMF account is used to provide grants to foreign governments and international organizations for the acquisition of U.S. defense equipment, services, and training to enhance the capacity of foreign security forces. State manages the INCLE, NADR, and FMF accounts, and shares responsibility with USAID to manage and administer the ESF account. Within State, the Bureau of International Narcotics and Law Enforcement Affairs (INL) administers the INCLE account. The Bureau of Political-Military Affairs administers the FMF account, while DOD oversees the actual procurement and transfer of goods and services purchased with these funds.
State’s Bureau of International Security and Nonproliferation and its Bureau of Counterterrorism administer their NADR subaccounts. State’s Bureau of Western Hemisphere Affairs (WHA) administers a portion of ESF. However, USAID oversees the implementation of most CARSI programs funded from ESF. State’s Bureau of Educational and Cultural Affairs also previously administered a one-time use of ESF funds for CARSI activities. WHA has the lead within State for integrating CARSI activities with State’s broader policy of promoting citizen security in Central America. The INCLE account is State’s primary funding source for CARSI activities, while the ESF account is USAID’s primary funding source. In addition to State and USAID, a number of other U.S. agencies use non-CARSI funding to implement activities in Central America that address various aspects of promoting citizen security that complement CARSI activities—including improving law enforcement and the criminal justice system, promoting rule of law and human rights, enhancing customs and border control, and encouraging economic and social development. DOD, DOJ, DHS, and Treasury are the key agencies involved in these non-CARSI funded activities. Since fiscal year 2008, U.S. agencies have allocated more than $1.2 billion in funding for CARSI activities and non-CARSI funding that supports CARSI goals. As of June 1, 2013, State and USAID had allocated close to $495 million and disbursed at least $189 million in funding for CARSI activities to provide partner countries with equipment, technical assistance, and training to improve interdiction and disrupt criminal networks. As of March 31, 2013, U.S. agencies (State, USAID, DOD, DOJ, and DHS) estimated that they had also allocated approximately $708 million in non-CARSI funding that supports CARSI goals. U.S.
agencies, including State, DOD, and DOJ, have used non-CARSI funding to provide additional security-related equipment, technical assistance, and training, as well as infrastructure and investigation assistance to the region. Data on disbursements of non-CARSI funding were not readily available for some agencies because of the complexity and challenges associated with how these agencies track their disbursement data. At the time of reporting, the most recent data available on funding for CARSI were as of June 1, 2013, and the most recent non-CARSI funding data available were as of March 31, 2013. However, we found no change in the total CARSI allocations between March 31, 2013, and June 1, 2013. Thus, it is possible to compare CARSI and non-CARSI funding allocations. As of June 1, 2013, State and USAID had allocated close to $495 million in funding for CARSI activities; the same amount had been allocated as of March 31, 2013, the time frame we use later to report on non-CARSI funding allocations. State and USAID have obligated at least $463 million of the close to $495 million allocated, and have disbursed at least $189 million of the allocated CARSI funds from the INCLE, ESF, and NADR accounts for activities in partner countries. State and USAID disbursed funds to support activities in partner countries that improve law enforcement and maritime interdiction capabilities, support capacity building and training activities, prevent crime and violence, and deter and detect border criminal activity. After reviewing a draft of this report, State officials reported an amount of almost $10.6 million in INCLE funding that was allocated for CARSI activities in fiscal year 2010 but had not been previously reported to GAO. State officials also said that they could not provide obligation or disbursement information related to this amount because these INCLE funds are centrally managed and State’s financial systems do not allow them to track such funds by region or country.
According to State officials, this is why these funds were not previously reported to GAO. Although State officials were not able to track the obligation or disbursement of these funds, we have included this amount in the total of the close to $495 million allocated for CARSI activities. Of the seven partner countries, the largest amounts of CARSI funds were allocated to Guatemala, Honduras, and El Salvador. In addition, 17 percent of the total allocations was for regional activities; that is, region-wide activities in Central America that are not tied to an activity in a specific country. Table 1 provides a breakdown of allocated, obligated, and disbursed funds for CARSI activities by country. To demonstrate how funding for CARSI activities has been allocated, obligated, and disbursed by year of appropriation, we provide this information by account and by country in appendix II. In addition, we present data on how funding for CARSI activities under FMF has been allocated and committed by year of appropriation in appendix III. Since we initially reported on CARSI in January 2013, the amount of funding for CARSI activities disbursed from the INCLE, ESF, and NADR accounts has increased from at least $75 million as of September 30, 2011, to at least $189 million as of June 1, 2013. According to State officials, disbursements increased because State took steps to alleviate delays associated with program administration in the implementation of CARSI (particularly in the early years), including an insufficient number of staff at embassies in partner countries to manage CARSI activities. For example, in June 2013, the Assistant Secretary of State for INL reported in a congressional hearing that INL had increased staff positions in embassies in CARSI partner countries, as INCLE funding represented about 64 percent of total CARSI allocations in these countries.
Currently, El Salvador, Guatemala, Honduras, and Panama have INL Sections (formerly known as Narcotics Affairs Sections); and Belize, Costa Rica, and Nicaragua have Narcotics Affairs Offices, according to State officials. State and USAID have 5 years from the time the period of availability for obligation has expired to disburse funds. State and USAID disbursed funds to support various activities in partner countries that improve law enforcement and maritime interdiction capabilities, support capacity building and training activities, prevent crime and violence, and deter and detect border criminal activity. However, there is a slight difference in emphasis between State and USAID in their CARSI-funded activities. State’s efforts focus on capacity building of partner countries, while USAID’s efforts focus on establishing prevention programs for at-risk youth in partner countries. In general, State uses INCLE, ESF, FMF, and NADR funds to support activities such as strengthening the abilities of Central American law enforcement institutions to fight crime, violence, and trafficking in drugs and firearms; implementing high-impact, sustainable activities that focus on at-risk youth (such as job training and after school activities) and communities that are experiencing high levels of crime and violence; preventing the proliferation of advanced conventional weapons by helping to build effective national export control systems in countries that process, produce, or supply strategic items, as well as in countries through which such items are most likely to transit; and building and improving partner nation security force capacity to protect maritime borders and land territory against transnational threats such as illicit narcotics trafficking. 
USAID uses ESF funds for CARSI activities in the following areas: services for at-risk youth, focusing on vocational training, job placement, after-school activities, community centers, and leadership development; municipal crime prevention activities, including community outreach for local police and support for crime observatories that coordinate data sharing to track crime statistics; and national and regional political reform activities that strengthen rule of law institutions and reflect partner countries’ commitments to reduce violence while creating the environment needed to institutionalize and sustain USAID efforts under CARSI. Across the region, State and USAID use various CARSI-funded activities to carry out CARSI goals in each of the seven partner countries. Funding for CARSI activities provides partner countries with communication, border inspection, and security force equipment such as radios, computers, X-ray cargo scanners, narcotics identification kits, ballistic vests, and night-vision goggles. Funding for CARSI activities also provides related maintenance for this equipment. Figure 2 below shows examples of crime investigation forensic equipment and vehicles provided with funding for CARSI activities to the Belize Police Department. In addition, funding for CARSI activities provides technical support and training to enhance partner countries’ prosecutorial capabilities; management of courts, police academies, and prisons; and to support law enforcement operations (e.g., training to support narcotics interdiction). Funding for CARSI activities also provides support to partner countries to form specialized law enforcement units (also known as vetted units) that are vetted by, and work with, U.S. personnel to investigate and disrupt the operations of transnational gangs and trafficking networks.
Moreover, CARSI provides funding for partner countries to establish prevention activities designed to address underlying conditions (such as insufficient access to educational or economic opportunities and the prevalence of gangs) that leave communities vulnerable to crime and violence. Table 2 provides examples of CARSI activities in the seven partner countries. As of March 31, 2013, U.S. agencies estimated that they had allocated approximately $708 million in non-CARSI funding that supported CARSI goals from fiscal year 2008 through the first half of fiscal year 2013, with State, USAID, and DOD allocating the largest amount of non-CARSI funds to support CARSI goals. U.S. agencies (State, USAID, DOD, DOJ, and DHS) reported using their non-CARSI funding to implement a range of activities that supported CARSI goals, including providing training, technical assistance, equipment, infrastructure, and investigation and operational support to partner countries. To estimate the amount of non-CARSI assistance that has been allocated for partner countries that supported CARSI goals, we collected data from State and USAID as well as DOD, DOJ, DHS, and Treasury for fiscal year 2008 through the second quarter of fiscal year 2013. We did not report data on disbursements of non-CARSI funding because these data were not readily available for some agencies owing to the complexity and challenges associated with how these agencies track their disbursement data. The allocated amount of non-CARSI funding supporting CARSI goals was 43 percent greater than the allocated amount of funds for CARSI activities, as of March 31, 2013. The largest share of non-CARSI funding was allocated to Honduras, Guatemala, and El Salvador, as shown in table 3. According to State officials, the U.S. government has identified CARSI as its primary initiative for addressing citizen security threats in Central America. U.S. agencies developed an interagency strategy to ensure an integrated approach to all U.S. 
citizen security activities in Central America whether funded through CARSI or other sources. Established in 2012, the strategy sets up CARSI and its five goals as the national policy framework for all U.S. government citizen security efforts in Central America and states that agencies’ activities in the region should link to one or more of the CARSI pillars. Agency officials noted that because the goals of CARSI are broad, a wide array of activities can be seen as supporting the goals, and agencies have sought to align their own strategy documents with the interagency strategy and five pillars of CARSI. Officials from some U.S. agencies, including DOD and the Drug Enforcement Administration (DEA), noted that the CARSI goals reflect the types of activities that their agencies were already undertaking in the region. The largest shares of non-CARSI funds allocated are from State, USAID, and DOD (see table 4). U.S. agencies reported using their non-CARSI funding to implement a range of activities that supported CARSI goals, including providing training, technical assistance, equipment, infrastructure, and investigation and operational support to partner countries. For example, State funds complementary activities from a variety of non-CARSI sources, including security assistance accounts such as the International Military Education and Training account; other foreign assistance accounts, such as the Democracy Fund; and non-foreign assistance sources, such as the Conflict Stabilization Operations account. State identified 11 offices that support complementary citizen security activities in Central America with non-CARSI funds. For example, according to State officials, State’s Bureau of Conflict and Stabilization Operations funded mediation and community dialogue activities in Belize to reduce gang violence that complemented a related CARSI-funded activity. 
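The 43 percent comparison reported above between non-CARSI and CARSI allocations follows from simple arithmetic; a minimal sketch using the report's rounded totals (approximately $708 million non-CARSI and close to $495 million CARSI, both as of March 31, 2013):

```python
# Allocation totals from the report, in millions of dollars (approximate).
non_carsi_allocated = 708.0  # non-CARSI funding supporting CARSI goals
carsi_allocated = 495.0      # funding allocated for CARSI activities

# Percentage by which non-CARSI allocations exceed CARSI allocations.
pct_greater = (non_carsi_allocated - carsi_allocated) / carsi_allocated * 100
print(round(pct_greater))  # prints 43
```

Because both inputs are described in the report only as approximate totals, the 43 percent figure should be read as an estimate rather than an exact ratio.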
State’s Bureau of Political-Military Affairs used non-CARSI FMF funding to provide boats to Panama’s Coast Guard to assist in conducting drug interdictions in Panama’s territorial waters. USAID used non-CARSI Development Assistance funds to support a variety of activities in the rule of law and human rights, good governance, political competition and conflict resolution, and education areas. For example, USAID is using non-CARSI Development Assistance funds in Guatemala to help strengthen its security and justice sector institutions, according to USAID officials. In addition, DOD, DHS, and DOJ also use funding other than CARSI to implement activities in Central America that support CARSI goals. For example, according to officials, DOD has used funds from its Central Transfer Account for Counternarcotics to help establish an interagency border unit along the Guatemala/Mexico border to support Guatemalan efforts to stop the illicit movement of people and contraband. In Panama, a DOD medical team used non-CARSI funds to work with the Panamanian Ministry of Health in a poor and remote area in Panama to provide medical attention to this community. In Belize, DOD used non-CARSI funds for equipment, training, and infrastructure, including construction of a Belize Coast Guard Joint Operation Center that houses drug interdiction boats provided with funds for CARSI activities (see fig. 3). DHS and its components used non-CARSI funding to support activities such as training by Customs and Border Protection (CBP) on how to conduct searches and seizures at ports of entry that complemented other types of CARSI support. DOJ and its components used non-CARSI funding to support a variety of activities designed to improve partner countries’ law enforcement capabilities. For example, DEA provided funding to support vetted Sensitive Investigative Units in Guatemala, Honduras, and Panama. While not included in our reported non-CARSI allocation totals above, U.S. 
agencies also used other non-CARSI resources to support CARSI goals in ways other than directly funding activities in partner countries. For example, Treasury has used non-CARSI funding to pay for the salaries and other costs associated with posting its personnel in several partner countries to serve as resident advisors. These advisors work with the partner countries to improve their ability to detect and prevent money laundering and have used funding for CARSI activities to implement regional programs. In addition, the FBI’s Criminal Investigations Division has not directly funded non-CARSI activities in partner countries; however, it has assigned personnel to Transnational Anti-Gang Units that have been set up in El Salvador, Guatemala, and Honduras. Agencies such as CBP, the U.S. Coast Guard, and DOD also support CARSI goals by using their assets, including aircraft and boats, to conduct counternarcotics operations in Central America. For example, U.S. agencies contribute resources to Operation Martillo, which is a joint counternarcotics operation involving the U.S. government, several partner countries, and other international partners. When selecting activities to fund under CARSI, State and USAID took steps to help identify and consider partner country needs, absorptive capacities, and related U.S. and non-U.S. citizen security assistance investments in partner countries. First, State and USAID officials used assessment reports to help identify and consider partner country needs and absorptive capacities. Second, State and USAID officials used outreach meetings with officials from partner country governments, other donor governments, and international organizations to consider partner country needs, absorptive capacities, and non-U.S. citizen security assistance investments in partner countries. Third, State and USAID officials used interagency meetings at embassies in partner countries and in Washington, D.C., to coordinate U.S. 
efforts, as well as to help identify and consider partner country needs, absorptive capacities, and related non-U.S. investments in partner countries. State officials used assessment reports to help identify and consider partner country needs and absorptive capacities when selecting activities to fund under CARSI. For example, State conducted reviews of the forensic capabilities in six partner countries over the course of 2011 to evaluate the crime scene investigation, prosecution, and forensic science programs and capacities in each country. In a 2011 report, State assessed deficiencies in these areas and developed recommendations to address those deficiencies. According to State, State officials used the conclusions and recommendations from this report to inform their decisions on selecting activities to fund under CARSI. State officials also reported that they used assessment reports produced by interagency partners to determine assistance needs, refine assistance efforts, and avoid absorptive capacity issues. For example, State officials used a series of technical assessment reports on the law enforcement and interdiction capabilities and needs of key Central American land ports of entry produced by CBP. Similarly, State officials reported that they used comprehensive assessment reports on the firearms regulations, oversight, investigative, and forensic capabilities of Central American governments produced by the Bureau of Alcohol, Tobacco, Firearms, and Explosives to determine that firearms interdiction activities could assist in reducing the trafficking of arms into the region. USAID officials also used assessment reports to help identify and consider partner country needs and absorptive capacities when selecting activities to fund under CARSI. 
For example, USAID officials reported that they used assessment reports to help identify and consider partner country juvenile justice and community policing needs and absorptive capacities; these assessment reports included specific recommendations for designing and selecting juvenile justice and community policing projects in partner countries. According to USAID officials in Washington, D.C., and at U.S. embassies, USAID staff used information from these and other assessment reports to help select and design CARSI activities in partner countries. In addition, both State and USAID officials used country-specific CARSI assessment reports—produced by embassy staff in November 2009 and covering all seven partner countries—to help identify and consider partner country needs and absorptive capacities when selecting activities to fund under CARSI. These country-specific assessment reports included information on (1) the partner country’s security environment, (2) embassy and host government perspectives on the effectiveness of activities implemented to date, (3) partner country strengths and weaknesses and opportunities and threats, and (4) the partner country’s regional and bilateral security engagements. State and USAID officials also used outreach meetings with host government officials to help identify and consider partner country needs, absorptive capacities, and non-U.S. citizen security assistance investments in partner countries when selecting activities to fund under CARSI. Outreach meetings included both routine interactions between U.S. agency and host government officials—at the subject matter expert level—and broader, high-level meetings, typically at the ambassador and head of host government level. At these meetings, topics such as the status of current CARSI activities and the future of CARSI programming, including potential future CARSI activities, can be discussed. 
For example, embassy officials in one partner country reported that they held an ambassador/head of host government-level meeting with a delegation from the host government in June 2010. At this meeting, the U.S. government and the host government agreed to pursue bilateral, multiagency efforts to combat identified threats from transnational illicit trafficking and criminal organizations. Following this high-level meeting, embassy and host government officials established bilateral working groups to identify and develop activities in the partner country in areas such as border security, counternarcotics operations and strategy, gang prevention and law enforcement, community development, asset seizure, and investigation and prosecution. These bilateral working groups provided input on selecting activities to fund under CARSI and are now coordinating information-sharing efforts and progress updates on those activities. State and USAID officials also used outreach meetings with other donor governments and international organizations to help identify and consider non-U.S. citizen security investments in partner countries when selecting CARSI activities. For example, in one partner country, embassy officials reported that they held numerous meetings with other donor governments. Through these outreach meetings, embassy officials were able to identify one donor government’s investments in police intelligence in the partner country and consequently reduced funding for CARSI activities in that area. Also, through regular outreach meetings, embassy officials in the same partner country reported that they were able to identify another donor government’s investments in ballistic imaging systems in the partner country. Embassy officials subsequently redirected funding for CARSI activities that would otherwise have been spent in that area. 
State and USAID officials also used meetings with other donor governments through the Group of Friends of Central America’s Security Experts Group to help identify and consider non-U.S. citizen security assistance investments in partner countries when selecting activities to fund under CARSI. For example, through Group of Friends and other donor meetings, State reported that it worked with another donor government to coordinate an anti-crime capacity-building activity for a partner country by de-conflicting donor purchases and leveraging investments between the U.S. and the other donor government. In addition, both USAID and State reported that they utilized a donor database on third-country and multilateral assistance hosted by the Inter-American Development Bank (IDB) to help identify and consider non-U.S. investments. The database includes information on projects sponsored by other donors and international organizations in partner countries, such as when the project started, when it is scheduled to be completed, and the total project cost. State officials said that they are eager for IDB to update the database with more detailed donor information that could increase the effectiveness of U.S. agencies’ efforts to coordinate with other donors. When selecting activities to fund under CARSI, State and USAID officials also used interagency meetings at embassies in all seven partner countries to coordinate U.S. efforts, as well as to help identify and consider partner country needs, absorptive capacities, and related non-U.S. investments in those partner countries. For example, embassy officials in one partner country reported that they used interagency meetings to discuss the partner country’s needs for a digital radio communication network to connect the host government’s police, military, and related agencies and the ability of the partner country to absorb such assistance.
State and DOD officials used information from the interagency meetings to help design and select a digital radio communication project using both CARSI and non-CARSI funding. According to agency officials, by involving DOD in the project selection process, embassy officials leveraged DOD’s contribution to help meet the partner country’s needs and help the partner country conduct joint operations with the United States. In another partner country, embassy officials reported that they used interagency meetings to identify and consider partner country needs, absorptive capacities, and related U.S. agency non-CARSI investments to support the host government’s efforts to regain control over a conflict-ridden portion of the country. According to agency officials, through the interagency meetings, U.S. agencies identified and considered these factors and coordinated the use of CARSI and non-CARSI funding to support the host government’s efforts. State and USAID officials also reported that they used high-level interagency meetings, such as those of the Central America Interagency Working Group (IAWG) in Washington, D.C., to help identify and consider partner country needs and coordinate related U.S. agency non-CARSI investments in partner countries when selecting activities to fund under CARSI. The IAWG was launched in February 2012 and includes representatives from State and USAID, as well as representatives from other agencies engaged in citizen security efforts in Central America, including DHS, DOD, DOJ, and Treasury. According to State, from March 2012 through April 2013, the IAWG and its associated subgroups held 21 meetings. Through interagency meetings, State officials were able to identify and consider non-CARSI proposed investments when selecting activities to fund under CARSI; for example, according to State, officials identified and considered non-CARSI proposed border management and migration projects for the region.
State officials coordinated the disbursement of CARSI and non-CARSI funds to support the implementation of these border management and migration projects, while avoiding duplication among activities. In addition, through interagency meetings, agency officials were able to review various CARSI and non-CARSI land border security and interdiction activities and identified land border security short-to-medium-term capacity deficits. Consequently, agency officials are working to focus U.S. land interdiction security assistance on a limited number of high-impact engagements designed to increase seizures of contraband. By continuing to coordinate CARSI and non-CARSI investments through these interagency meetings, State officials said they will produce a more coordinated and integrated U.S. response to the region, with the goal of increasing seizures of contraband and supporting partner country border security initiatives. Using various mechanisms, State and USAID have reported on some CARSI results at the initiative, country, and project levels. For example, embassies in partner countries produce monthly CARSI implementation reports that identify the impacts of CARSI or related activities in the country. However, U.S. agencies have not assessed or reported their performance using the metrics outlined in a 2012 interagency strategy for Central America that are designed to measure the results of CARSI and complementary non-CARSI programming. USAID is currently implementing an evaluation of selected CARSI activities and State is planning an evaluation of some of its CARSI activities. State and USAID monitored and reported on some CARSI results through a variety of mechanisms at the initiative, country, and project levels. Initiative-level reporting addresses CARSI results across the different CARSI accounts and the seven partner countries. Country-level reporting describes CARSI results in a particular partner country. 
Project-level reporting describes the results of individual CARSI projects. According to State and USAID officials, the primary source of consolidated information on CARSI results at the initiative level—across accounts and countries—is State’s Bureau of Western Hemisphere Affairs’ (WHA) annual Performance Plan and Report. State and USAID use the annual Performance Plan and Report to monitor the performance of foreign assistance activities in the region. In its 2012 report, WHA provides information on some CARSI-wide results using a number of performance metrics that measure outputs against WHA’s established targets. For example, WHA uses metrics such as narcotics seizures and the establishment of local crime prevention groups to measure CARSI results. To produce the information on CARSI results in the report, WHA aggregated data on activities funded through all CARSI accounts and in all seven partner countries. We do not provide more detailed information on the CARSI results discussed in the 2012 Performance Plan and Report because the document is labeled “Sensitive But Unclassified.” For one specific metric, State did not establish a fiscal year 2012 target against which to measure CARSI results. WHA noted that in its fiscal year 2012 report there are eight additional metrics that included combined results information on CARSI and other initiatives in the Western Hemisphere, but these metrics did not provide separate results information for CARSI-funded activities. For example, WHA reported that CARSI and other initiatives in the region together exceeded their target for a metric related to the training of foreign law enforcement officers by almost 75 percent in fiscal year 2012. State and USAID also report on CARSI results at the country level. According to State and USAID officials, monthly CARSI implementation reports produced by the embassies in each partner country are one of the key ways in which they monitor and report on CARSI results at the country level.
State and USAID officials stated that these implementation reports are part of their ongoing effort to monitor the impact and effectiveness of CARSI and related non-CARSI assistance. State requires embassies to include in the reports a section discussing the impact of CARSI and related activities. These impact sections do not provide information on performance relative to established CARSI metrics or specific goals, but instead consist of descriptions of the results of various activities taking place in the partner countries over the course of the month. For example, one embassy reported in May 2013 that the host government used a body scanner purchased with CARSI funds to successfully detect a man attempting to smuggle narcotics onto a plane bound for the United States. A different embassy reported in April 2013 that a CARSI-supported anti-gang education and training program had been successfully expanded nationwide and had taught over 3,000 children during the program’s 3 years. The 55 monthly reports we reviewed included a range of other results from CARSI-funded activities that were identified by embassies, but we also found that embassies did not always link reported results to specific U.S. assistance activities. For example, a number of reports noted seizures or arrests made by the host government, but the reports did not provide any information on how CARSI or related U.S. non-CARSI assistance had facilitated these efforts. State officials identified INL’s annual end-use monitoring reports as a second mechanism for monitoring and reporting on CARSI activities at the country level, although these end-use monitoring reports are not specific to CARSI. State officials said that these end-use monitoring reports are used to monitor all INCLE-funded items that have been provided to the partner country to ensure that items are accounted for and used in accordance with the terms agreed to by the U.S. government and the partner country.
As part of the end-use monitoring reports, State requires embassy officials to include a discussion of the impact of any INCLE-funded equipment, infrastructure, training, or other services that have been provided, including under CARSI. The reports from partner countries for fiscal years 2009 through 2012 identified a number of positive results from CARSI assistance. For example, the embassy in El Salvador stated in its 2012 end-use monitoring report that trucks provided to the national police had a significant impact on the number of cases investigated and improved the national police’s response capabilities. However, the reports also identified some issues related to upkeep, maintenance, and use of CARSI-funded equipment. For example, the embassy in Guatemala reported in 2012 that 11 motorcycles provided to the National Police became inoperable as a result of a lack of proper maintenance and funding; State then covered the cost of refurbishing the motorcycles. Finally, USAID officials noted that annual portfolio reviews conducted by USAID missions in partner countries are an important tool for reporting CARSI results at the country level. USAID first began requiring its missions to conduct such reviews in November 2012. According to USAID guidance, portfolio reviews should, among other things, examine the mission’s progress in achieving its objectives over the past year. The portfolio reviews that we examined included varying levels of information about CARSI results. For example, one review did not provide any results information, but instead provided a general description of the types of activities funded under the USAID mission’s portfolio. However, in other cases, the USAID missions did provide specific results information. For example, one mission reported that one of its programs had provided access to vocational training to improve job competitiveness for 1,763 young people either at risk of becoming gang members or trying to leave gangs. 
In some cases, the portfolio reviews did not specify whether certain results were from CARSI or related non-CARSI projects. State and USAID officials also stated that they perform certain monitoring and reporting on CARSI results at the project level. State’s INL conducts quarterly desk reviews of INCLE-funded CARSI activities to track the progress of projects over time. INL requires these quarterly desk reviews to include a discussion of the project objectives, measure project results against established performance metrics, and identify success stories. For example, INL reported in the quarterly desk review for one CARSI project that, as of the end of 2012, it had trained 259 host government investigators, prosecutors, and judges on the use of forensic evidence in court proceedings. In another quarterly desk review, INL reported that the project implementer had successfully developed an improved case management system to assist the Costa Rican Attorney General’s Office in conducting drug trafficking prosecutions. USAID also conducts quarterly reporting on its CARSI projects. USAID’s quarterly reports include information on the project’s accomplishments for the quarter and progress that had been made relative to the project’s established performance metrics. For example, in a report for the second quarter of fiscal year 2013, the implementer of USAID’s crime prevention program in Panama reported that it had met or exceeded its targets for 20 of the project’s 26 metrics, including its target for the number of municipalities that had set up municipal crime prevention committees. While State and USAID have reported on some CARSI results, U.S. agencies have not assessed and reported on their results using the performance metrics identified in the February 2012 interagency citizen security strategy for Central America. U.S. agencies developed this strategy to help coordinate and focus the U.S. government’s CARSI and related non-CARSI activities in the region. 
In the interagency strategy, U.S. agencies outlined five metrics for measuring the performance of U.S. government citizen security programming, including CARSI activities, in achieving the strategy’s objectives. For example, the strategy includes a metric to reduce homicide rates each year from 2012 through 2017. According to State and USAID officials, the strategy and the metrics it identifies were developed through an iterative, interagency process that included other agencies such as DOD, DOJ, and DHS. However, to date, U.S. agencies have not assessed and reported on their performance using the metrics identified in the strategy. USAID is currently conducting an evaluation of some of its CARSI activities, and State is developing an evaluation of INL activities under CARSI, consistent with its evaluation policy. USAID and State have both taken steps to monitor and report on the results of CARSI-funded activities. However, in our previous work we concluded that monitoring activities do not take the place of program evaluations. Monitoring is ongoing in nature and measures agencies’ progress in meeting established objectives, typically using performance metrics. Evaluations are individual, systematic studies that typically examine a broader range of information on program performance and its context than is feasible to monitor on an ongoing basis. Thus, evaluations allow for overall assessments of whether a program is working and what adjustments need to be made to improve results. USAID officials stated that they will also conduct evaluations of other CARSI activities that meet the criteria established in USAID’s 2011 evaluation policy. USAID’s evaluation policy requires each USAID operating unit to evaluate all projects that equal or exceed the average project size for that operating unit, at least once during the project’s lifetime.
Preliminary results from El Salvador show that murder and robbery rates have been reduced in communities receiving USAID assistance under the program. USAID officials identified a range of ways that they expect the crime prevention programming evaluation to assist them, once it is completed. For example, they expect the evaluation to provide evidence of the extent to which USAID’s crime prevention program reduced crime victimization and perceptions of insecurity in at-risk communities. USAID officials also anticipated that they would be able to use the evaluation’s findings as a tool to encourage partner countries to make their own investments in crime prevention activities. State officials noted that they are currently working on a scope of work for an evaluation of CARSI activities. In 2012, State issued an evaluation policy that requires bureaus to evaluate two to four programs, projects, or activities every 2 years, starting in fiscal year 2012, with all “large” programs, projects, and activities required to be evaluated at least once in their lifetime or every 5 years, whichever is less. The policy also requires all State bureaus to complete a bureau evaluation plan and to update it annually. According to State officials, given other priority areas, INL did not select CARSI for evaluation in its first bureau evaluation plan, covering fiscal years 2012 through 2014, although CARSI qualifies as a large program for INL. Nevertheless, INL officials stated that they intend to conduct an evaluation of their CARSI activities beginning in fiscal year 2014, as CARSI approaches its 5-year point. INL officials stated that this evaluation, for which they are currently developing a scope of work, will cover CARSI programming across the partner countries, and that their intention is to issue a solicitation by the end of 2013 for a contractor to conduct the CARSI evaluation.
However, INL officials noted that many decisions have not yet been made about the scope or methodology for the evaluation and that funding has not yet been secured for the evaluation. In regard to WHA, State officials noted that the bureau manages only a small percentage of State’s funding for CARSI activities. Given the small percentage of CARSI funding WHA manages, State officials said that WHA does not have any plans to conduct a separate CARSI evaluation from the one INL intends to do. Our guidance on evaluation design indicates that State could increase the value of any future evaluation it conducts by ensuring that it systematically plans the evaluation. As we have previously concluded, systematically planning for evaluations is important to (1) enhance the quality, credibility, and usefulness of evaluations and (2) use time and resources effectively. In our earlier work on evaluation design, we recommended that agencies take five steps to effectively design an evaluation, as shown in table 5. Evaluations of CARSI activities, such as the one that INL has stated it intends to undertake, could provide State with important information to help it manage and oversee CARSI. As State’s evaluation policy notes, evaluations are essential to documenting program impact and identifying best practices and lessons learned. Among other things, an evaluation could help State as it seeks to identify successful CARSI activities and determine how best to replicate them in other locations. State officials noted that designing a CARSI evaluation will be challenging because CARSI involves a diverse set of activities that are being implemented in seven different countries. Thus, State officials stated that one challenge they will face in evaluating CARSI is selecting a mix of activities to evaluate that are sufficiently representative of their various CARSI activities that conclusions can be drawn about the broader impact of their CARSI efforts.
Given such challenges, effectively planning any CARSI evaluation would help State ensure that the evaluation provides the types of information it can use to guide future decisions about CARSI programming. CARSI partner countries face significant challenges that threaten the security of their citizens as well as the interests of the United States. U.S. agencies have allocated over $1.2 billion to support a range of activities to help partner countries respond to these threats. While State and USAID have reported on some results from CARSI-funded activities, the agencies have not worked with their interagency partners to assess progress made in meeting performance targets outlined in the 2012 U.S. interagency citizen security strategy for Central America. Without assessing their performance in meeting these targets, agencies lack important information on progress made toward achieving the objectives outlined in the interagency strategy that could help guide future decisions. To evaluate some of its CARSI activities, USAID is currently overseeing an evaluation of its CARSI crime prevention programming and intends to use the evaluation to help it better target, design, and prioritize future CARSI programming. State is planning an evaluation of some of its CARSI activities as the initiative approaches its 5-year mark. These evaluations will help agencies better manage and oversee their programs and activities. Among other things, the evaluations can be used to (1) help agencies assess the effectiveness of completed activities, (2) modify the current mix of existing projects to increase program effectiveness, and (3) better prioritize future projects to achieve results. While these are commendable steps, assessing progress made toward achieving the objectives outlined in the U.S. interagency strategy for Central America would provide important information on the performance of CARSI and related U.S. government activities and better guide U.S. decision making.
To help ensure that U.S. agencies have relevant information on the progress of CARSI and related U.S. government activities, we recommend that the Secretary of State and the USAID Administrator direct their representatives on the Central America Interagency Working Group to work with the other members to assess the progress of CARSI and related U.S. government activities in achieving the objectives outlined in the U.S. government’s interagency citizen security strategy for Central America. We provided a draft of this report to DHS, DOD, DOJ, State, Treasury, and USAID for their review and comment. DHS, State, and USAID provided technical comments, which we incorporated as appropriate. USAID and State also provided written comments, which are reproduced in appendixes IV and V, respectively. In their written comments, State and USAID both concurred with our recommendation and State noted that GAO’s recommended steps for evaluation design would guide an evaluation of CARSI programming. As discussed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to DHS, DOD, DOJ, State, Treasury, and USAID, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This report (1) provides an updated assessment of U.S. agencies’ funding and activities that support Central America Regional Security Initiative (CARSI) goals; (2) examines whether U.S. agencies took steps to consider partner country needs, absorptive capacities, and related U.S. and non-U.S. 
investments when selecting activities to fund under CARSI; and (3) examines information on the extent to which U.S. agencies reported CARSI results and evaluated CARSI activities. To assess U.S. agencies’ funding and activities that supported CARSI goals, we obtained data and program documentation from the Department of State (State) and the United States Agency for International Development (USAID) concerning funds allocated to support programs in Central American countries under the Mérida Initiative in fiscal years 2008 and 2009 and under CARSI from fiscal year 2010 to June 1, 2013, through four accounts—International Narcotics Control and Law Enforcement (INCLE); Economic Support Fund (ESF); Nonproliferation, Anti-terrorism, Demining, and Related Programs (NADR); and Foreign Military Financing (FMF). We obtained the data from each bureau at State that administers those accounts: International Narcotics and Law Enforcement Affairs (INL), Western Hemisphere Affairs (WHA), International Security and Nonproliferation, Counterterrorism, and Political-Military Affairs. We also obtained data from USAID, which also allocates and implements the ESF account. In particular, State and USAID provided data on the status of allocations, unobligated balances, unliquidated obligations, and disbursements for the ESF account; State also provided these data for the INCLE and NADR accounts. State’s bureaus and USAID administer the accounts separately and utilize their own data collection systems and budgeting terms. To address differences between their systems, we provided State and USAID with the definitions from GAO’s A Glossary of Terms Used in the Federal Budget Process and requested that State and USAID provide the relevant data according to those definitions. To the extent possible, we worked with agencies to ensure that they provided data that met these definitions. 
However, the Department of Defense budgets and tracks FMF funds in a different way than the other foreign assistance accounts that support CARSI. The Defense Security Cooperation Agency (DSCA) and the Defense Finance and Accounting Service (DFAS) are responsible for the financial systems that account for FMF funds, as well as tracking the implementation and disbursement of those funds. DSCA's system can only track FMF uncommitted and committed amounts, not unliquidated obligations or disbursements. DFAS tracks disbursements using the Defense Integrated Finance System; however, there is no direct link between the DSCA and DFAS systems, and the DFAS system does not track funding for specific initiatives, such as CARSI. Therefore, State was not able to provide data on unliquidated obligations or disbursements, but it was able to provide us with data on CARSI FMF allocations and commitments. In providing technical comments on a draft of this report, State officials reported an amount of close to $10.6 million in additional INCLE funding that was allocated for CARSI activities in fiscal year 2010 that had not been previously reported to GAO. State officials also said that they could not provide obligation or disbursement information related to this amount because these INCLE funds are centrally managed and State's financial systems do not allow them to track such funds by region or country. According to State, that is why these funds were not previously reported to GAO. We followed up with State officials to confirm that the funds had been applied to CARSI activities and to document the programs toward which the funds had been applied. Although State officials were not able to provide information on the obligation or disbursement of these funds, we have included this amount in the total allocated for CARSI activities. We noted this discrepancy in presenting these data in the report. 
We also interviewed officials from each of State's bureaus and USAID on their budgeting process and terms to determine the best method for collecting comparable data across accounts. We then reviewed the data and consulted with State and USAID on the accuracy and completeness of the information. When we found discrepancies, we contacted relevant agency officials and worked with them to resolve the discrepancies. We noted any differences in the ways the agencies collected, categorized, or reported their data in notes to the tables in this report. To assess the reliability of the data provided, we requested and reviewed information from agency officials regarding the underlying financial data systems and the checks, controls, and reviews used to generate the data and ensure its accuracy and reliability. We determined that the data provided were sufficiently reliable for the purposes of this report. Furthermore, to identify equipment, training, and other related activities supported by funding for CARSI activities, we reviewed program documentation and interviewed relevant officials from State and USAID regarding the status of program implementation and the types of equipment, training, and other activities provided to partner countries to date. In addition, we visited three partner countries—Belize, Guatemala, and Panama. We selected these three countries as a sample considering the following elements: the scope of the citizen security problem; the amount of funding for CARSI activities received from fiscal years 2008 to 2012; the range of CARSI activities undertaken; the extent of non-CARSI U.S. government activities that support CARSI objectives; and the extent of host government or other donor citizen security efforts in these countries. In these three countries, we met with U.S. agency officials as well as host government, international organization, and other donor government officials. We also visited CARSI and non-CARSI activity locations during these visits. 
To determine how much non-CARSI assistance has been allocated for partner countries that supported CARSI goals, we collected data from State and USAID as well as DOD, the Department of Justice, the Department of Homeland Security, and the Department of the Treasury for fiscal year 2008 through the second quarter of fiscal year 2013. Data on disbursements of non-CARSI funding were not readily available for some agencies because of the complexity and challenges associated with how these agencies track their disbursement data. In collecting allocation data, we asked agencies to provide funding data only for activities that they determined supported one or more of the five pillars of CARSI. In addition, we asked agencies to provide only data on non-CARSI funding that directly assisted partner countries, such as funding for training, equipment, infrastructure, and operational or investigative support. To avoid double-counting across agencies, we asked agencies to provide data only on activities funded through their own appropriations. We requested non-CARSI data from all the agencies in a standardized format, but given differences in the agencies’ missions, budget processes, and data systems, there were variations in the responses we received. We worked with the agencies to resolve these discrepancies. For example, some agencies provided data on funding for the salaries of U.S. government employees, or the operation of U.S. equipment, such as aircraft. We determined that these types of funding did not constitute direct assistance to the partner countries and did not include these funding amounts in our totals. In addition, in certain cases, agencies reported that they did not allocate non-CARSI funding to activities supporting CARSI goals in advance, but that they disbursed resources to programs that supported CARSI goals as needs arose. 
In these cases, we worked with the agencies to determine whether or not the disbursed amounts could be considered as equivalent to the allocation amounts given the nature of how the agencies’ programming was executed and made adjustments accordingly. To assess the reliability of the non-CARSI data provided, we collected information from agency officials regarding their methodology for determining what non-CARSI funding to include as supporting CARSI goals and the process they used for generating the data. We worked with agencies to make adjustments to these methodologies if we identified concerns. As part of this effort, we gathered information from the agencies on potential risks of underestimates or overestimates of the allocation amounts they reported and how we might mitigate any potential overestimates. We then took steps to mitigate these issues to the extent possible. For example, some agencies provided us with funding data for regional programs that benefited both partner countries and non-CARSI countries. In these cases, we worked with the agencies to determine if there was an appropriate way of apportioning a percentage of the costs to the partner countries versus the other non-CARSI beneficiary countries. If possible, we adjusted the numbers accordingly; if adjustments were not feasible, we did not include the funding amounts in our totals. As part of our data reliability assessments, we also reviewed information on the underlying data systems used to produce the data and the checks, controls, and reviews the agencies perform to ensure the accuracy and reliability of data in these systems. There are certain inherent limitations in the data we collected because agencies were asked to make determinations, using their own judgments, about what portions of their non-CARSI funding supported CARSI goals. However, we believe that the steps we have taken mitigate these limitations, to the extent possible. 
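The pro-rata apportionment described above can be sketched in code. The function below is a hypothetical illustration of one simple method (an even per-country split of a regional program's cost); the report's actual adjustments were worked out case by case with each agency, and in some cases no apportionment was feasible:

```python
def apportion_regional_funding(total_cost, beneficiary_countries, partner_countries):
    """Hypothetical pro-rata split of a regional program's cost.

    Attributes to CARSI partner countries a share of the total proportional
    to their count among all beneficiary countries. Illustrative
    simplification only; not the agencies' actual methodology.
    """
    partners = [c for c in beneficiary_countries if c in partner_countries]
    share = len(partners) / len(beneficiary_countries)
    return total_cost * share

# Example: a $1 million regional program benefiting four countries, two of
# which are CARSI partners, yields a $500,000 partner-country share.
partner_share = apportion_regional_funding(
    1_000_000,
    ["Guatemala", "Honduras", "Colombia", "Peru"],
    {"Guatemala", "Honduras"},
)
```

When such a split could not be defended, the report excluded the funding entirely rather than guess, which is the conservative choice this sketch does not capture.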
Given this, we determined that, for the purposes of this report, the data were sufficiently reliable to provide estimates of agencies’ non-CARSI funding that supported CARSI goals. To determine the types of activities that this non-CARSI assistance funded, we reviewed documentation from U.S. agencies and also conducted interviews with agency officials at headquarters and in our three site-visit countries. To examine whether U.S. agencies took steps to consider partner country needs, absorptive capacities, and related U.S. and non-U.S. investments when selecting activities to fund under CARSI, we interviewed State and USAID officials at headquarters and at the embassies in the three partner countries we visited. In addition, we submitted specific written questions to two bureaus at State and USAID at headquarters and received written response documents on the steps State and USAID officials used to help identify and consider these key factors when selecting activities for funding under CARSI. We also worked with State officials at headquarters to develop written questions for the embassies in all seven partner countries on the steps they used to help identify and consider these key factors when selecting CARSI activities. We received comprehensive written response documents from the embassies in all partner countries with information cleared at the Deputy Chief of Mission level. We reviewed and analyzed the written response documents we received from two bureaus at State and USAID at headquarters and from embassies in all seven partner countries. Using these various data sources, we identified specific steps that State and USAID officials used to consider partner country needs, absorptive capacities, and investments when selecting CARSI activities. 
We also reviewed additional available written documentation on the steps State and USAID used to help identify and consider key factors, such as various assessment reports produced by State, USAID, and other agency officials; trip reports and status reports produced by agency officials; summary agendas from interagency meetings held at embassies and in headquarters; and documentation on the management and coordination of CARSI activities. We did not assess the extent or effectiveness of the steps that State and USAID took to identify and consider partner country needs, absorptive capacities, or U.S. and non-U.S. investments. To examine information on the extent to which U.S. agencies reported CARSI results and evaluated CARSI activities, we interviewed State and USAID officials at headquarters and U.S. officials at the embassies in the three partner countries we visited. In addition, we submitted questions and received written responses from State and USAID headquarters, as well as from the embassies in all seven partner countries, which provided additional information on agencies’ results reporting and evaluation of CARSI activities. Using this information, we identified the key mechanisms State and USAID use for reporting CARSI results at the program, country, and project level. At the initiative level, we reviewed the WHA annual Performance Plan and Reports for fiscal years 2009 through 2012 and the interagency strategy for citizen security in Central America and assessed the types of CARSI results identified in these documents. At the country level, we analyzed a non-probability sample of 55 monthly CARSI implementation reports produced by embassies in the partner countries. We selected this sample to ensure that we obtained a mix of old and recent reports from all 7 countries. 
This sample contained eight reports from each of the seven partner countries, except for Nicaragua, which provided seven reports, and included the three most recent reports produced by each embassy as of May 2013, as well as reports from earlier years going back to fiscal year 2009. At the country level, we also reviewed completed INL annual End-Use Monitoring Reports from each of the seven partner countries for fiscal years 2009 to 2012 and a USAID-selected sample of five portfolio reviews from USAID offices in partner countries. Finally, we analyzed five INL Quarterly Desk Reviews and six USAID project reports to determine the types of CARSI results identified in project-level reporting. These reports were selected by State and USAID, respectively, as examples of their project-level reporting. We also compared U.S. agencies' actions to assess and report their progress toward achieving the objectives in the interagency strategy for Central America against key considerations that we identified in 2012 for implementing interagency collaboration mechanisms. In that work, we found that one key feature in the successful implementation of such mechanisms is the development of a system for monitoring and reporting on results. We developed this list of considerations through a review of relevant literature on collaboration mechanisms, interviews with experts on collaboration, and a review of findings from a number of our previous reports on collaboration in the federal government. See GAO, Managing for Results: Key Considerations for Implementing Interagency Collaborative Mechanisms, GAO-12-1022 (Washington, D.C.: Sept. 27, 2012). In addition, we compared agencies' activities against leading practices we identified in 1996 for performance management of federal programs. We also gathered information on ongoing or planned evaluations of CARSI. From USAID, we gathered information on the scope and methodology, current status, and expected uses of their impact evaluation of their municipal crime prevention program. 
We also gathered testimonial evidence from State on INL's planned evaluation of its CARSI activities. In addition, we reviewed State's 2012 Program Evaluation Policy and determined the extent to which INL and WHA had selected CARSI activities for evaluation in their bureau evaluation plans for fiscal years 2012 through 2014. We conducted this performance audit from August 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To demonstrate how funding for Central America Regional Security Initiative (CARSI) activities has been allocated, obligated, and disbursed, we are providing the status of funds provided for CARSI activities as of June 1, 2013. The following tables show CARSI funds by account, describing how U.S. agencies have allocated, obligated, and disbursed funds (by year of appropriation) toward activities in partner countries. In addition, the tables show unobligated balances, which are the portion of obligational authority that has not yet been obligated, and unliquidated obligations (or obligated balances), which are the amounts of obligations already incurred for which payment has not yet been made. Funding for CARSI activities has primarily come from the International Narcotics Control and Law Enforcement (INCLE) and Economic Support Fund (ESF) accounts. In earlier years, funding also came from the Nonproliferation, Anti-terrorism, Demining, and Related Programs (NADR) and Foreign Military Financing (FMF) accounts. The Department of State's (State) Bureau for International Narcotics and Law Enforcement Affairs administers the CARSI INCLE funds. 
As of June 1, 2013, State had allocated the largest amount of its CARSI INCLE funds to Guatemala, regional programs, Honduras, and El Salvador (see table 6). In addition, State had disbursed approximately $122 million of INCLE funds to support partner countries (see table 7). In providing technical comments on a draft of this report, State officials reported an amount of close to $10.6 million in INCLE funding that was allocated for CARSI activities in fiscal year 2010 that had not been previously reported to GAO. State officials also said that they could not provide obligation or disbursement information related to this amount, because these INCLE funds are centrally managed and State’s financial systems do not allow them to track such funds by region or country. According to State, this is why these funds were not previously reported to GAO. We followed up with State officials to confirm that the funds had been applied to CARSI activities and to document the programs toward which the funds had been applied. Although State officials were not able to provide information on the obligation or disbursement of these funds, we have included this amount in the INCLE funding allocated for CARSI activities. The United States Agency for International Development (USAID) shares responsibility with State to administer the ESF account. USAID oversees the implementation of most programs funded from this account, according to USAID officials; State’s Bureau for Western Hemisphere Affairs administers State’s portion of ESF. As of June 1, 2013, USAID had allocated the largest amounts of its ESF funds for CARSI activities to El Salvador, Guatemala, and Honduras (see table 8). Furthermore, USAID had disbursed approximately $51 million of ESF funds to support CARSI activities (see table 9). For fiscal year 2013, USAID officials explained that the agency has not yet been allocated funds from the Office of Management and Budget that Congress appropriated for fiscal year 2013. 
Therefore, the disbursement data provided below in table 9 for fiscal year 2013 are of funds allocated only in prior years, and table 8 reflects no allocations for fiscal year 2013. As of June 1, 2013, State had allocated the largest amounts of its ESF funds for CARSI activities to Costa Rica, Belize, and Panama (see table 10). Furthermore, State had disbursed approximately $10 million of ESF funding for CARSI activities (see table 11). In addition, State officials explained that the agency has not yet been allocated funds that Congress appropriated for fiscal year 2013. Therefore, the disbursement data provided below in table 11 for fiscal year 2013 are only of funds allocated in prior years, and table 10 reflects no allocations for fiscal year 2013. State's Bureau of International Security and Nonproliferation and its Bureau of Counterterrorism administer CARSI NADR funds. NADR funds were allocated for Central American countries under the Mérida Initiative only for fiscal year 2008. NADR Export Control and Related Border Security (EXBS) and Counterterrorism (CT) funds were used to support activities in partner countries. As of June 1, 2013, the largest amount of funds had been allocated for NADR-EXBS activities, and 96 percent of those allocated funds had been disbursed (see table 12). Slightly more than $6 million of CARSI NADR-EXBS and NADR-CT funds were disbursed as of June 1, 2013 (see table 13). According to State officials, it is not possible to provide a country-by-country breakout of CARSI NADR-EXBS funds disbursed because the funds are intended for regional programming. This appendix provides the status of Central America Regional Security Initiative (CARSI) Foreign Military Financing (FMF) funds as of June 1, 2013. Table 14 describes how U.S. agencies have allocated and committed FMF funds (by year of appropriation) toward activities in partner countries. 
The presentation of FMF allocations and commitments is different from the presentations on allocations, obligations, and disbursements for the other CARSI accounts in appendix II because FMF funds are budgeted and tracked in a different way. The Defense Security Cooperation Agency (DSCA) and the Defense Finance and Accounting Service (DFAS) are responsible for the financial systems that account for FMF funds, as well as tracking the implementation and disbursement of those funds. According to DSCA officials, FMF funds are obligated upon apportionment. Further, DSCA's system can only track FMF uncommitted and committed amounts, not unliquidated obligations or disbursements. DFAS tracks disbursements using the Defense Integrated Finance System; however, there is no direct link between the DSCA and DFAS systems, and the DFAS system does not track funding for specific initiatives, such as CARSI. The Department of State (State) allocated close to $26 million of FMF funds for Central American countries for activities under the Mérida Initiative from fiscal years 2008 to 2010. From fiscal years 2008 to 2010, State allocated the largest amounts of these FMF funds to El Salvador, Costa Rica, and Panama. As of June 1, 2013, approximately 90 percent of the total allocated amount had been committed (see table 14). In addition to the contact named above, Valérie L. Nowak (Assistant Director), Ian Ferguson, Marisela Perez, Ryan Vaughan, and Debbie Chung made key contributions to this report. Martin de Alteriis, Ashley Alley, Lynn Cothern, and Etana Finkler also provided assistance.
Drug trafficking organizations and gangs have expanded in Central America, threatening the security of these countries and the United States. Since 2008, the U.S. government has helped Central America and Mexico respond to these threats and in 2010 established CARSI solely to assist Central America. CARSI's goals are to create safe streets, disrupt criminals and contraband, support capable governments, and increase state presence and cooperation among CARSI partners. GAO reported on CARSI funding in January 2013 and was asked to further review CARSI and related activities in Central America. This report (1) provides an updated assessment of U.S. agencies' funding and activities that support CARSI goals; (2) examines whether U.S. agencies took steps to consider partner country needs, absorptive capacities, and U.S. and non-U.S. investments when selecting CARSI activities; and (3) examines information on the extent to which U.S. agencies reported CARSI results and evaluated CARSI activities. GAO analyzed CARSI and complementary non-CARSI funding; reviewed documents on CARSI activities, partner country needs, and CARSI results; interviewed U.S. agency officials about CARSI and related activities; and observed CARSI activities in three countries. Since fiscal year 2008, U.S. agencies allocated over $1.2 billion in funding for Central America Regional Security Initiative (CARSI) activities and non-CARSI funding that supports CARSI goals. As of June 1, 2013, the Department of State (State) and the United States Agency for International Development (USAID) obligated at least $463 million of the close to $495 million in allocated funding for CARSI activities, and disbursed at least $189 million to provide partner countries with equipment, technical assistance, and training to improve interdiction and disrupt criminal networks. Moreover, as of March 31, 2013, U.S. 
agencies estimated that they had allocated approximately $708 million in non-CARSI funding that supports CARSI goals, but data on disbursements were not readily available. U.S. agencies, including State, the Department of Defense (DOD), and the Department of Justice, use this funding to provide equipment, technical assistance, and training, as well as infrastructure and investigation assistance to partner countries. For example, DOD allocated $25 million in funding to help Guatemala establish an interagency border unit to combat drug trafficking. State and USAID took a variety of steps--using assessment reports, outreach meetings with host governments and other donors, and interagency meetings--to help identify and consider partner countries' needs, absorptive capacities, and related U.S. and non-U.S. investments when selecting CARSI activities. For example, State used an assessment report on crime scene investigation and forensic programs and capacities of six partner countries to inform decisions on selecting CARSI activities. In addition, USAID officials used assessment reports to help identify and consider partner country juvenile justice and community policing needs and absorptive capacities; these assessment reports included specific recommendations for designing and selecting juvenile justice and community policing projects in partner countries. Also, in one partner country, embassy officials used donor outreach meetings to identify another donor's significant investment in police intelligence in the partner country; the embassy consequently reduced funding for CARSI activities in that area. While U.S. agencies have reported on some CARSI results, they have not assessed progress in meeting interagency objectives for Central America. State and USAID have reported some CARSI results through various mechanisms at the initiative, country, and project levels. 
For example, one embassy reported that its CARSI-supported anti-gang education project had expanded nationwide and taught over 3,000 children over 3 years of the program. However, U.S. agencies have not assessed their performance using the metrics outlined in a 2012 interagency strategy for Central America that were designed to measure the results of CARSI and related non-CARSI activities. GAO recognizes that collecting performance data may be challenging and that the metrics could require some adjustments. Nevertheless, assessing progress toward achieving the strategy's objectives could help guide U.S. agencies' decisions about their activities and identify areas for improvement. In addition to ongoing assessments of progress, GAO has concluded in prior work that evaluations are important to obtain more in-depth information on programs' performance and context. USAID is conducting an evaluation of its CARSI crime prevention programming to be completed in 2014. State officials said that they are planning to conduct an evaluation of some of their CARSI activities beginning in fiscal year 2014. GAO recommends that State and USAID work with other agencies to assess progress in achieving the objectives of the interagency strategy for Central America. State and USAID concurred with the recommendation.
A key characteristic of the NFIP is the extent to which FEMA must rely on others to achieve the program's goals. FEMA's role for the NFIP is principally one of establishing policies and standards that others generally implement on a day-to-day basis and providing financial and management oversight of those who carry out those day-to-day responsibilities. These responsibilities include ensuring that property owners who are required to purchase flood insurance do so, enforcing flood plain management and building regulations, selling and servicing flood insurance policies, and updating and maintaining the nation's flood maps. In our prior work, we have identified several major challenges facing the NFIP: Reducing losses to the program resulting from policy subsidies and repetitive loss properties. The program is not actuarially sound because of the number of policies in force that are subsidized—about 29 percent at the time of our 2003 report. As a result of these subsidies, some policyholders with dwellings that were built before flood plain management regulations were established in their communities pay premiums that represent about 35 to 40 percent of the true risk premium. In January 2006, FEMA estimated the shortfall in annual premium income because of policy subsidies at $750 million. Moreover, at the time of our 2004 report, there were about 49,000 repetitive loss properties—those with two or more losses of $1,000 or more in a 10-year period—representing about 1 percent of the 4.4 million buildings insured under the program. From 1978 until March 2004, these repetitive loss properties represented about $4.6 billion in claims payments. Increasing property owner participation in the program. The extent of noncompliance with current mandatory purchase requirements by affected property owners is unknown. 
Some interest has been expressed in Congress in assessing the feasibility of expanding mandatory purchase requirements beyond current special high-risk flood hazard areas. FEMA and its private insurance partners also have efforts underway to increase participation in the NFIP by marketing flood insurance policies in areas where purchase is not mandatory. Developing accurate, digital flood maps. The impact of Hurricanes Katrina, Rita, and Wilma on homeowners has highlighted the importance of having accurate, up-to-date flood maps that identify the areas at risk of flooding and, thus, the areas in which homeowners would benefit from purchasing flood insurance. In our report on the NFIP’s flood map modernization program, we discussed the multiple uses and benefits of accurate, digital flood plain maps. However, the NFIP faces a major challenge in working with its contractor and state and local partners of varying technical capabilities and resources to produce accurate, digital flood maps. In developing those maps, we recommended that FEMA develop and implement data standards that will enable FEMA, its contractor, and its state and local partners to identify and use consistent data collection and analysis methods for developing maps for communities with similar flood risk. Providing effective oversight of flood insurance operations. In our October 2005 report, we said that FEMA faces a challenge in providing effective oversight of the 95 insurance companies and thousands of insurance agents and claims adjusters who are primarily responsible for the day-to-day process of selling and servicing flood insurance policies. To the extent possible, the NFIP is designed to pay operating expenses and flood insurance claims with premiums collected on flood insurance policies rather than with tax dollars. 
However, as we have reported, the program, by design, is not actuarially sound because Congress authorized subsidized insurance rates to be made available for policies covering some properties to encourage communities to join the program. As a result, the program does not collect sufficient premium income to build reserves to meet the long-term future expected flood losses. FEMA has statutory authority to borrow funds from the Treasury to keep the NFIP solvent. Until the 2004 hurricane season, FEMA had been generally successful in keeping the NFIP on sound financial footing, exercising its borrowing authority three times in the last decade when losses exceeded available fund balances. In each instance, FEMA repaid the funds with interest. According to FEMA officials, as of August 31, 2005, FEMA had outstanding borrowing of $225 million with cash on hand totaling $289 million. FEMA had substantially repaid the borrowing it had undertaken to pay losses incurred for the 2004 hurricane season that, until Hurricane Katrina struck, was the worst hurricane season on record for the NFIP. FEMA's current debt with the Treasury is almost entirely for payment of claims from Hurricanes Katrina and Rita and other flood events that occurred in 2005. Just as the destruction caused by the horrendous 2004 and 2005 hurricanes is a driving force for improving the NFIP today, devastating natural disasters in the 1960s were a primary reason for the national interest in creating a federal flood insurance program. In 1963 and 1964, hurricanes caused extensive damage in the South, and, in 1965, Hurricane Betsy struck the Gulf Coast and heavy flooding occurred on the upper Mississippi River. In studying insurance alternatives to disaster assistance for people suffering property losses in floods, a flood insurance feasibility study found that premium rates in certain flood-prone areas could be extremely high. 
As a result, the National Flood Insurance Act of 1968, which created the NFIP, mandated that existing buildings in flood-risk areas would receive subsidies on premiums because these structures were built before the flood risk was known and identified on flood insurance rate maps. Owners of structures built in flood-prone areas on or after the effective date of the first flood insurance rate maps in their areas or after December 31, 1974, would have to pay full actuarial rates. Because many repetitive loss properties were built before either December 31, 1974, or the effective date of the first flood insurance rate maps in their areas, they were eligible for subsidized premium rates under provisions of the National Flood Insurance Act of 1968. The provision of subsidized premiums encouraged communities to participate in the NFIP by adopting and agreeing to enforce state and community floodplain management regulations to reduce future flood damage. In April 2005, FEMA estimated that floodplain management regulations enforced by communities participating in the NFIP prevent over $1.1 billion in flood damage annually. However, the policy subsidies reduce premium income and add risk to the NFIP. In January 2006, FEMA estimated an annual shortfall in premium income of $750 million because of policy subsidies. FEMA estimated that phasing out subsidized rates for non-primary residences and nonresidential properties alone would affect about 400,000 properties currently insured by the NFIP. Some have questioned whether providing flood insurance for second homes in high-risk areas—such as barrier islands—encourages development in areas at high risk of flooding. In addition, some of the properties that had received the initial rate subsidy are subject to repetitive flood losses, placing added financial strain on the NFIP. 
In reauthorizing the NFIP in 2004, Congress noted that “repetitive-loss properties”—those that had resulted in two or more flood insurance claims payments of $1,000 or more over 10 years—constituted a significant drain on the resources of the NFIP. These repetitive loss properties are problematic not only because of their vulnerability to flooding but also because of the costs of repeatedly repairing flood damages. While these properties make up only about 1 percent of the properties insured under the NFIP, they account for 25 to 30 percent of all claims losses. At the time of our March 2004 report on repetitive loss properties, there were about 49,000 repetitive loss properties, representing about $4.6 billion in claims payments from 1978 until March 2004. As of March 2004, nearly half of all nationwide repetitive loss property insurance payments had been made in Louisiana, Texas, and Florida. According to a recent Congressional Research Service report, as of December 31, 2004, FEMA had identified 11,706 “severe repetitive loss” properties, defined as those with four or more claims or two or three losses that exceeded the insured value of the property. Of these 11,706 properties, almost half (49 percent) were in three states—3,208 (27 percent) in Louisiana, 1,573 (13 percent) in Texas, and 1,034 (9 percent) in New Jersey. A significant number of repetitive loss properties were affected by Hurricanes Katrina and Rita. According to NFIP statistical data through November 30, 2005, 4,835 repetitive loss properties, including 3,183 in Louisiana, had substantial damage from Hurricane Katrina. Two hundred and forty-three repetitive loss properties had substantial damage from Hurricane Rita. Of these properties, 213 were located in Louisiana and 30 were located in Texas. For over a decade, FEMA has pursued a variety of strategies to reduce the number of repetitive loss properties in the NFIP inventory. 
In a 2004 testimony, we noted that congressional proposals have been made to phase out coverage or begin charging full and actuarially based rates for repetitive loss property owners who refuse to accept FEMA’s offer to purchase or mitigate the effect of floods on these buildings. The 2004 Flood Insurance Reform Act created a 5-year pilot program to deal with repetitive-loss properties in the NFIP. In particular, the act authorized FEMA to provide financial assistance to participating states and communities to carry out mitigation activities or to purchase “severe repetitive loss properties.” During the pilot program, policyholders who refuse a mitigation or purchase offer that meets program requirements will be required to pay increased premium rates. Specifically, the premium rates for these policyholders would increase by 150 percent following their refusal and by another 150 percent following future claims of more than $1,500. However, the rates charged cannot exceed the applicable actuarial rate. Because of the financial drain that repetitive loss properties have posed for the program, it will be important in future studies of the NFIP to continue to analyze data on progress being made to reduce the inventory of subsidized NFIP properties, particularly those with repetitive losses; how the reduction of this inventory contributes to the financial stability of the program; and whether additional FEMA regulatory steps or congressional actions could contribute to the financial solvency of the NFIP, while meeting commitments made by the authorizing legislation. In 1973 and 1994, Congress enacted requirements for mandatory purchase of NFIP policies by some property owners in high-risk areas. From 1968 until the adoption of the Flood Disaster Protection Act of 1973, the purchase of flood insurance was voluntary. 
However, because voluntary participation in the NFIP was low and many flood victims did not have insurance to repair damages from floods in the early 1970s, the 1973 act required the mandatory purchase of flood insurance to cover some structures in special flood hazard areas of communities participating in the program. Homeowners with mortgages held by federally regulated lenders on property in communities identified by FEMA to be in special flood hazard areas are required to purchase flood insurance on their dwellings for the amount of their outstanding mortgage balance, up to a maximum of $250,000 in coverage for single-family homes. The owners of properties with no mortgages or properties with mortgages held by lenders who are not federally regulated were not, and still are not, required to buy flood insurance, even if the properties are in special flood hazard areas—the areas NFIP flood maps identify as having the highest risk of flooding. FEMA determines flood risk and actuarial ratings on properties through flood insurance rate mapping and other considerations, including the elevation of the lowest floor of the building, the type of building, the number of floors, and whether or not the building has a basement. FEMA flood maps designate areas for risk of flooding by zones. For example, areas subject to damage by waves and storm surge are in the zone with the highest expectation for flood loss. Between 1973 and 1994, many policyholders continued to find it easy to drop policies, even if the policies were required by lenders. Federal agency lenders and regulators did not appear to strongly enforce the mandatory flood insurance purchase requirements. According to a recent Congressional Research Service study, the Midwest flood of 1993 highlighted this problem and reinforced the idea that reforms were needed to compel lender compliance with the requirements of the 1973 act. 
In response, Congress passed the National Flood Insurance Reform Act of 1994. Under the 1994 law, if the property owner failed to get the required coverage, federally regulated lenders were required to purchase flood insurance on their behalf and then bill the property owners. Lenders became subject to civil monetary penalties for not enforcing the mandatory purchase requirement. In June 2002, we reported that the extent to which lenders were enforcing the mandatory purchase requirement was unknown. Officials involved with the flood insurance program developed contrasting viewpoints about whether lenders were complying with the flood insurance purchase requirements primarily because the officials used differing types of data to reach their conclusions. Federal bank regulators and lenders based their belief that lenders were generally complying with the NFIP’s purchase requirements on regulators’ examinations and reviews conducted to monitor and verify lender compliance. In contrast, FEMA officials believed that many lenders frequently were not complying with the requirements, an opinion based largely on noncompliance estimates computed from data on mortgages, flood zones, and insurance policies; limited studies on compliance; and anecdotal evidence indicating that insurance was not always in place where required. Neither side, however, was able to substantiate its claims with statistically sound data that provide a nationwide perspective on lender compliance. Under FEMA’s current Mandatory Purchase of Flood Insurance Guidelines, properties in a 100-year flood plain with a statistical 1 in 100 chance of flooding in any given year or a 26 percent chance of flooding during the period of a 30-year mortgage are designated to be in special flood hazard areas. 
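The flood-frequency terminology used in these guidelines compounds in a simple way: an annual exceedance probability p implies a chance of 1 - (1 - p)^n of at least one flood over an n-year mortgage. The sketch below is illustrative arithmetic only, not FEMA's actuarial rate-setting methodology.

```python
# Chance of at least one flood during a mortgage term, compounded from the
# annual exceedance probability. Illustrative arithmetic only; this is not
# FEMA's actuarial rate-setting methodology.

def term_flood_probability(annual_prob: float, years: int) -> float:
    """Probability of one or more floods in `years` independent years."""
    return 1 - (1 - annual_prob) ** years

# 100-year flood plain: 1-in-100 chance in any given year
print(f"{term_flood_probability(1 / 100, 30):.0%}")  # about 26% over a 30-year mortgage

# 500-year flood plain: 1-in-500 chance in any given year
print(f"{term_flood_probability(1 / 500, 30):.0%}")  # about 6% over a 30-year mortgage
```

The same compounding explains the roughly 6 percent 30-year exposure of structures in the 500-year flood plain, the zone at issue in proposals to expand the mandatory purchase requirement.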
Within the boundaries of these areas, homeowners with mortgages from federally regulated lenders are required to purchase flood insurance for an amount equal to their outstanding mortgage balance, up to the maximum policy limit of $250,000 for a single-family home. To expand the NFIP policyholder base, there has been some congressional interest in the feasibility of extending the current mandatory purchase requirement to properties in a 500-year flood plain, which statistically have a 1 in 500 chance of flooding in any given year. FEMA has estimated that expanding NFIP mandatory purchase requirements to include structures in the 500-year flood plain would generate up to $700 million in additional premiums. The current annual premium for a structure in the 500-year flood plain is about $280. However, a FEMA official cautioned that the rate of compliance is an important component of any estimate of the amount of increase in NFIP premiums that would result from expanding mandatory purchase requirements. It would be difficult to effectively assess the impacts, effectiveness, and feasibility of such a change in the structure of the NFIP. We share FEMA’s concerns related to enforcing and assessing compliance. We also believe that it would be difficult to assess the impacts an expansion in the mandatory purchase requirements would have upon a range of stakeholders, including not only home and business owners but also lenders, mortgage servicers, builders, and local governments, among others. We also recognize that it would be difficult and costly to determine the additional geographic area that would be encompassed in an expanded special flood hazard area. Current flood mapping focuses on the boundaries of the 100-year flood plain, and FEMA has not estimated the additional cost and time required to complete detailed, digital maps of areas outside of the current 100-year special flood hazard area. In recent years, the number of NFIP policyholders has not grown substantially. 
FEMA officials reported a pattern in which, at the start of each hurricane season, the number of policies in force was the same as or lower than in previous years. During the hurricane season, the number of policies in force would increase slightly and then level off or decline again at the end of the season. FEMA has efforts underway to increase NFIP participation by improving the quality of information that is available on the NFIP and flood risks and by marketing to retain policyholders currently in the program. In October 2003, FEMA let a contract for a new integrated marketing campaign called “FloodSmart.” Marketing elements being used include direct mail, national television commercials, print advertising, and websites designed for consumers and insurance agents. According to FEMA officials, in a little more than 2 years since the contract began, net policy growth was a little more than 7 percent and policy retention improved from 88 percent to 91 percent. Accurate flood maps that identify the areas at greatest risk of flooding are the foundation of the NFIP. Flood maps must be periodically updated to assess and map changes in the boundaries of floodplains that result from community growth, development, erosion, and other factors that affect the boundaries of areas at risk of flooding. FEMA has embarked on a multiyear effort to update the nation’s flood maps at a cost in excess of $1 billion. The maps are principally used by (1) the approximately 20,000 communities participating in the NFIP to adopt and enforce the program’s minimum building standards for new construction within the maps’ identified flood plains; (2) FEMA to develop accurate flood insurance policy rates based on flood risk; and (3) federally regulated mortgage lenders to identify those property owners who are statutorily required to purchase federal flood insurance. 
FEMA expects that by producing more accurate and accessible digital flood maps, the NFIP and the nation will benefit in three ways. First, communities can use more accurate digital maps to reduce flood risk within floodplains by more effectively regulating development through zoning and building standards. Second, accurate digital maps available on the Internet will facilitate the identification of property owners who are statutorily required to obtain or who would be best served by obtaining flood insurance. Third, accurate and precise data will help national, state, and local officials to accurately locate infrastructure and transportation systems (e.g., power plants, sewage plants, railroads, bridges, and ports) to help mitigate and manage risk for multiple hazards, both natural and man-made. Success in updating the nation’s flood maps requires clear standards for map development; the coordinated efforts and shared resources of federal, state, and local governments; and the involvement of key stakeholders who will be expected to use the maps. In developing the new data system to update flood maps across the nation, FEMA’s intent is to develop and incorporate flood risk data that are of a level of specificity and accuracy commensurate with communities’ relative flood risks. Not every community may need the same level of specificity and detail in its new flood maps. However, it is important that FEMA establish standards for the appropriate data and level of analysis required to develop maps for all communities of a similar risk level. In its November 2004 Multi-Year Flood Hazard Identification Plan, FEMA discussed the varying types of data collection and analysis techniques the agency plans to use to develop flood hazard data in order to relate the level of study and level of risk for each of 3,146 counties. FEMA has developed targets for resource contributions (in-kind as well as dollars) by its state and local partners in updating the nation’s flood maps. 
At the same time, it has developed plans for reaching out to and including the input of communities and key stakeholders in the development of the new maps. These expanded outreach efforts reflect FEMA’s understanding that it is dependent upon others to achieve the benefits of map modernization. As I have discussed, it is important when considering any expansion of mandatory purchase requirements for NFIP policies to understand that implementation would require the development of additional detailed flood maps. According to a FEMA official, digital mapping of areas outside of special flood hazard areas is currently being considered on only a selective basis for reasons such as potential changes in risk level or population growth. To meet its monitoring and oversight responsibilities, FEMA is to conduct periodic operational reviews of the 95 private insurance companies that participate in the NFIP. In addition, FEMA’s program contractor is to check the accuracy of claims settlements by doing quality assurance reinspections of a sample of claims adjustments for every flood event. For operational reviews, FEMA examiners are to do a thorough review of the companies’ NFIP underwriting and claims settlement processes and internal controls, including checking a sample of claims and underwriting files to determine, for example, whether a violation of policy has occurred, whether an incorrect payment has been made, and whether files contain all required documentation. Separately, FEMA’s program contractor is responsible for conducting quality assurance reinspections of a sample of claims adjustments for specific flood events in order to identify, for example, whether an insurer allowed an uncovered expense or missed a covered expense in the original adjustment. According to FEMA, these monitoring and oversight mechanisms will be in place to assess the implementation of the NFIP after Hurricanes Katrina and Rita. 
In addition, FEMA plans to do additional oversight of claims for these storms that were handled using expedited procedures. To try to assist NFIP policyholders despite obstacles in communicating with claimants, reaching flooded properties, and locating records, FEMA allowed expedited claims processing procedures that were unique to these storms. In some circumstances, claims were settled without site visits by certified flood claims adjusters. For flooding from Lake Pontchartrain caused by the failure of levees in the New Orleans area, FEMA allowed the use of flood depth data to identify structures that had been severely affected. If data on the depth and duration of the water in the building showed that it was likely that covered damage exceeded policy limits, claims could be settled without a site visit by a claims adjuster. Similarly, losses in other areas of Louisiana and Mississippi were handled without a site visit where structures were washed off their foundations by flood waters and square-foot measurements of the dwellings were known. The operational reviews and follow-up visits to insurance companies that we analyzed during 2005 followed FEMA’s internal control procedures for identifying and resolving specific problems that may occur in individual insurance companies’ processes for selling and renewing NFIP policies and adjusting claims. According to information provided by FEMA, operational reviews completed between 2000 and August 2005 were done at a pace that allows for a review of each participating insurance company at least once every 3 years, as FEMA procedures require. In addition, the processes FEMA had in place for operational reviews and quality assurance reinspections of claims adjustments met our internal control standard for monitoring federal programs. 
However, the samples of claims files that FEMA selected for operational reviews and the samples of claims adjustments that its program contractor selected for reinspections were not randomly chosen and were not statistically representative of all claims. We found that the selection processes used were, instead, based upon judgmental criteria including, among other items, the size and location of loss and the complexity of claims. As a result of limitations in the sampling processes, FEMA cannot project the results of these monitoring and oversight activities to determine the overall accuracy of claims settled for specific flood events or assess the overall performance of insurance companies and their adjusters in fulfilling their responsibilities for the NFIP—actions necessary for FEMA to meet our internal control standard that it have reasonable assurance that program objectives are being achieved and that its operations are effective and efficient. To strengthen and improve FEMA’s monitoring and oversight of the NFIP, we recommended that FEMA use a methodologically valid approach for sampling files selected for operational reviews and quality assurance claims reinspections. We also plan to follow up on the results of the monitoring and oversight efforts for claims processed using expedited processes in our review of the implementation of the NFIP after Hurricanes Katrina and Rita. FEMA did not agree with our recommendation. It noted that its current methodology of selecting a sample based on knowledge of the population to be sampled was more appropriate for identifying problems than the statistically random probability sample we recommended. 
Although FEMA’s current nonprobability sampling strategy may provide an opportunity to focus on particular areas of risk, it does not provide management with the information needed to assess the overall performance of private insurance companies and adjusters participating in the program—information that FEMA needs to have reasonable assurance that program objectives are being achieved. As of January 2006, FEMA had not yet fully implemented provisions of the Flood Insurance Reform Act of 2004. Among other things, the act requires FEMA to provide policyholders a flood insurance claims handbook; to establish a regulatory appeals process for claimants; and to establish minimum education and training requirements for insurance agents who sell NFIP policies. The 6-month statutory deadline for implementing these changes was December 30, 2004. In September 2005, FEMA posted a flood insurance claims handbook on its Web site. The handbook contains information on anticipating, filing, and appealing a claim through an informal appeals process, which FEMA intends to use pending the establishment of a regulatory appeals process. However, because the handbook does not contain information regarding the appeals process that FEMA is statutorily required to establish through regulation, it does not yet meet statutory requirements. With respect to this appeals process, FEMA has not stated how long rulemaking might take to establish the process by regulation or how the process might work, such as filing requirements, time frames for considering appeals, and the composition of an appeals board. In January 2006, the acting director of FEMA’s Mitigation Division said that FEMA had submitted a draft rule to DHS. However, milestones for future actions were not established. Claimants who wish to appeal decisions made on their claims for damage from Hurricanes Katrina and Rita can follow a process described by FEMA as an “informal” appeals process. 
As outlined in the Flood Insurance Claims Handbook, to appeal, policyholders are to submit statements of their concerns and supporting documentation to the director of claims in FEMA’s Mitigation Division, Risk Insurance Branch. With respect to minimum training and education requirements for insurance agents who sell NFIP policies, FEMA published a Federal Register notice on September 1, 2005, which included an outline of training course materials. In the notice, FEMA stated that, rather than establish separate and perhaps duplicative requirements from those that may already be in place in the states, it had chosen to work with the states to implement the NFIP requirements through already established state licensing schemes for insurance agents. The notice did not specify how or when states were to begin implementing the NFIP training and education requirements. Thus, it is too early to tell the extent to which insurance agents will meet FEMA’s minimum standards. FEMA officials said that, because changes to the program could have far-reaching and significant effects on policyholders and private-sector stakeholders upon whom FEMA relies to implement the program, the agency is taking a measured approach to addressing the changes mandated by Congress. Nonetheless, without plans with milestones for completing its efforts to address the provisions of the act, FEMA cannot hold responsible officials accountable or ensure that statutorily required improvements are in place to assist victims of future flood events. We recommended that FEMA develop documented plans with milestones for implementing requirements of the Flood Insurance Reform Act of 2004 to provide policyholders a flood insurance claims handbook that meets statutory requirements, to establish a regulatory appeals process, and to ensure that flood insurance agents meet minimum NFIP education and training requirements. We will continue to monitor progress being made. 
FEMA disagreed with our recommendation and characterization of the extent to which FEMA has met provisions of the Flood Insurance Reform Act of 2004. We believe that our description of those efforts and our recommendations with regard to implementing the act’s provisions are valid. For example, although FEMA commented that it was offering claimants an informal appeals process in its flood insurance claims handbook, it must establish regulations for this process, and those are not yet complete. The most immediate challenge for the NFIP is processing the flood insurance claims resulting from Hurricanes Katrina and Rita. Progress is being made in that area. In December 2005, according to FEMA, more than 70 percent of Hurricane Katrina claims had been paid, totaling more than $11 billion, some of them using expedited procedures to assist policyholders who were displaced from their homes. In the longer term, Congress and the NFIP face a complex challenge in assessing potential changes to the program that would improve its financial stability, increase participation in the program by property owners in areas at risk of flooding, reduce the number of repetitive loss properties in the program, and maintain current and accurate flood plain maps. These issues are complex, interrelated, and are likely to involve trade-offs. For example, increasing premiums to better reflect risk may reduce voluntary participation in the program or encourage those who are required to purchase flood insurance to limit their coverage to the minimum required amount (i.e., the amount of their outstanding mortgage balance). This in turn can increase taxpayer exposure for disaster assistance resulting from flooding. There is no “silver bullet” for improving the current structure and operations of the NFIP. It will require sound data and analysis and the cooperation and participation of many stakeholders. Mr. Chairman and Members of the Committee, this concludes my prepared statement. 
I would be pleased to respond to any questions you and the Committee Members may have. Contact point for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Norman Rabkin at (202) 512-8777 or rabkinn@gao.gov, or William O. Jenkins Jr. at (202) 512-8757 or jenkinswo@gao.gov. This statement was prepared under the direction of Christopher Keisling. Key contributors were John Bagnulo, Christine Davis, and Deborah Knorr. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Flood Insurance Program (NFIP), established in 1968, provides property owners with some insurance coverage for flood damage. The Federal Emergency Management Agency (FEMA) within the Department of Homeland Security is responsible for managing the NFIP. The unprecedented magnitude and severity of the flood losses from hurricanes in 2005 challenged the NFIP to process a record number of claims. These storms also illustrated the extent to which the federal government has exposure for claims coverage in catastrophic loss years. FEMA estimates that Hurricanes Katrina, Rita, and Wilma will generate claims and payments of about $23 billion--far surpassing the total claims paid in the entire history of the NFIP. This testimony provides information from past and ongoing GAO work on issues including: (1) NFIP's financial structure; (2) the impact of properties with repetitive flood losses on NFIP's resources; (3) proposals to increase the number of policies in force; and (4) the status of past GAO recommendations. The NFIP, by design, is not actuarially sound. The program does not collect sufficient premium income to build reserves to meet long-term future expected flood losses. In November 2005, FEMA's authority to borrow from the Treasury was increased from $1.5 billion to $18.5 billion through fiscal year 2008 to help pay claims from the 2005 hurricane season. It is highly unlikely that the NFIP as presently funded could generate sufficient revenues to repay a debt of this size. One reason the NFIP is not actuarially sound is because a number of its policies on dwellings that were built before flood plain management regulations were established in their communities are subsidized and pay premiums of 35-40 percent of the true risk premium. In January 2006, FEMA estimated an annual shortfall in premium income of $750 million because of such policy subsidies. 
Some subsidized properties, called repetitive loss properties, also suffer repetitive flood losses, which accounted for about $4.6 billion in claims payments from 1978 to March 2004. We need to analyze the progress made to reduce the inventory of subsidized repetitive-loss properties and determine whether additional regulatory or congressional action is needed. A challenge for FEMA is to expand the NFIP policyholder base by enforcing mandatory purchase requirements and encouraging voluntary purchase by homeowners who live in areas at lower risk of flooding. The extent of noncompliance with current mandatory purchase requirements for property owners in special flood hazard areas is unknown. There has been some congressional interest in the feasibility of expanding mandatory purchase requirements beyond the current special high-risk areas; however, there are a number of difficulties in assessing the impacts, effectiveness, and feasibility of such a change in the structure of the NFIP, as well as concerns related to enforcing and assessing compliance. For example, more precise flood mapping of areas outside the current high-risk areas would be required to accurately identify affected property owners. FEMA and its private insurance partners also have efforts underway to increase NFIP participation by marketing policies in areas where purchase is not mandatory. FEMA has not yet fully implemented provisions of the Flood Insurance Reform Act of 2004 requiring the agency to develop new materials to explain coverage and the claims process to policyholders, establish an appeals process for claimants, and provide insurance agent education and training requirements. The statutory deadline for implementing these changes was December 30, 2004, and, as of January 2006, FEMA had not developed documented plans with milestones for meeting the provisions of the act, as recommended by GAO.
N-9 was developed as a contraceptive and is the only spermicide available in the United States. It is found in a variety of over-the-counter vaginal contraceptive products—including creams, foams, gels, and suppositories—and on N-9 condoms. Vaginal contraceptive products that contain N-9 have been sold over the counter in the United States for almost 50 years. N-9 condoms have been available over the counter in the United States since the early 1980s. Three federal agencies within HHS—CDC, NIH, and FDA—have responsibilities that affect the public’s use of N-9 contraceptive products. CDC is responsible for conducting and reviewing research related to public health issues and disseminating this information to state public health agencies, medical professionals, and the public. CDC shares information, for example, on diseases such as HIV and uses a variety of means to do this, including publications such as the Morbidity and Mortality Weekly Report (MMWR) and treatment guidelines. It also provides information through its Web site and in letters issued directly to state health departments and other public health professionals. NIH, which comprises 27 separate institutes and centers, conducts research in its own laboratories and funds research in universities, medical schools, hospitals, and other research institutions. Some of this research investigates drugs used to prevent and treat various diseases, including HIV. In order to provide information to the public on various health issues, NIH may post the results of its research on its Web site and sometimes publishes summary reports on various research topics. FDA is responsible, among other things, for regulating the manufacture and sale of drugs and medical devices sold in the United States. FDA also regulates the labeling information provided by manufacturers of drugs and medical devices. 
FDA also seeks to educate the public about the products it regulates and uses a variety of means, including pamphlets, Web site information, and the FDA Consumer magazine to do this. Within FDA, two centers are involved in the review of N-9 contraceptive products—the Center for Drug Evaluation and Research (CDER), which oversees vaginal contraceptive drug products, and the Center for Devices and Radiological Health (CDRH), which oversees condoms, including N-9 condoms. FDA reviews the active ingredients in specific categories of drugs that were sold over-the-counter in the United States prior to 1975 through a process known as the “monograph process.” Under the monograph process, FDA establishes the conditions under which specific categories of drugs, rather than specific products, are generally recognized as safe and effective and not misbranded. Examples of categories of drugs subject to the monograph process include antacids and certain cold and cough remedies. When FDA completes the monograph process, it issues a final monograph that describes labeling indications, warnings, and directions for use, among other things. Prior to the issuance of a final monograph, FDA policies allow over-the-counter drugs to stay on the market. However, FDA may pursue regulatory actions against these drugs—such as requiring labeling changes—if the agency determines that the failure to act poses a potential health hazard to consumers. Since N-9 was available over-the-counter prior to 1975, it is the subject of a monograph review for a category of drugs called vaginal contraceptive drug products. As of March 2005, FDA had not issued a final monograph for this category of drugs. Although some condoms on the market are lubricated with N-9, these N-9 condoms are not subject to the monograph review process for vaginal contraceptive products because FDA regulates these products as medical devices. 
Federal agencies have undertaken a variety of efforts to research the safety and effectiveness of N-9 as a microbicide, including funding, conducting, or reviewing studies. This research continued largely until 2000, when the preliminary results of a major clinical study prompted CDC and NIH to halt further research on N-9 as a microbicide because of safety concerns. FDA—as part of its regulation of vaginal contraceptive products—continued to review research on the safety of N-9 and proposed new warning labels for vaginal contraceptive products in 2003. Manufacturers reviewed the safety of N-9 for the purpose of commenting on the proposed warning labels. During the 1990s, three federal agencies—CDC, NIH, and FDA—funded, conducted, or reviewed medical research to determine whether N-9 was safe and effective as a microbicide. CDC, as part of its public health efforts to prevent the spread of HIV, conducted and funded research involving N-9. For example, in 1996, CDC and others began a major 4-year study on the effectiveness of N-9 vaginal contraceptive products in preventing the transmission of STDs, including HIV, among female sex workers located in four countries. In addition, CDC compiled a bibliography of research on potential microbicides conducted through September 1996. Containing abstracts of over 55 safety and effectiveness studies and other reviews of N-9 vaginal contraceptive products and N-9 condoms, the bibliography was intended as a resource for clinicians, researchers, and public health specialists interested in microbicide research. Like CDC, NIH conducted and funded research during the 1990s on N-9’s safety and effectiveness as a microbicide. Within NIH, two institutes were primarily responsible for most of its N-9 research—the National Institute of Allergy and Infectious Diseases and the National Institute of Child Health and Human Development.
NIH’s research ranged from laboratory studies of N-9’s effect on animal tissue to randomized clinical trials that measured the effect of various N-9 vaginal contraceptive products in preventing HIV transmission among women. Although FDA did not conduct or fund research on N-9, the agency reviewed research on N-9 vaginal contraceptive products as part of its monograph process. The agency convened an advisory committee meeting in 1996 to review available research on the safety and effectiveness of N-9 as both a contraceptive and a microbicide. This meeting included presentations on a wide variety of published and unpublished research, including laboratory studies of N-9 and clinical studies on N-9 vaginal contraceptive products. The meeting also included discussion about the implications of this research and identified guidelines for the design of future studies to address concerns about dosage and formulation differences in the research available at the time, among other things. The results of the research on N-9 that federal agencies conducted, funded, and reviewed during the 1990s were inconsistent. For example, among the studies compiled in CDC’s bibliography on microbicide research, some found that the use of N-9 vaginal contraceptives reduced the incidence of HIV as well as other STDs, while other studies indicated that frequent use of N-9 vaginal contraceptives may have irritated subjects’ vaginal tissue, which may have made subjects more susceptible to HIV infection. Throughout the 1990s, various reviews in clinical journals also characterized the results of research on N-9 as inconsistent. For example, a 1995 review in the journal AIDS noted the existence of “substantially different opinions” about the safety and effectiveness of N-9 when used for HIV prevention. Similarly, a 1999 commentary in the American Journal of Public Health noted that epidemiologic studies on N-9 were conflicting.
During the 1990s, reviewers of studies involving N-9 observed that several factors may have accounted for the inconsistency of the research results. For example, studies varied in terms of the dosage of N-9 and the chemical formulation of the contraceptive product used. Additionally, the different populations studied may have affected the outcomes of the research. For example, some studies were based on the experience of sex workers, who used N-9 vaginal contraceptive products with relatively higher frequency than the populations in other studies. Studies also varied in their sample sizes, further limiting their comparability. In 2000, after the preliminary results of a major clinical study—known as the COL-1492 study—were reported, CDC and NIH stopped conducting and funding research on N-9 as a microbicide out of concern for participants’ safety. Compared to the results of earlier studies, the preliminary results of the COL-1492 study suggested more strongly that N-9 did not prevent HIV infection and, in addition, that N-9 may increase the risk of infection among frequent users. The COL-1492 study compared the use of an N-9 vaginal contraceptive gel called COL-1492 to a vaginal moisturizer without N-9 among 892 sex workers in Benin, Cote d’Ivoire, South Africa, and Thailand. The preliminary results of this study indicated that the incidence of HIV infection among users of the N-9 vaginal contraceptive gel was 48 percent higher than among users of the moisturizer without N-9. Moreover, the study showed there was little effect of N-9 vaginal contraceptive use on the incidence of certain other STD infections, such as gonorrhea and chlamydia. After the preliminary results of the COL-1492 study became available, officials at CDC and NIH decided to discontinue researching N-9 as a possible microbicide for HIV.
Although federal agencies stopped conducting and funding research on N-9 as a potential microbicide in 2000, FDA continued to review research on the safety of N-9 as part of the agency’s regulation of vaginal contraceptive products under the monograph process. During this review, FDA considered, among other things, the recommendations of two key public health reports, published in 2002 by CDC and by the World Health Organization (WHO) in collaboration with the CONRAD Program. The reports recommended that N-9 not be used to prevent HIV transmission and warned that frequent use of N-9 vaginal contraceptive products may cause genital lesions, which may increase the risk of HIV infection in persons at high risk for HIV. The reports also recommended that N-9 condoms not be promoted because there was no published scientific evidence that N-9-lubricated condoms provide any additional protection against STDs compared with other condoms. The reports also recommended that N-9 contraceptive products, including both N-9 condoms and vaginal contraceptive products, should not be used during anal intercourse. The WHO/CONRAD report concluded that for women at low risk for HIV infection, the use of N-9 vaginal contraceptives remained a viable option. Based on its review of this and other information, including the published results of the COL-1492 study, FDA determined that the use of N-9 vaginal contraceptive products may pose a potential health hazard to consumers and proposed new warning labels for N-9 vaginal contraceptive products in January 2003. Specifically, FDA proposed adding warning labels that indicate that vaginal contraceptive products with N-9 do not protect against HIV or other STDs and that frequent use, such as more than once a day, of N-9 can increase vaginal irritation, which may increase the risk of contracting HIV from infected partners. The proposed warnings also indicate that the labeled products are for vaginal use only. 
As of March 2005, FDA was in the process of finalizing the rule for new warning labels for vaginal contraceptive products containing N-9. According to FDA officials, a draft of the final rule had been completed, and the rule had begun the clearance process within HHS. Officials told us that they expected the clearance process to be completed by September 2005, after which the final rule would be published. According to FDA officials, the rule-making process used to establish new warning labels typically takes more than 2 years. As part of the process for establishing new warning labels for N-9 vaginal contraceptive products, FDA reviewed more than 150 comments submitted in response to the proposed warning labels. These comments ranged from concerns that the proposed language was not strong or specific enough to comments indicating that FDA had gone too far in its proposed warning. FDA officials also stated that 10 specific issues brought up in the public comments on the proposed warning labels required extensive review, including comments that the labels should specifically warn against using vaginal contraceptive products for anal intercourse and concerns that the proposed warning labels might discourage women who are at low risk for HIV from using N-9 as a contraceptive. FDA officials told us they also plan to issue guidance and proposed new warning labels for condoms—including warnings for N-9 condoms. They said they expect a draft to be issued for public comment in 2005. These officials noted that they considered new warning labels for N-9 condoms in the context of a larger initiative started in 2001 to review condom labeling for medical accuracy with respect to the overall effectiveness of condoms against STDs. FDA officials told us that officials from CDRH and CDER collaborated to ensure that the new labeling proposals for N-9 condoms and N-9 vaginal contraceptive products will be consistent. 
As of March 2005, an HHS official told us that HHS had completed its review of the draft guidance and labels. After this review, FDA officials told us the draft would be sent to the Office of Management and Budget for review before being issued for public comment. FDA officials said that FDA expects to be able to issue the draft guidance and condom warning labels by May 2005. Two manufacturers of N-9 contraceptive products that we interviewed have researched the safety of N-9. Specifically, they reviewed the research literature on the safety of N-9 in order to prepare comments in response to the language of FDA’s proposed warning labels for vaginal contraceptive products. For example, one manufacturer concluded that FDA’s proposed labeling, which implied a link between the use of N-9 vaginal contraceptive products and an increased risk of HIV transmission, was not sufficiently supported by the scientific literature. However, no manufacturers we interviewed have conducted research on N-9’s effectiveness as a microbicide. Manufacturers would only be required to conduct such research if they were to seek approval from FDA to use N-9 vaginal contraceptives for a new indication—such as HIV prevention. However, FDA officials reported that no manufacturer had sought approval for a new indication for N-9. The information CDC and FDA provided the public about the use of N-9 as a microbicide has been, at times, inconsistent. In the early 1990s, CDC cautioned that there was insufficient information to conclude that N-9 may prevent HIV transmission. By 1998, in response to new research, CDC informed the public that N-9 vaginal contraceptive products did not prevent HIV. During the same period, FDA also cautioned that N-9 had not been proven to prevent HIV transmission, but in 1999, a brochure on its Web site stated that N-9, along with a condom, may be used to prevent HIV transmission.
By 2000, CDC stated that N-9 may actually increase the risk of contracting HIV when used frequently. FDA, in contrast, did not revise the brochure on its Web site that stated some experts believe N-9 may prevent HIV and suggested using N-9 along with a condom. Some manufacturers we interviewed have also taken steps to inform the public about N-9 and HIV, while others have not. (See app. I for a timeline of selected events and publications related to N-9’s potential use as a microbicide.) In the early 1990s, based on the information that was available at the time, CDC cautioned that there was insufficient information to conclude that N-9 may prevent HIV transmission. According to CDC’s 1993 STD treatment guidelines, “protection of women against HIV infection should not be assumed from the use of vaginal spermicides, vaginal sponges, or diaphragms.” This document also stated, “No data exist to indicate that condoms lubricated with spermicides are more effective than other lubricated condoms in protecting against the transmission of HIV infection….” This document recommended the use of condoms, with or without a spermicide in order to protect against STDs, including HIV. Similarly, an article in a 1993 issue of CDC’s MMWR cautioned that there was no evidence that N-9 prevents HIV transmission. According to this issue of MMWR, “No reports indicate that nonoxynol-9 used alone without condoms is effective for preventing sexual transmission of HIV.” This document also repeated the recommendation to use condoms with or without a spermicide. By 1998, in response to new research, CDC informed the public that N-9 should not be used as a microbicide because it does not protect against HIV and revised its STD treatment guidelines to state that “vaginal spermicides offer no protection against HIV infection, and spermicides are not recommended for HIV prevention.” At this time, CDC did not revise its recommendation to use condoms with or without spermicide. 
FDA’s educational publications during the 1990s also cautioned that N-9 had not been proven to prevent HIV transmission, but in some cases, the agency suggested that N-9, along with a condom, may be used to prevent HIV transmission. For example, a 1990 article published in the magazine FDA Consumer stated, “Although it has not been scientifically proven, it is possible that Nonoxynol-9 may reduce the risk of transmission of the AIDS virus during intercourse as well. Using a spermicide along with a latex condom is therefore advisable, and is an added precaution in case the condom breaks…. Some experts think that even if a condom with spermicide is used, additional spermicide in the form of a jelly, cream or foam should be added.” In 1998, an FDA Consumer article stated that N-9 may reduce the risk of transmitting certain STDs, but cautioned that it has not been proven to prevent sexual transmission of HIV. Another 1998 FDA Consumer article stated that spermicides alone do not give adequate protection against HIV. However, in 1999, FDA indicated to the public that N-9 may protect them against HIV transmission. An FDA brochure posted to the Web site and titled Condoms and Sexually Transmitted Diseases…Especially AIDS stated, “Some experts believe nonoxynol-9 may kill the AIDS virus during intercourse, too. So you might want to use a spermicide along with a latex condom as an added precaution….” In response to the preliminary results of the COL-1492 study that were released at the 2000 International AIDS Conference, CDC revised its earlier position on N-9. CDC had previously cautioned that N-9 used alone without a condom offered no protection against HIV infection and was not recommended for HIV prevention. However, by 2000 CDC’s educational publications had included the statement that N-9 may increase the risk of transmission when used frequently. 
In an August 2000 letter to health care providers and public health personnel, CDC reported that the preliminary results of the COL-1492 study demonstrated that N-9 did not protect against HIV infection and may have caused more transmission. This letter also stated that N-9 should not be recommended as an effective means of HIV prevention and that the use of N-9 for HIV prevention may be harmful to certain users. This warning was also published in an August 2000 issue of MMWR. Similarly, in 2002 when CDC revised its STD treatment guidelines, it included information indicating that spermicides containing N-9 were not effective in preventing HIV infection and that frequent use had been associated with genital lesions, which may be associated with an increased risk of HIV transmission. These revised STD treatment guidelines further stated that condoms lubricated with spermicides are no more effective than other lubricated condoms in preventing HIV transmission, and also stated that “purchase of any additional condoms lubricated with the spermicide N-9 is not recommended.” This information also appeared in an article in a May 2002 issue of MMWR. While CDC was informing the public that N-9 was not effective in preventing HIV and that frequent use of N-9 may increase the risk of HIV transmission, the public would have obtained different information from FDA. An FDA official told us that the agency has not disseminated any new educational materials related to N-9 and HIV transmission since 2000. However, FDA left its brochure, which stated that some experts believe that N-9 may prevent HIV transmission, on its Web site until this information was deleted in September 2003, when FDA officials realized the information in the brochure on the Web site was inconsistent with the proposed warning labels for N-9 vaginal contraceptive products. According to one FDA official, documents on the agency’s Web site were updated in an “ad hoc” manner, rather than through an official process.
The three largest condom manufacturers have taken steps to inform the public about N-9 and HIV. In particular, one condom manufacturer has taken multiple steps to inform its consumers that N-9 does not prevent HIV transmission and may increase some users’ risk of contracting HIV. This large manufacturer of condoms has added warning labels to N-9 condom packaging that indicate that N-9 is not effective in protecting against HIV. This manufacturer has also published pamphlets and used similar language on its Web site to explain to consumers the risks associated with N-9. In addition, two other large manufacturers of condoms added warnings to their Web sites about the use of N-9. In contrast, officials from major manufacturers of vaginal contraceptive products that we interviewed told us they have not disseminated such information. One of these manufacturers reported that its review of research on N-9 suggested that the link between the use of N-9 and an increased risk of HIV infection was speculation. In recent years, there have been several changes in the production, distribution, and promotion of N-9 condoms. In January 2004, the condom manufacturer SSL International announced that it was halting production of its Durex brand condoms that are lubricated with N-9 because of a decrease in sales to public health agencies and because of an anticipated decrease in retail sales. SSL International representatives attributed this decrease in sales to safety concerns raised by the 2002 release of the WHO/CONRAD report. Another large manufacturer of condoms reported that the percentage of N-9 condoms sold on the retail market declined from 2000 to 2003. Like SSL International, PPFA and a leading distributor—Mayer Laboratories—have also stopped manufacturing and distributing N-9 condoms. 
A representative from PPFA told us that the organization stopped manufacturing N-9 condoms in June 2002 because of safety concerns based on published scientific studies indicating that N-9 does not protect against HIV and that frequent N-9 use may actually increase HIV transmission. In addition, a representative from PPFA stated that its decision to halt production of N-9 condoms was influenced by the release of the conclusions of the WHO/CONRAD report and the outcome of a meeting with public health entities organized by the Global Campaign for Microbicides. Another public health organization, the Gay Men’s Health Clinic in New York, has also begun to recommend that clients not use N-9 condoms. As of early 2003, a distributor, Mayer Laboratories, had stopped distributing N-9 condoms. Information from Mayer Laboratories stated that this decision was based on a concern about the safety of N-9 condoms. CDC’s and NIH’s efforts to research N-9’s potential use as a microbicide ended in 2000, when the preliminary results of a major clinical trial indicated that N-9 may actually increase the risk of contracting HIV. CDC has warned that N-9 may increase the risk of HIV transmission when used frequently, and some manufacturers of N-9 condoms have taken steps to either add their own warning labels or remove their N-9 condoms from the market, while other manufacturers have not taken such steps. FDA has proposed requiring new warning labels that indicate that N-9 vaginal contraceptive products do not protect against HIV or other STDs and that frequent use, such as more than once a day, may increase the risk of contracting HIV. FDA is also developing proposed warning labels for N-9 condoms. While FDA expects to issue the final rule for the new warning labels for vaginal contraceptive products by September 2005, it has not yet issued proposed warning labels for N-9 condoms, and it has not indicated a target date to issue the final warning labels for N-9 condoms.
Since FDA is still in the process of completing warning label changes for N-9 vaginal contraceptive products and condoms, the public may be left in doubt about the appropriate uses of these products until FDA finalizes these warnings. Further, the public may be at risk if the products are used inappropriately. HHS provided written comments on a draft of this report. (See app. II.) In its written comments, HHS stated that the final sentence in the draft report—which said the public may be at risk until FDA finalizes the warning labels for N-9 vaginal contraceptive products and N-9 condoms—may unintentionally undermine efforts to inform the public of the protection provided by condoms. HHS suggested we modify this to say that consumers may be left in doubt about the appropriate uses of these products. We have revised the conclusion to acknowledge that until FDA finalizes its warning labels, consumers may be left in doubt about the appropriate uses of these products. However, the conclusion also states that the public may be at risk if the products are used inappropriately. In its written comments, HHS stated that the draft did not indicate that FDA had never permitted condom labeling to claim that N-9 provides any additional protection against HIV or other STDs. To ensure clarity on this issue, we have added this statement to the report. HHS’s written comments also stated that it is important to make clear that the barrier features of condoms provide the primary protection against STDs and the primary contraceptive protection. While this is an important fact in educating consumers about methods to protect themselves against STDs, the objectives of this report were focused on N-9 and its potential as a microbicide. HHS’s written comments also stated that FDA’s primary means of public health communication is through product labeling oversight and that FDA has, on occasion, provided supplementary information through consumer outreach efforts.
The draft report noted the role FDA has in labeling oversight and described FDA’s proposed warning labels for N-9 vaginal contraceptive products and its efforts to develop proposed warning labels for N-9 condoms. The draft report also described the information FDA provided to the public through a brochure that it posted to its Web site and through FDA Consumer magazine articles. HHS’s written comments also stated that the supplementary statements FDA provided to the public through consumer outreach efforts always acknowledged the scientific uncertainty concerning the effectiveness of N-9 as a protection against STDs. Examples of FDA’s acknowledgement of scientific uncertainty were provided in the draft report. HHS also commented that the timeline in appendix I should begin with the 1988 CDC brochure Understanding AIDS, which advised that N-9, when used with a condom, might provide additional protection against HIV. We mentioned this brochure in the introduction to the draft report when we stated that in the mid-1980s N-9 showed promise as a potential microbicide for STDs, including HIV. However, as we stated in the scope and methodology section of the draft report, we focused our review on efforts to research and provide public information on N-9 and HIV from 1990 to the present because concerns about the safety of N-9 in preventing HIV first began to surface in about 1990. Further, HHS commented that the timeline should make clear that the first indication that N-9 presented added risks did not emerge until 2000 (the COL-1492 study). However, this study was not the first indication that N-9 presented added risks, and the draft report discussed earlier concerns. HHS’s written comments also made a number of other suggestions to clarify the draft report, which we incorporated. First, HHS suggested that we clearly indicate in the report when we are discussing vaginal contraceptive products containing N-9, condoms with N-9, or both. 
We have reviewed the report for clarity and made changes where necessary. Second, HHS’s comments stated that much of the research discussed in the report was restricted to vaginal contraceptive products and that these studies did not involve N-9 condoms. We have clarified this point in the report. Finally, HHS’s comments stated that the 1999 FDA brochure Condoms and Sexually Transmitted Diseases . . . Especially AIDS was an Internet posting of a brochure initially issued in 1990. We clarified the text of our report to note that the Web site posting was of a brochure originally issued in 1990 and that the document stated the information was current as of December 2, 1999. We also added the 1990 brochure to the timeline in appendix I. HHS included several other comments. First, HHS stated that we should be clear that N-9 condoms are regulated as medical devices, not through the monograph process. This information was discussed in the background section of the draft report. Second, HHS’s written comments stated that the report should recognize that some manufacturers stopped selling condoms with N-9 because of economic considerations and not safety concerns. This information was included in the draft report, and we noted further that one manufacturer attributed the decrease in sales of N-9 condoms to the safety concerns raised by the 2002 release of the WHO/CONRAD report. Third, HHS raised concerns that the draft report had not explained the significance of the actions of manufacturers. This information was included in the draft report. Finally, HHS’s comments said we should note that the report on the COL-1492 study was published in 2002 and the information available prior to that time could be considered only preliminary. This information was also reflected in the draft report and in the timeline in appendix I. HHS’s comments are reprinted in appendix II. HHS also provided technical comments, which we incorporated into the report as appropriate.
As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. We will then send copies to others who are interested and make copies available to others who request them. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7114. Another contact and key contributors are listed in appendix III.

1990: An FDA brochure, Condoms and STDs . . . Especially AIDS, stated that “Some experts believe nonoxynol-9 may kill the AIDS virus during intercourse, too. So you might want to use a spermicide along with a latex condom as an added precaution. . . .”

1993: CDC’s 1993 STD treatment guidelines stated that “protection of women against HIV infection should not be assumed from the use of vaginal spermicides, vaginal sponges, or diaphragms.”

1993: “CDC Update: Barrier Protection Against HIV Infection and Other Sexually Transmitted Diseases” in MMWR stated that “No reports indicate that nonoxynol-9 used alone without condoms is effective for preventing sexual transmission of HIV. . . . No data exist to indicate that condoms lubricated with spermicides are more effective than other lubricated condoms in protecting against the transmission of HIV infection. . . . Therefore, latex condoms with or without spermicides are recommended.”

1997: CDC publication What We Know About Nonoxynol-9 for HIV and STD Prevention stated that “CDC does not recommend using spermicide alone to prevent HIV infection.”

1998: CDC’s 1998 STD treatment guidelines stated that “vaginal spermicides offer no protection against HIV infection, and spermicides are not recommended for HIV prevention . . . the consistent use of condoms, with or without spermicidal lubricant or vaginal application of spermicide is recommended.”

1998: An FDA Consumer magazine article, “On the Teen Scene: Preventing STDs,” stated that “. . . spermicides alone . . . do not give adequate protection against HIV and other STDs.”

1998: An FDA Consumer magazine article, “Condoms: Barriers to Bad News,” stated that “The spermicide nonoxynol-9, used in some condoms, has been shown to be effective as a contraceptive, and may reduce the risk of transmitting certain STDs. But the spermicide has not been proven to prevent sexual transmission of HIV.”

1999: An FDA brochure on its Web site titled Condoms and STDs . . . Especially AIDS continued to include the statement “Some experts believe nonoxynol-9 may kill the AIDS virus during intercourse, too. So you might want to use a spermicide along with a latex condom as an added precaution. . . .”

2000: Preliminary results of a major clinical study among high-risk female sex workers were presented at the International AIDS conference in Durban, South Africa. The results indicated that N-9 is not effective as a microbicide against HIV and may increase certain users’ risk of contracting the virus.

2000: NIH and CDC reported that the agencies halted all studies actively pursuing N-9’s use as a microbicide between 2000 and 2001.

2000: A CDC letter issued to health care providers and public health personnel stated that because N-9 has been shown to be ineffective against HIV and may increase HIV risk among certain user groups, N-9 should not be recommended for HIV prevention.

2001: WHO, in collaboration with CONRAD, convened a meeting to review the available literature regarding N-9’s safety and effectiveness as a spermicide and a microbicide.

2002: WHO and CONRAD published the summary report of their consultation on the available literature regarding N-9’s safety and effectiveness as a spermicide and a microbicide.

2002: CDC’s 2002 STD treatment guidelines stated that “Recent evidence has indicated that vaginal spermicides containing nonoxynol-9 (N-9) are not effective in preventing . . . HIV infection. Thus, spermicides alone are not recommended for STD/HIV prevention. Frequent use of spermicides containing N-9 . . . may be associated with an increased risk of HIV transmission . . . . Purchase of any additional condoms lubricated with the spermicide N-9 is not recommended. . . .”

2002: CDC published a report in MMWR stating, “Sexually active women should consider their individual HIV/STD infection risk when choosing a method of contraception. Providers of family planning services should inform women at risk for HIV/STDs that N-9 contraceptives do not protect against these infections.”

2002: The final results of the study presented at the International AIDS conference held in Durban, South Africa, were published in The Lancet.

2002: PPFA halted production and distribution of N-9 condoms.

2003: FDA proposed warning labels for vaginal contraceptive drug product packaging. The proposed warning stated that vaginal contraceptive products with N-9 do not protect against HIV or other STDs and that frequent use of N-9 can increase vaginal irritation, which may increase the risk of contracting HIV or other STDs. The warnings also indicated that the labeled products were for vaginal use only.

2003: Mayer Laboratories halted distribution of N-9 condoms.

2004: SSL International halted production of its Durex brand condoms that were lubricated with N-9 because of a decrease in sales to public health outlets and a projected decrease in retail sales.

In addition to the person named above, Kelly DeMots, Krister Friday, Mary Giffin, and Mary Reich made key contributions to this report.
Preventing the transmission of HIV, the virus that causes AIDS, is an important public health challenge. Researchers have sought to develop a microbicide, a substance to help users protect themselves against HIV. In the mid-1980s, researchers found that Nonoxynol-9 (N-9), a spermicide found in various contraceptive products, showed potential as a microbicide. However, more recent studies raised concerns that N-9 may increase certain users' risk of contracting HIV. GAO was asked to describe federal agencies' and contraceptive product manufacturers' actions related to N-9 and HIV. In this report, GAO reviewed (1) the efforts by federal agencies and manufacturers of contraceptive products to assess the safety of N-9 and its effectiveness as a microbicide for preventing HIV transmission and (2) the information provided to the public about the safety of N-9 and its effectiveness as a microbicide. GAO reviewed journal articles, Federal Register notices, product packaging, educational materials, and other documents. GAO also interviewed officials from the Centers for Disease Control and Prevention (CDC), the Food and Drug Administration (FDA), the National Institutes of Health (NIH), and selected manufacturers of N-9 contraceptive products. Federal agencies have undertaken a variety of efforts to research N-9 as a potential microbicide, including conducting, funding, or reviewing studies on the safety and effectiveness of N-9. In the 1990s, CDC and NIH conducted and funded research on the effectiveness and safety of N-9 as a microbicide to prevent HIV infection. For example, in 1996 CDC and others began a 4-year study on the effectiveness of an N-9 vaginal contraceptive product in preventing the transmission of sexually transmitted diseases, including HIV.
The results of the research by the agencies during this period were inconsistent: some research indicated that N-9 reduced the incidence of HIV, while other research suggested that frequent use of N-9 may increase the risk of contracting the virus. Then in 2000, the preliminary results of a major clinical study suggested more strongly that N-9 vaginal contraceptive products did not prevent HIV infection and may increase the risk of infection among frequent users. As a result of the study, CDC and NIH stopped conducting and funding research on N-9 as a microbicide out of concern for participants' safety. FDA continued to review available research on the safety of N-9 as part of its regulation of vaginal contraceptive products and, in 2003, proposed new warning labels for N-9 vaginal contraceptive products. As of March 2005, FDA was also in the process of developing a proposal for new warning labels for N-9 condoms. As of that date, FDA had not finalized the new warning labels for N-9 vaginal contraceptive products and had not proposed new warning labels for N-9 condoms. Representatives from two manufacturers of N-9 contraceptive products have reviewed research on N-9's safety for the purpose of commenting on FDA's proposed warning labels. The information CDC and FDA have provided to the public about the use of N-9 as a microbicide has been, at times, inconsistent. In the early 1990s, CDC cautioned that there was insufficient information to conclude that N-9 may prevent HIV transmission. By 1998, in response to new research, the agency informed the public that N-9 vaginal contraceptive products did not prevent HIV. During the same period, FDA also cautioned that N-9 had not been proven to prevent HIV transmission, but in 1999, a brochure posted on its Web site stated that N-9, along with a condom, may be used to prevent HIV transmission.
By 2000, CDC had responded to new research findings and had revised its educational publications to state that N-9 may actually increase the risk of contracting HIV when used frequently. In contrast, FDA did not revise the brochure on its Web site stating that some experts believed N-9 may prevent HIV and suggesting the use of N-9 along with a condom. These statements remained on FDA's Web site until September 2003, when FDA officials deleted them after realizing the information was inconsistent with the proposed warning labels. In commenting on a draft of this report, the Department of Health and Human Services (HHS) provided clarification that GAO incorporated where appropriate.
The same speed and accessibility that create the enormous benefits of the computer age can, if not properly controlled, allow individuals and organizations to inexpensively eavesdrop on or interfere with computer operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. We reported in March 2004 that federal agencies continue to show significant weaknesses in computer systems that put critical operations and assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. The increasing sophistication and maliciousness of cybersecurity threats create unique challenges to federal systems and governmentwide cybersecurity efforts. Security experts are observing the rapid evolution of attack technologies and methods. Unsolicited commercial e-mail (spam) has been an annoyance to Internet users for several years. However, over the past few years, this mass-marketing tool has evolved from a mere nuisance into a delivery mechanism both for malicious software programs (commonly referred to as malware) that hijack computers and for deceptive e-mail that tricks recipients into divulging sensitive information, such as credit card numbers, login IDs, and passwords (a practice known as phishing). One emerging form of malware, known as spyware, is installed without the user's knowledge to surreptitiously track and/or transmit data to an unauthorized third party. Security researchers' and vendors' 2004 annual security reports identified phishing and spyware as among the top emerging threats of that year and predicted that they would increase in 2005. These threats have targeted our government; for instance, in 2004, federal entities such as FDIC, the Federal Bureau of Investigation (FBI), and IRS were used in phishing scams in which their agency names were exploited.
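The phishing e-mails described above commonly disguise a malicious destination behind the name of a trusted organization. As an illustration only, not drawn from this report's methodology, the following Python sketch flags HTML links whose visible text names one domain while the underlying href points to another; the regular expression and heuristic are simplified assumptions, not a production-grade filter:

```python
import re
from urllib.parse import urlparse

# Hypothetical heuristic: flag <a> tags whose visible text looks like a URL
# for one domain while the actual href points somewhere else -- a pattern
# common in the phishing e-mails described above.
LINK_RE = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
                     re.IGNORECASE | re.DOTALL)

def domain_of(text):
    """Extract a lowercase host name from a URL or URL-like string."""
    if "://" not in text:
        text = "http://" + text.strip()
    return urlparse(text).hostname or ""

def suspicious_links(html_body):
    """Return (href, visible_text) pairs whose domains disagree."""
    flagged = []
    for href, visible in LINK_RE.findall(html_body):
        visible = visible.strip()
        # Only compare when the visible text itself looks like a URL.
        if re.match(r'(https?://|www\.)', visible):
            if domain_of(visible) != domain_of(href):
                flagged.append((href, visible))
    return flagged

body = '<p>Update your account: <a href="http://198.51.100.7/login">www.irs.gov</a></p>'
print(suspicious_links(body))  # one flagged link: visible IRS text, non-IRS href
```

Real phishing defenses layer many such signals (sender authentication, URL reputation, content analysis); the single mismatch check here is only meant to make the deception mechanism concrete.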
Although spam, phishing, and spyware were once viewed as discrete consumer challenges, they are now being blended to create substantial threats to large enterprises, including federal systems. For example, the number of phishing scams, which are often spread through spam, has significantly increased. Government officials are increasingly concerned about attacks from individuals and groups with malicious intent, carried out for purposes such as crime, terrorism, foreign intelligence gathering, and acts of war. According to the FBI, terrorists, transnational criminals, and intelligence services are quickly becoming aware of and using information exploitation tools such as computer viruses, Trojan horses, worms, logic bombs, and eavesdropping sniffers that can destroy, intercept, and degrade the integrity of or deny access to data. As larger amounts of money are transferred through computer systems, as more sensitive economic and commercial information is exchanged electronically, and as the nation's defense and intelligence communities increasingly rely on commercially available information technology, the likelihood increases that information attacks will threaten vital national interests. Table 1 summarizes the sources of emerging cybersecurity threats. The sophistication and effectiveness of cyberattacks have steadily advanced. These attacks often take advantage of flaws in software code, circumvent signature-based tools that commonly identify and prevent known threats, and use stealthy social engineering techniques designed to trick the unsuspecting user into divulging sensitive information. These attacks are becoming increasingly automated with the use of botnets: compromised computers that can be controlled remotely by attackers to automatically launch attacks. Bots have become one of the key automation tools that speed the location and infection of vulnerable systems. Several laws have been implemented to improve the nation's cybersecurity posture.
The Federal Information Security Management Act of 2002 (FISMA) requires agencies to implement an entitywide risk-based approach to protecting federal systems and information against cyberattack. Other laws, such as the Homeland Security Act and the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001 (USA PATRIOT Act), also address actions that the government can take to increase national cybersecurity awareness and preparedness, including the roles and responsibilities of key agencies such as DHS. Additionally, recent legislation, both enacted and pending, that specifically addresses spam, phishing, and spyware has included civil and criminal penalties to deter cybercrime. FISMA establishes clear criteria to improve federal agencies' cybersecurity programs. Enacted into law on December 17, 2002, as title III of the E-Government Act of 2002, FISMA requires federal agencies to protect and maintain the confidentiality, integrity, and availability of their information and information systems. It also assigns specific information security responsibilities to the Office of Management and Budget (OMB), the Department of Commerce's National Institute of Standards and Technology (NIST), agency heads, chief information officers (CIO), and inspectors general (IG). For OMB, these responsibilities include developing and overseeing the implementation of policies, principles, standards, and guidelines on information security, as well as reviewing, at least annually, and approving or disapproving, agency information security programs. FISMA required each agency, including agencies with national security systems, to develop, document, and implement agencywide information security programs to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.
Specifically, this program is to include

- periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems;
- risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system;
- subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems;
- security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency;
- periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency's required inventory of major information systems;
- a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency;
- procedures for detecting, reporting, and responding to security incidents; and
- plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.

FISMA requires each agency to report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of information security policies, procedures, and practices, and on compliance with FISMA's requirements. FISMA also charges the Director of OMB with ensuring the operation of a central federal information security incident center with responsibility for issuing guidance to agencies on detecting and responding to incidents.
Other responsibilities include compiling and analyzing information about incidents and informing agencies about current and potential information security threats. Prior to FISMA, the CIO Council (then chaired by OMB’s Deputy Director for Management) issued a memorandum to all agency CIOs instructing agencies to follow specific practices for appropriate coordination and interaction with the Federal Computer Incident Response Capability (FedCIRC). OMB’s statutory requirement supported FedCIRC, and OMB received quarterly reports from FedCIRC on the federal government’s status on information technology security incidents. Following the establishment of DHS and in an effort to implement action items described in the National Strategy to Secure Cyberspace, FedCIRC was dissolved as a separate entity and its functions absorbed into the United States Computer Emergency Readiness Team (US-CERT), which was created in September 2003. US-CERT was established to aggregate and disseminate cybersecurity information to improve warning about and response to incidents, increase coordination of response information, reduce vulnerabilities, and enhance prevention and protection. US-CERT analyzes incidents reported by federal civilian agencies and coordinates with national security incident response centers in responding to incidents on both classified and unclassified systems. US-CERT also provides a service through its National Cyber Alert System to identify, analyze, prioritize, and disseminate information on emerging vulnerabilities and threats. On August 23, 2004, OMB issued FISMA reporting instructions to the agencies. This guidance reinforces the requirement for agencies to test and evaluate their security controls annually, at a minimum, to promote a continuous process of assessing risk and ensuring that security controls maintain risk at an acceptable level. 
Further, agencies’ 2004 FISMA reporting guidance requires them to report on their incident-detection and incident-handling procedures, including methods used to mitigate information technology security risk and internal and external incident- reporting procedures. OMB also issued a memorandum to the agencies on personal use policies and “file sharing” technology. In this guidance, OMB directs agencies to establish or update their personal use policies and to train employees on these policies to “ensure that all individuals are appropriately trained in how to fulfill their security responsibilities.” FISMA also requires NIST to establish standards, guidelines, and requirements to help agencies improve the posture of their information security programs. NIST has issued several publications relevant to assisting agencies in protecting their systems against emerging cybersecurity threats. For instance, Special Publication 800-61, Computer Security Incident Handling Guide, advises agencies to establish an incident-response capability that includes establishing guidelines for communicating with outside parties regarding incidents, including law enforcement agencies, and also discusses handling specific types of incidents, including malicious code and unauthorized access. Additionally, NIST Special Publication 800-68 (Draft), Guidance for Securing Microsoft Windows XP Systems for IT Professionals: A NIST Security Configuration Checklist, describes configuration recommendations that focus on deterring malware, countermeasures against security threats with malicious payload, and specific recommendations for addressing spyware. NIST has also issued guidance on various controls that agencies can implement, such as Guidelines on Electronic Mail Security and Guidelines on Securing Public Web Servers. 
The electronic mail security guide discusses various practices that should be implemented to ensure the security of a mail server and the supporting network infrastructure, such as an organizationwide information systems security policy; configuration/change control and management; risk assessment and management; standardized software configurations that satisfy the information security policy; security awareness and training; contingency planning, continuity of operations, and disaster recovery planning; and certification and accreditation. In its publication on securing public Web servers, NIST discusses methods that organizations can take to secure their Web servers. This includes standard methods such as hardening servers, patching systems, testing systems, maintaining and reviewing logs, backing up, and developing a secure network. It also covers selecting what types of active content technologies to use (e.g., JavaScript and ActiveX), what content to show, and how to limit Web bots (i.e., bots that scan Web pages for search engines), as well as authentication and cryptographic applications. The publication also notes the importance of analyzing logs in order to notice suspicious behavior and intrusion attempts. Further, NIST is currently drafting a guide on malware that includes a taxonomy of malware, incident prevention, incident response, and future malicious threats to assist agencies in improving the security of their systems and networks from current and future malware threats. NIST Special Publication 800-53, Recommended Security Controls for Federal Information Systems, emphasizes the importance of technical, managerial, and operational security controls to protect the confidentiality, integrity, and availability of a system and its information.
The security controls defined in the publication were recommended for implementation in the context of a well-defined information security program, which should include periodic risk assessments and policies and procedures based on risk assessments. For a comprehensive listing of NIST publications that can be used to protect agency networks and systems against emerging threats, see appendix I. Additionally, agencies are required by various other laws to protect specific types of information, such as programmatic, personal, law enforcement, and national security data. For example, agencies are required to protect employee and personal data under the Privacy Act of 1974, and the IRS is mandated to protect individuals' personal tax records. Further, security-sensitive transportation and other critical infrastructure information is required to be protected under a variety of laws. If this information were made available to or accessed by an attacker, it could indicate that agencies had failed to implement the necessary management controls to protect against unauthorized access. Securing federal systems and the information that they process and store is essential to ensuring that critical operations and missions are accomplished. The Homeland Security Act of 2002 established key roles in cybersecurity for DHS. The act created DHS and gave it responsibility for developing a national plan; recommending measures to protect the critical infrastructure; and collecting, analyzing, and disseminating information to government and private-sector entities to deter, prevent, and respond to terrorist attacks. The act also increased penalties for fraud and related criminal activity performed in connection with computers.
Additionally, the act charged DHS with providing state and local government entities and, upon request, private entities that own or operate critical infrastructure, with analysis and warnings concerning vulnerabilities and threats to critical infrastructure; crisis management support in response to threats or attacks on critical infrastructure; and technical assistance with respect to recovery plans to respond to major failures of critical information systems. The President's National Strategy to Secure Cyberspace was issued on February 14, 2003, to identify priorities, actions, and responsibilities for the federal government as well as for state and local governments and the private sector, with specific recommendations for action by DHS. This strategy established priorities for improving analysis, awareness, threat reduction, and federal agency cybersecurity. It also identified the reduction and remediation of software vulnerabilities as a critical area of focus. Specifically, the strategy identifies the need for a better-defined approach on disclosing vulnerabilities, to reduce their usefulness to hackers in launching an attack; creating common test beds for applications widely used among federal agencies; establishing best practices for vulnerability remediation in areas such as training, use of automated tools, and patch management implementation processes; enhanced awareness and analysis for identifying and remedying cyber vulnerabilities and attacks; and improved national response to cyber incidents and reduced potential damage from such events. Homeland Security Presidential Directive 7 defined responsibilities for DHS, sector-specific agencies, and other departments and agencies to identify, prioritize, and coordinate the protection of critical infrastructure to prevent, deter, and mitigate the effects of attacks.
The Secretary of Homeland Security is assigned several responsibilities, including establishing uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across sectors. Homeland Security Presidential Directive 5 instructed the Secretary of Homeland Security to create a new National Response Plan; this plan, completed in December 2004, was designed to align federal coordination structures, capabilities, and resources into a unified, national approach toward incident management. One component of the plan is the Incident Annexes, which address situations requiring specialized application of the plan, such as cyber, biological, and terrorism incidents. Specifically, the Cyber Incident Response Annex established procedures for a multidisciplinary, comprehensive approach to prepare for, remediate, and recover from cyber events of national significance that impact critical national processes and the economy. Key agencies given responsibilities for securing cyberspace and coordinating incident response include DHS and the Departments of Defense and Justice. The USA PATRIOT Act increased the Secret Service’s role in investigating fraud and related activity in connection with computers. In addition, it authorized the Director of the Secret Service to establish nationwide electronic crimes task forces to assist law enforcement, the private sector, and academia in detecting and suppressing computer-based crime; increased the statutory penalties for the manufacturing, possession, dealing, and passing of counterfeit U.S. or foreign obligations; and allowed enforcement action to be taken to protect our financial payment systems while combating transnational financial crimes directed by terrorists or other criminals. 
The growing attention to the significant problems caused by spam, phishing, and spyware has resulted in legislation that imposes civil and criminal penalties to deter cybercrime. The Controlling the Assault of Non-Solicited Pornography and Marketing (CAN-SPAM) Act of 2003, the first federal law addressing the transmission of commercial electronic messages, went into effect on January 1, 2004. This act did not ban unsolicited commercial e-mail, but, rather, established parameters for distributing it, such as requiring that commercial e-mail be identified as an advertisement and include the sender's valid physical postal address. It prohibits, among other actions, the use of deceptive subject headings; the use of materially false, misleading, or deceptive information in the header or text of the e-mail; transmitting e-mail to accounts obtained through improper or illegal means; and sending e-mail through computers accessed without authorization. The act also required labels on sexually oriented material and an opt-out mechanism that prohibits the sender from transmitting commercial e-mail to the recipient more than 10 days after the recipient opts out. Further, it established civil and criminal penalties, including fines of up to $6 million and a maximum prison term of 5 years. This act was intended to deter spammers from distributing unsolicited commercial e-mail but, according to media sources, has received criticism for its lack of enforceability. The following list highlights civil and criminal prosecutions at the federal and state level under the CAN-SPAM Act in 2004: On March 20, four major Internet service providers filed the first lawsuits under the CAN-SPAM Act. In April, Michigan conducted the first criminal prosecution under the CAN-SPAM Act, charging four men with sending out hundreds of thousands of fraudulent, unsolicited commercial e-mail messages advertising a weight-loss product.
In September, the “wireless spammer” became the first person convicted under the CAN-SPAM Act. States have also developed their own legislation to combat these threats. According to the National Conference of State Legislatures, 36 states had enacted legislation regulating unsolicited commercial e-mail. However, some or all of their provisions may be pre-empted by the CAN-SPAM Act. The Fair and Accurate Credit Transactions Act of 2003 included additional provisions to protect consumers against forms of identity theft, including phishing. However, increased awareness and interest among legislators and growing recognition that current law may not sufficiently respond to phishing and spyware have propelled the introduction of phishing and spyware bills during the 109th Congress: The SPY ACT (Securely Protect Yourself Against Cyber Trespass), H.R. 29, introduced by Representative Mary Bono on January 4, 2005, details specific actions that would be deemed unlawful if performed by anyone who is not the owner or authorized user of a protected computer, such as taking control of the computer, manipulating the computer’s settings, installing and deleting programs, and collecting personally identifiable information through keyloggers, among others. It also would prohibit the collection of certain information without notice and consent from the user, and would require software to be easy to uninstall. The Federal Trade Commission would be charged with enforcing the act, with civil penalties set for various violations. This bill was originally introduced during the last Congress and was approved by the House Committee on Energy and Commerce.
The I-SPY (Internet-Spyware) Prevention Act, H.R. 744, introduced by Representative Bob Goodlatte on February 10, 2005, would deem as a criminal offense any intentional unauthorized access, including access exceeding authorization, of a computer that causes a computer program or code to be copied onto the computer for advancement of another federal criminal offense or intentional obtainment or transmission of “personal information” with the intent of injuring or defrauding a person or damaging a computer. It would also criminalize the intentional impairment of the security protections of a computer. The bill imposes prison terms of up to 5 years and also authorizes $10 million for the Department of Justice to combat spyware and phishing scams. The bill was referred to the House Committee on the Judiciary. The Anti-phishing Act of 2005, S. 472, introduced on February 28, 2005, by Senator Patrick Leahy, would impose penalties for phishing and pharming. The bill would prohibit the creation or procurement of a Web site or e-mail message that falsifies its legitimacy and attempts to trick the user into divulging personal information with the intent to commit a crime involving fraud or identity theft. This bill would allow prosecutors to seek fines of up to $250,000 and jail terms of up to 5 years. The bill was referred to the Senate Committee on the Judiciary. The Anti-phishing Act of 2005, H.R. 1099, introduced on March 3, 2005, by Representative Darlene Hooley, would criminalize phishing scams and certain other federal or state crimes of Internet-related fraud or identity theft, including the creation of a Web site that fraudulently represents itself as a legitimate online business. The bill includes criminal penalties of fines and/or up to 5 years of imprisonment. The bill was referred to the House Committee on the Judiciary. The Software Principles Yielding Better Levels of Consumer Knowledge (SPY BLOCK) Act, S.
687, introduced on March 20, 2005, by Senator Conrad Burns, would prohibit a variety of surreptitious practices that result in spyware and other unwanted software being placed on consumers’ computers. The bill also includes criminal penalties for certain unauthorized computer-related activities, such as fines and/or up to 5 years of imprisonment for the illicit indirect use of protected computers. The bill was referred to the Senate Committee on Commerce, Science, and Transportation. Our objectives were to determine (1) the potential risks to federal information systems from emerging cybersecurity threats such as spam, phishing, and spyware; (2) the 24 Chief Financial Officers (CFO) Act agencies’ reported perceptions of these risks and their actions and plans to mitigate them; (3) government and private-sector efforts to address these emerging cybersecurity threats on a national level, including actions to increase consumer awareness; and (4) governmentwide challenges to protecting federal information systems from these emerging cybersecurity threats. To determine the potential risks to federal systems from emerging cybersecurity threats, we first determined effective mitigation practices by conducting an extensive search of professional information technology security literature. In addition, we met with vendors of commercial antispam, antiphishing, and antispyware tools to discuss and examine their products’ functions and capabilities. We also reviewed research studies and reports about these emerging cybersecurity threats. Further, with the assistance of our chief information officer (CIO), we conducted a spyware test to determine specific risks of spyware, including the types of Web sites that distribute spyware, the types of spyware that can be installed, and the types of sensitive information that can be relayed to a third party. For our spyware test, we created a laboratory of six workstations networked together and connected to the Internet. 
All six computers were identically configured on the Microsoft Windows XP operating system. One group of computers (three machines) served as the control group (i.e., knowledgeable user), and the other group served as the test group (i.e., uneducated user). Each computer within the control and test groups was set up with a different Web browser. Specifically, within each group, one computer had Microsoft’s Internet Explorer installed, the second had Mozilla Firefox installed, and the third had Netscape Navigator installed. Testers ran a series of nine sessions on each machine using its respective Web browser. Each session consisted of navigating various groups of selected Web sites. After visiting each group of Web sites, we ran five antispyware tools to detect spyware that may have been installed while visiting those sites. The testers on each computer visited the same Web sites, in the same order, and within the same time frame. Testers followed rules of behavior specific to the control and test group computers when visiting these sites (e.g., whether to click on banners, run independent code, install browser add-ons, etc.). The selected groups of Web sites included typical work-related and nonwork-related sites. The selected sample of sites was based on the following factors: Web sites that team members had visited for this engagement, including the Web sites for each of the 24 CFO Act agencies; government and personnel Web sites for federal employees; nonwork-related Web sites as selected by team members; and corroboration by reports generated from our CIO department’s Web-filtering tool. From among the identified sites that met these criteria, we used our professional judgment and selected the following Web site groups: (1) government agencies/services, (2) news media, (3) streaming media, (4) financial institutions/e-banking, (5) gambling, (6) games, (7) personals/dating, (8) shopping, and (9) Web search.
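The structure of this test design can be sketched as a simple harness loop. This is an illustrative reconstruction only: the session logic, rules-of-behavior labels, and logging format are placeholders, not the actual tooling used in the test.

```python
# Sketch of the test design described above: 2 machine groups x 3 browsers
# x 9 site-group sessions, with findings logged after each session.
# The run_session body and rules labels are hypothetical placeholders.

SITE_GROUPS = [
    "government agencies/services", "news media", "streaming media",
    "financial institutions/e-banking", "gambling", "games",
    "personals/dating", "shopping", "Web search",
]

def run_session(machine, browser, group, rules):
    """Placeholder: visit the group's sites in a fixed order under the
    given rules of behavior, then record antispyware scan findings."""
    findings = []  # would hold detections reported by the five tools
    return {"machine": machine, "browser": browser,
            "group": group, "rules": rules, "findings": findings}

log = []
for machine, rules in [("control", "cautious"), ("test", "permissive")]:
    for browser in ("Internet Explorer", "Firefox", "Netscape Navigator"):
        for group in SITE_GROUPS:
            log.append(run_session(machine, browser, group, rules))

print(len(log))  # 54 sessions: 2 groups x 3 browsers x 9 site groups
```

The nested loops make the factorial design explicit: every browser in every group visits every site group under that group's rules of behavior, so differences in detected spyware can be attributed to browser and behavior rather than to the sites visited.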
After our 2-week test period concluded, we analyzed log data and formed general conclusions about the security risks and effects of the spyware downloaded during our Web site navigations. To determine the 24 CFO Act agencies’ reported perceptions of the risks from spam, phishing, and spyware and their actions and plans to mitigate them, we developed a series of questions about emerging cybersecurity threats including spam, phishing, and spyware that were incorporated into a Web-based survey instrument. We pretested our survey instrument at two federal departments and internally at GAO through our CIO. For each agency to be surveyed, we identified the CIO office, notified each of our work, and distributed a link to access the Web-based survey instrument to each via e-mail. In addition, we discussed the purpose and content of the survey instrument with agency officials when requested. All 24 agencies responded to our survey. We did not verify the accuracy of the agencies’ responses; however, we reviewed supporting documentation that agencies provided to validate their responses. We contacted agency officials when necessary for follow-up information. We then analyzed agency responses to determine agencies’ perceptions of risks from spam, phishing, spyware, and other malware, as well as their practices in addressing these threats. Although this was not a sample survey, and, therefore, there were no sampling errors, conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the survey instrument, the data collection, and the data analysis to minimize these nonsampling errors.
For example, a survey specialist designed the survey instrument in collaboration with subject-matter experts. Then, it was pretested to ensure that the questions were relevant, clearly stated, and easy to comprehend. Because this was a Web-based survey, 23 of the 24 respondents entered their answers directly into the electronic questionnaire, thereby eliminating the need to have much of the data keyed into a database and thus minimizing an additional potential source of error. For the remaining agency, which provided a separate file of its survey responses, the data entry was traced and verified. To determine the government and private-sector efforts under way to address spam, phishing, and spyware on a national level as well as the governmentwide challenges to protecting against these threats, we conducted literature searches, reviewed available federal and private-sector documentation, and solicited agencies’ input on incident reporting in our survey. In addition, we met with security experts in the private sector and federal officials from homeland security, law enforcement, and the intelligence community to discuss their experiences, practices, and challenges in addressing these threats. We conducted our work in Washington, D.C., from September 2004 through March 2005, in accordance with generally accepted government auditing standards. Federal agencies are facing a set of emerging cybersecurity threats that are the result of changing sources of attack, increasingly sophisticated social engineering techniques designed to trick the unsuspecting user into divulging sensitive information, new modes of covert compromise, and the blending of once distinct types of attack into more complex and damaging forms. Spam, phishing, and spyware are examples of emerging threats that are becoming more prominent. Advances in antispam measures have caused spammers to evolve their techniques to bypass detection.
Also, the frequency and sophistication of phishing attacks increased rapidly in the past year. Further, spyware has proven to be difficult to detect and remove. For several years, the distribution of unsolicited commercial e-mail—commonly referred to as spam—has been a nuisance to organizations, inundating them with e-mail advertisements for products, services, and inappropriate Web sites. The Anti-Spam Technical Alliance reports that while spam has been an annoyance to Internet users for many years, the spam nuisance today is significantly worse, both in the quantity and the nature of the material received. Experts have stated that spam makes up over 60 percent of all e-mail. Two fundamental issues underscore the spam problem. First, spam is a profitable business. Experts have commented that unsolicited commercial e-mail continues to be a problem because it is profitable: not only is sending spam inexpensive, but a percentage of targeted consumers open the messages, and some purchase the advertised items and services. Second, e-mail messages do not contain enough reliable information to enable recipients to determine if the message is legitimate or forged. As a result, spammers can forge an e-mail header so that the message appears to have originated from someone or somewhere other than the actual source. Advances in antispam measures have caused spammers to make their techniques more sophisticated to bypass detection and filtering. Some of these methods include inserting random text, using alternate spellings, using various characters that look like letters, disguising the addresses in e-mails, and inserting the text as an image so that the filter cannot read it. Further, compromised systems are regularly being used to send spam, with experts estimating that such systems deliver 40 percent of all spam.
Not only has this made it more difficult to track the source of spam, but the potential for financial gain has resulted in spammers, malware writers, and hackers combining their respective methods into a blended attack. Phishing is a high-tech scam that frequently uses spam or pop-up messages to deceive people into disclosing their credit card numbers, bank account information, Social Security number, passwords, or other sensitive information. The frequency and sophistication of phishing attacks increased rapidly in 2004. As defined by the FTC, phishers send an e-mail or pop-up message that claims to be from a business or organization that users deal with—for example, Internet service providers, banks, online payment services, or government agencies. The message typically says that users need to “update” or “validate” their account information, and might threaten some dire consequence if users do not respond. The message directs users to a Web site that looks just like a legitimate organization’s site, but is not. The fraud tricks users into divulging personal information so the phishers can steal their identity. Phishing is conducted through spam, malware, and blended threats, as well as through e-mail. Phishing scams use a combination of social engineering and technical methods to deceive users into believing that they are communicating with an authorized entity. In social engineering, an attacker uses human interaction—or social skills—to obtain or compromise information about an organization or its computer systems. In addition to using their social skills, phishers use technical methods to create e-mail and Web sites that appear legitimate, often copying images and the layout of the actual Web site that is being imitated. Further, phishers exploit software and system vulnerabilities to reinforce users’ perceptions that they are on a legitimate Web site. 
For example, phishers use various methods to cause the browser’s Web address display to show a legitimate site’s address instead of the actual Web address of the fraudulent site. Phishers also use browser scripting languages to position specially created graphics containing fake information over key areas of a fraudulent Web site, such as covering up the real address bar with a fake address. In addition, phishers can fake the closed lock icon on browsers that is used to signify that a Web site is protecting sensitive data through encryption. “Pharming” is another method used by phishers to deceive users into believing that they are communicating with a legitimate Web site. Pharming uses a variety of technical methods to redirect a user to a spoofed Web site when the user types in a legitimate Web address. For example, one pharming technique is to “poison” the local domain name server (DNS), which is an Internet service that translates domain names like www.congress.gov into unique numeric addresses. Poisoning a DNS involves changing the specific record for a domain, which results in sending users to a Web site very different from the one they intended to access—without their knowledge. DNS poisoning can also be accomplished by exploiting software vulnerabilities. Other pharming methods use malware to redirect the user to a fraudulent Web site when the user types in a legitimate address. A growing trend in phishing scams is the use of malware to steal information from users. These scams depend on system characteristics (e.g., existence of specific vulnerabilities, lack of security controls) to deploy payload mechanisms, such as viruses and Trojan horses. Social engineering is used to convince users to open an e-mail attachment or visit a malicious Web site, causing the malware to install. The malware could record users’ account details when they visit an online banking Web site, and the captured information is then sent to the phishers.
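The effect of DNS poisoning described above can be illustrated with a minimal sketch. The domain record is modeled as a simple lookup table; the numeric addresses are hypothetical (the attacker address is drawn from a range reserved for documentation).

```python
# Sketch of DNS poisoning's effect: a resolver maps names to addresses,
# and altering one record silently redirects every subsequent lookup.
# Both addresses below are hypothetical examples.

legitimate_dns = {"www.congress.gov": "140.147.249.9"}  # assumed record

def resolve(name, dns_table):
    """Return whatever address the resolver's table currently holds."""
    return dns_table.get(name)

# Before poisoning, typing the correct name reaches the intended site.
assert resolve("www.congress.gov", legitimate_dns) == "140.147.249.9"

# A poisoning attack overwrites the record. The user still types the
# correct name, but is sent to the attacker's server without any
# visible indication that anything changed.
poisoned_dns = dict(legitimate_dns)
poisoned_dns["www.congress.gov"] = "203.0.113.66"  # attacker-controlled

print(resolve("www.congress.gov", poisoned_dns))  # 203.0.113.66
```

The key point the sketch makes is that the user's input is correct in both cases; the compromise lives entirely in the name-to-address mapping, which is why pharming is invisible to even a careful user.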
A widely accepted definition of spyware does not currently exist; various definitions and descriptions of spyware have been proposed by security experts and software vendors, and the definition of spyware has even varied among proposed legislation. These definitions vary based on factors such as whether the user has consented to the downloading of the software to his or her computer, the types of information it collects, and the nature and extent of the harm caused. However, the gathering and dissemination of information by spyware can be grouped into two primary purposes: advertising and surveillance. Spyware can be used to deliver advertisements to users, often in exchange for the free use of an application or service. It can collect information such as a user’s Internet Protocol address, Web surfing history, online buying habits, e-mail address, and software and hardware specifications. It often provides end users with targeted pop-up advertisements based on their Web-surfing habits. Spyware has also been known to change browser domain name system settings to redirect users to alternate search sites filled with advertisements. Some spyware places highlighted advertising links over keywords on normal Web pages. Other spyware is used for surveillance and is designed specifically to steal information or monitor information access. It may range from keyloggers to software packages that capture and transmit records of virtually all activity on a system. Software that is used to advertise or collect information has both legitimate and illegitimate uses. Various experts classify software used for advertising as either adware or spyware, depending on the previously mentioned factors. Additionally, surveillance applications can be used by organizations as legitimate security devices. This further underscores the difficulty in defining spyware. 
The FTC defines spyware as “software that gathers information about a person or an organization without their knowledge and that may send such information to another entity without the consumer’s consent, or that asserts control over computers without the consumer’s knowledge.” For the purposes of this report, we are substituting the word “user” for “consumer.” Spyware authors and distributors use various social engineering techniques to deceive users into installing spyware onto their systems. For example, users could receive pop-up advertisements claiming that their systems are infected with spyware and advising them that they should download the displayed software to remove the spyware; however, instead of downloading removal software, users end up downloading spyware itself. See figure 1 for an example of such a deceptive pop-up window. Security experts have noticed spyware that presents a user with a pop-up asking if the user wants to install the application; however, regardless of what the user chooses, spyware is installed. Further, peer-to-peer software—programs that facilitate file sharing—is often packaged with numerous spyware applications. While the behavior of the bundled spyware is often mentioned in the end-user license agreement (EULA), the EULA is typically long and confusing. EULAs often display large amounts of text in small windows; in some cases users would have to page down more than 100 times to read it all. Additionally, the descriptions of what the application installs are often hidden or incomplete. While some spyware tricks users into installing it, other spyware spreads by exploiting security vulnerabilities and low security settings in e-mail and Web browsers—for example, when a user on a system with known software flaws opens a malicious e-mail or visits a malicious Web site. Further, low-security settings of Web browsers may allow malicious scripts to install spyware onto systems.
Additionally, some variants of worms and viruses install spyware after they have infected a system. Persons with access can also physically install spyware onto a system. Spyware is difficult for users to detect. A study by the National Cyber Security Alliance and America Online found that 89 percent of users who were found to have spyware on their systems were unaware that it was there. Even if users notice changes to their systems, they may not realize what caused the change and may not consider that there is any risk—thus the incident may go unreported. Additionally, browser helper objects can be especially difficult for users to detect because their operations are generally invisible to users. Spyware also employs techniques to avoid detection by antivirus and antispyware applications that search for specific “signature strings” that characterize known malicious code. Beyond the problem of detection, the removal of spyware is an additional difficulty. Spyware typically does not have its own uninstall program, forcing users to remove it manually or use a separate tool. Many spyware programs install numerous files and directories and make multiple changes to key system files. Some spyware will install multiple copies of itself onto a system, so that when a user removes one copy, another copy reinstalls itself. Spyware has also disabled antivirus and antispyware applications, as well as firewalls, to avoid detection. Agencies face significant risks from these emerging cybersecurity threats. Spam consumes employee and technical resources and can be used as a delivery mechanism for malware and other cyber threats. Agencies and their employees can be victims of phishing scams. Further, spyware puts the confidentiality, integrity, and availability of agency systems at risk.
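The signature-string scanning that spyware tries to evade, described above, can be sketched minimally. The signature names, byte patterns, and sample payloads below are hypothetical, not drawn from any real product.

```python
# Minimal sketch of signature-string detection: a scanner flags any
# payload containing a known byte pattern, and a trivial mutation of
# the malicious code defeats the exact match. All signatures and
# payloads here are hypothetical examples.

SIGNATURES = {
    "ExampleLogger": b"KEYLOG",          # hypothetical keylogger marker
    "ExampleAdware": b"POPUP_AD_ENGINE", # hypothetical adware marker
}

def scan(payload: bytes):
    """Return the names of all signatures found in the payload."""
    return [name for name, sig in SIGNATURES.items() if sig in payload]

infected = b"...binary data..." + b"KEYLOG" + b"...more data..."
print(scan(infected))  # ['ExampleLogger']

# A one-byte change to the malicious code produces a payload the
# exact-match signature no longer recognizes, which is why spyware
# that alters itself can go undetected.
mutated = infected.replace(b"KEYLOG", b"KEYL0G")
print(scan(mutated))   # []
```

This brittleness of exact byte matching is the reason the report notes that spyware can evade signature-based antivirus and antispyware applications, and it motivates the behavior-based and heuristic detection approaches vendors have added on top of signatures.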
Spam is a growing security problem for organizations, users, and networks because it has the potential to breach the confidentiality, integrity, and availability of information systems when used as a delivery mechanism for other threats. While spam is often used for marketing, it is also used to distribute malware, including viruses, worms, spyware, and Trojan horses, as well as phishing scams. Once delivered, these threats can violate the confidentiality, integrity, and availability of systems. Moreover, spam can be used to cause a denial-of-service attack. Spam may also deliver offensive materials that can create liability concerns for organizations. Further, the sheer quantity of spam hampers productivity, requires technical support, and consumes bandwidth. Spam has made it necessary for organizations to allocate additional resources to manage its risk, including antispam software and increased storage space. Federal agencies and employees can be victims of phishing scams. We identified two main categories of phishing based on their threats and victims: (1) employee-targeted phishing that is received by employees of agencies and (2) agency-exploiting phishing that spoofs the identity of an agency to facilitate a phishing scam. Although phishing scams have exploited the identities of online financial and auction sites such as US Bank, Citibank, eBay, and PayPal, phishers have also exploited federal agencies and Web portals such as the FBI, FDIC, IRS, and the Regulations.gov Web site (see fig. 2). A phishing scam can result in the exposure of user access information, which can lead to unauthorized access and the loss and manipulation of sensitive data. Employee-targeted phishing scams can result in the release of personal employee or agency information, such as usernames and passwords. Employees who fall for phishing scams can also become victims of identity theft. 
Additionally, as a part of a phishing scam, a user could visit a Web site that installs malicious code, such as spyware. Phishing is a risk to public and private-sector organizations alike. Phishers often pose as reputable organizations, such as banks or federal agencies, so that their messages appear to be legitimate requests for information. According to Gartner, Inc., the direct phishing-related loss to U.S. banks and credit card issuers in 2003 was estimated at $1.2 billion. Indirect losses are considered to be much higher, including customer service expenses, account replacement costs, and higher expenses due to customers’ decreased use of online services. Consequently, agency-exploiting phishing scams may go beyond the purview of the agency CIO. For example, one agency CIO noted that although he had the ability to apply FISMA-required practices to his agency’s systems and networks, the agency’s response was not limited to the CIO’s actions. He indicated that the agency’s public affairs department, federal law enforcement agencies, and Internet service providers were all affected by the phishing scam. Researchers have noted the potential for phishing scams to disrupt the growth of electronic commerce in general. Phishing scams that exploit a federal agency’s identity could cause citizens to lose trust in e-government services. Spyware threatens federal information systems by compromising their confidentiality, integrity, and availability through its ability to capture and release sensitive data, make unauthorized changes to systems, decrease system performance, and create new system vulnerabilities. Spyware can allow attackers to obtain sensitive information and gain unauthorized access to systems. Both advertising and surveillance spyware can collect information. Advertising spyware typically collects information such as a user’s browsing habits and demographic information to produce targeted advertisements.
However, both types of spyware are capable of collecting user names and passwords, personally identifiable information, credit card numbers, e-mail conversations, and other sensitive data. NIST notes that spyware can collect just about any type of information on users that the computer has stored. For example, certain remote administration tools can take control over a Webcam and microphone, capturing both visual and vocal activity. Spyware can change the appearance of Web sites and modify what pages users see in their Web browsers. For example, spyware can modify search results and forward users to Web sites with questionable content, such as malicious and pornographic sites, potentially resulting in liability risks. In addition, spyware can change system configurations to make systems more vulnerable to attack by, for example, disabling antivirus and antispyware software and firewalls. Spyware is often responsible for significant reductions in computer performance and system stability through its consumption of system and network resources. Users have reported dramatic decreases in their computer and Internet performance, which can be attributed to multiple instances of spyware. Network administrators have also noticed a loss of bandwidth as a result of spyware. Additionally, poorly programmed spyware applications can result in application and system crashes. Microsoft estimates that spyware is currently responsible for up to 50 percent of all computer crashes. Further, improper uninstalls of spyware have been known to disable a system’s Internet connection, and reductions in the availability of systems and the network could decrease employee productivity. Spyware creates major new security concerns as malicious users exploit vulnerabilities in spyware to obtain unauthorized system access. If an organization or user does not know that spyware is on the computer, there is effectively no way to address the associated vulnerabilities. 
For example, spyware often includes, as a part of an update component, capabilities to automatically download and install additional pieces of code without notifying users or asking for their consent, typically with minimal security safeguards. Additionally, researchers at the University of Washington found that in a certain version of spyware, it was possible for attackers to exploit the update feature to install their own malicious code. Spyware can also redirect users to Web sites that infect systems with malicious code or facilitate a phishing scam. Remote administration tools are intended to provide remote monitoring and recording capabilities, but they also provide malicious users with the means to remotely control a machine. Changes to system configurations could allow spyware to not only remain undetected, but also make systems more vulnerable to future attacks from worms, viruses, spyware, and hackers. In addition to spam, phishing, and spyware, other threats are also emerging, including the increased sophistication of worms, viruses, and other malware and the increased attack capabilities of blended threats and botnets. Malware continues to threaten the secure operation of federal information systems. The CERT Coordination Center (CERT/CC) reported that 3,780 new vulnerabilities were found in 2004. In recent years, security experts have noted that the time between the disclosure of a vulnerability and its exploitation is decreasing; the average time frame between the announcement of a vulnerability and the appearance of associated exploitation code is down to 5.8 days. More than 10,000 new viruses were identified in 2004. Agencies are now faced with the formidable task of patching systems and updating security controls in a timely and appropriate manner. New forms of worms and viruses pose challenges to the security of networks. Antivirus software provides protection against viruses and worms.
However, polymorphic, metamorphic, and entry-point-obscuring viruses are reducing the effectiveness of traditional antivirus scanning techniques. Polymorphic viruses are self-mutating viruses that use encryption. Specifically, a small decoder, which changes periodically, decrypts the viruses’ main bodies prior to execution. Metamorphic viruses change the actual code of the virus between replications, resulting in significantly different byte patterns that signature-based tools cannot detect. Entry-point-obscuring viruses make detection more difficult by placing their malicious code in an unpredictable location. Further, these techniques are often used to infiltrate and hide code in a victim’s computer as a base for further criminal activity. Combating these types of viruses requires diligence in maintaining updated antivirus products that employ algorithms to detect these new threats. Blended threats are an increasing risk to organizations. Security analysts have noticed an increase in the number of blended threats, as well as increasingly destructive payloads. Such threats combine the characteristics of different types of malicious code, such as viruses, worms, Trojan horses, and spyware. The multiple propagation mechanisms often used in blended threats allow them the versatility to circumvent an organization’s security in a variety of ways. As a result, blended threats can infect large numbers of systems in a very short time, with little or no human intervention, causing widespread damage very quickly. They can then simultaneously overload system resources and saturate network bandwidth. Figure 3 depicts the ability of some blended threats to bypass security controls. (Other combinations of threats are also possible.) Examples of recent blended threats include MyDoom, Netsky, Sasser, and Sobig. The Sobig worm exemplifies one of the dangers of blended threats.
When Sobig successfully infects a computer, it downloads spyware from a Web site, including a keylogger. The keylogger monitors the system for any banking transactions, credit card purchases, or other financial activity and captures user information, passwords, and cookies and sends them back to the authors. Additionally, Sobig downloads an unlicensed copy of the Wingate proxy server, allowing any malicious user who knows the Internet protocol address of the infected machine to channel actions through the system anonymously. Spammers used the proxy to anonymously send unsolicited e-mail. Security experts have also noted an increase in attackers’ ability to remotely control compromised systems. Malicious users are infecting vulnerable systems with bots, which then allow the users to remotely control the systems. Malicious users can command botnets to distribute spam, phishing scams, spyware, worms, and viruses, as well as to launch distributed denial-of-service attacks. For example, last year the Department of Justice reportedly found that botnets on government computers were sending spam. The short vulnerability-to-exploitation window makes bots particularly dangerous; once a means of exploiting a vulnerability is known, the owner of the botnet can quickly and easily upgrade the bots, which can then scan target systems for the vulnerability in question, vastly increasing the speed and breadth of potential attacks. Agencies’ responses to our survey indicated varying perceptions of the risks of spam, phishing, and spyware. Many agencies have not fully addressed the risks of emerging cybersecurity threats as part of their agencywide information security programs, which include FISMA-required elements such as performing periodic assessments of risk; implementing security controls commensurate with the identified risk; ensuring security-awareness training for agency personnel; and implementing procedures for detecting, reporting, and responding to security incidents.
An effective security program can assist in agency efforts to mitigate and respond to these emerging cybersecurity threats. According to agency responses, most agencies (19 of 24) identified nonsecurity effects from spam. They identified several incidents of spam that reduced their systems’ performance and the productivity levels of their users and their information technology staff. Other costs associated with spam include the use of network resources and the costs of filtering e-mail. Of these 19 agencies, 14 reported that spam consumed network bandwidth used to transmit messages or consumed disk storage used to store messages. However, only 1 agency identified the risk that spam presents for delivering phishing, spyware, and other threats to their systems and employees. Also, 14 of 24 agencies reported that phishing had limited to no effect on their systems and operations. Two agencies indicated that they were unaware of any phishing scams that had specifically targeted their employees, while 6 agencies reported a variety of effects, including the increased need for help desk support and instances of compromised credit card accounts. Further, in a follow-up discussion, an agency official noted that phishing is primarily a personal risk to employees and that employees who fall victim to phishing scams could face personal security issues related to identity theft that could reduce their productivity. In addition, 5 agencies reported that spyware had minimal to no effect on their systems and operations, while 11 noted that spyware caused a loss of employee productivity or increased usage of help desk support. Of the remaining 4 agencies that reported spyware effects, 2 noted the decreased ability for their users to utilize agency systems: 1 agency noted that users had been unable to connect to an agency network, while the other indicated that users had experienced a denial of service after an antispyware tool had been implemented. 
Finally, 1 agency reported the costs associated with developing and implementing antispyware tools, and another stated that spyware was simply a nuisance to its users. As discussed in chapter one, FISMA charges agencies with the responsibility to create agencywide information security programs that include periodic assessments of risk; security controls commensurate with the identified risk; security awareness training for agency personnel, including contractors; and procedures for detecting, reporting, and responding to security incidents. However, according to their survey responses, agencies have not fully addressed the risks of emerging cybersecurity threats as part of their agencywide security programs. While risk assessments are a key information security practice required by FISMA, most surveyed agencies reported not performing them to determine whether the agency’s name or its employees are susceptible to phishing scams. Of the 24 agencies we surveyed, 17 indicated that they had not assessed this risk. In addition, 14 agencies reported that at least one employee experienced a phishing scam. By not performing risk assessments, agencies are vulnerable to unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems that support the operations and assets of their respective agencies. In fact, several agencies have had their identities exploited in phishing scams, as summarized in table 2. NIST has issued guidance to agencies on risk management and has developed a security self-assessment guide. NIST’s Risk Management Guide for Information Technology Systems defines risk management as the process of identifying risk, assessing risk, and taking steps to reduce risk to an acceptable level. The guide provides a foundation for the development of an effective risk management program for assessing and mitigating risks identified within IT systems.
Additionally, NIST’s Security Self-Assessment Guide for Information Technology Systems provides a method for agency officials to determine the current status of their information security programs and, where necessary, establish a target for improvement. Further, as part of its FISMA requirements, NIST issued its Standards for Security Categorization of Federal Information and Information Systems, which establishes security categories for both information and information systems. The security categories are based on the potential impact on an organization should certain events occur that jeopardize the information and information systems needed by the organization to accomplish its assigned mission, protect its assets, fulfill its legal responsibilities, maintain its day-to-day functions, and protect individuals. Security categories are to be used in conjunction with vulnerability and threat information in assessing the risk to an organization.

Vendors are increasingly providing automated tools to mitigate the risks of spam, phishing, and spyware at an enterprise level. However, according to several agencies responding to our survey, current enterprise tools to address emerging cybersecurity threats are immature and therefore impede efforts to effectively detect, prevent, remove, and analyze incidents. Officials at the Department of Justice noted that enterprise software solutions capable of rapidly detecting and analyzing behavioral anomalies are lacking; in the absence of a purely technological solution, however, system administrators could exercise greater control over federal systems by implementing tighter security controls. For example, agencies could limit users’ rights to modify certain features on their computers. This control could greatly reduce agencies’ susceptibility to compromise from these types of exploits.
Indeed, one agency noted that it was able to keep most spyware out of its systems by enforcing policy and user privileges at the network level. Further, we and NIST have advised agencies on how to protect their networks from these threats by using a layered security (defense-in-depth) approach. Layered security implemented within an agency’s security architectures includes the use of strong passwords, patch management, antivirus software, firewalls, software security settings, backup files, vulnerability assessments, and intrusion detection systems. Figure 4 depicts an example of how agencies can use layered security controls to mitigate the risk of individual cybersecurity threats.

Most agencies (20 of 24) reported implementing agencywide approaches to mitigating spam. Enterprise antispam tools are available to filter incoming e-mails. These tools enable agencies to reduce the amount of spam that reaches employees and use various techniques to scan e-mail to determine if it is spam. Filters can also use antivirus technologies to detect malicious code. E-mail services can be outsourced, fully or in part, to companies that manage the e-mail operations, including filtering for spam, phishing scams, and malware. See appendix II for more detailed information on antispam tools and services. However, agencies reported concerns that these tools could not be relied upon to accurately distinguish spam from desired e-mails. Some observed that spammers are evolving and adapting their spamming techniques to bypass the filtering rules and signatures on which antispam tools are based. One agency reported that false positives were a larger concern than false negatives, as users place a high priority on receiving all legitimate e-mails and do not accept lost messages as a result of faulty e-mail filtering. Furthermore, the agency reported that outgoing e-mails could be falsely blocked by antispam tools used by the intended recipients.
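The content-based filtering technique described above, and the false-positive problem agencies reported with it, can be sketched in a few lines. The rule list, weights, and threshold are invented for illustration; production filters combine thousands of weighted rules, signatures, and sender-reputation checks.

```python
# Hypothetical sketch of rule-based spam scoring: each message is scored
# against weighted phrases, and messages at or above a threshold are
# treated as spam. Rules and threshold are illustrative assumptions.

RULES = [
    ("free money", 3.0),
    ("click here now", 2.5),
    ("unsubscribe", 0.5),  # weak evidence: legitimate mail uses this too
]
THRESHOLD = 3.0

def score(message: str) -> float:
    """Sum the weights of every rule phrase found in the message."""
    text = message.lower()
    return sum(weight for phrase, weight in RULES if phrase in text)

def is_spam(message: str) -> bool:
    return score(message) >= THRESHOLD

print(is_spam("Free money!!! Click here now"))
```

A legitimate newsletter containing only "unsubscribe" scores 0.5 and is delivered, but raising that rule's weight to catch more spam would begin blocking legitimate mail: the false-positive versus false-negative trade-off that agencies said requires constant tuning.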
Consequently, federal agencies are challenged to continually monitor and adjust their filtering rules to mitigate false positives and false negatives. Many agencies stressed that the constant evaluation and modification required by current spam filtering solutions demand a significant investment in resources.

Although phishing scams are typically distributed through mass e-mail (much like spam distribution), several agencies reported that limited technical controls are available to effectively scan e-mail in order to identify a phishing message. One agency described challenges in determining how to use an automated tool to control employees’ Internet browsing behaviors without also restricting Internet access that is needed to perform job-related functions. Agencies can also use traditional enterprise antispam tools to mitigate the risks from employee-targeted phishing, as these tools are increasingly providing antiphishing capabilities that can detect and block known phishing scams using content-based or connection-based techniques. Agencies cannot rely on these tools as a complete solution; because antiphishing tools typically quarantine suspected phishing e-mail, a person must review each quarantined message in order to make a final determination of the message’s legitimacy. DHS’s Homeland Security Advanced Research Projects Agency recognized the need for additional tools and techniques that defend against phishing and in September 2004 published a solicitation for proposals to research and develop these technologies. The solicitation notes that antiphishing solutions must work for all types of users and, most importantly, for less sophisticated users, who are those most likely to fall for phishing scams. The agency also warned that any technology that requires end users to change their behavior will face significant challenges and that the solutions must be easily integrated into existing information infrastructure.
Agencies can also take steps to reduce the likelihood of having their identities used to facilitate a phishing scam. For example, organizations can actively search for abuse of their trademarks, logos, and names. These searches typically focus on trademark or copyright infringement, but have also proven useful in proactively discovering phishing scams. However, one federal official noted that agencies are not using Web-crawling tools to proactively identify potential agency-exploiting phishing and felt that the reluctance to use such tools comes, in part, from privacy and legal concerns. Establishing clear communication practices with customers can also reduce the success rate of phishing scams. Good communication policies reduce the likelihood that consumers will confuse a phishing scam with a legitimate message. Good communication practices include having a consistent look and feel, never asking for passwords or personal information in e-mail, and making e-mail more personalized. Responding quickly and effectively can reduce the damage of phishing scams. Because phishing scams are typically hosted and operated outside of an organization’s network, a response plan to phishing scams will often require cooperation with external entities such as Internet service providers. The response could include shutting down a Web site and preserving evidence for subsequent prosecution of the phishers. Other practices include notifying consumers by e-mail or a Web site warning when an incident occurs to inform consumers about how to respond. Further, experts recommend that organizations contact law enforcement. Properly secured e-government services could reduce the risk of an agency’s identity being used in a phishing scam. Phishers exploit vulnerabilities in the code of Web sites in order to facilitate their scams; secure code reduces the likelihood that an attack of this type will be successful. 
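The proactive search for agency-name abuse described above amounts to scanning fetched page text for the agency's name combined with credential-harvesting cues. The sketch below assumes a hypothetical agency name, an invented cue list, and a made-up function; a real deployment would crawl the Web or consume third-party feeds and would have to cope with obfuscated text.

```python
# Hypothetical sketch of proactive phishing-site discovery: flag pages,
# hosted outside the agency's official sites, that invoke the agency's
# name alongside requests for sensitive data. All names and cue phrases
# here are illustrative assumptions.

AGENCY_NAME = "example agency"  # assumed agency name
CREDENTIAL_CUES = ("password", "social security number", "account number")

def looks_like_phishing(page_text: str, is_official_site: bool) -> bool:
    """Return True if an unofficial page pairs the agency name with a cue."""
    text = page_text.lower()
    return (not is_official_site
            and AGENCY_NAME in text
            and any(cue in text for cue in CREDENTIAL_CUES))

page = "Example Agency security check: confirm your password and account number"
print(looks_like_phishing(page, is_official_site=False))
```

As the text notes, some officials attribute agencies' reluctance to run such Web-crawling checks partly to privacy and legal concerns rather than to any technical barrier.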
NIST offers guidance to agencies on how to secure their systems, including Web servers, and considerations that should be made when using active content. FDIC has made several recommendations that financial institutions and government could consider applying to reduce online fraud, including phishing. FDIC recommends that financial institutions and government consider (1) upgrading existing password-based single-factor customer authentication systems to two-factor authentication; (2) using scanning software to proactively identify and defend against phishing attacks; (3) strengthening educational programs to help consumers avoid online scams, such as phishing, that can lead to account hijacking and other forms of identity theft, and taking appropriate action to limit their liability; and (4) placing a continuing emphasis on information sharing among the financial services industry, government, and technology providers. The further development and use of fraud detection software to identify account hijacking, similar to existing software that detects credit card fraud, could also help to reduce account hijacking. In response to our question on spyware-related challenges, about one-third of surveyed agencies highlighted the immaturity of enterprisewide tools and services that effectively detect, defend against, and remove spyware. Six agencies also emphasized the spyware-related challenges of identifying or detecting incidents. Traditional security tools, including firewalls and antivirus applications, offer only limited protection against spyware. While firewalls are used to protect a network or a PC from unauthorized access, firewalls are limited in their ability to distinguish spyware-related traffic from other, harmless Web traffic. For example, browser helper objects are not stopped by firewalls, because firewalls see them as Web browsers. 
Additionally, spyware is typically downloaded by a user onto a system, which enables the spyware to bypass typical firewall protection. However, firewalls can at times detect spyware when it attempts to request access to the Internet. Antivirus applications have limited capabilities to detect and remove spyware. Antivirus vendors are beginning to include spyware protection as a part of their overall package; however, Gartner, Inc., reports that major antivirus vendors continue to lag on broader threats, including spam and spyware. The behavior of spyware is different from that of viruses, such that antivirus applications could fail to detect spyware. NIST includes antispyware tools as part of its recommended security controls for federal information systems. Antispyware tools detect and remove spyware, block it from running, and can prevent it from infecting systems. Although desktop antispyware tools are currently available, their use by agencies would cause additional problems, such as difficulties in enforcing user utilization and updating of the tools. Agencies confirmed NIST’s recommendation to consider the use of multiple antispyware tools because the technologies have different capabilities and no single tool can detect all spyware. The results of our spyware test confirmed these variances; the scans from five antispyware tools consistently identified different spyware. According to several agency responses, some of the most effective antispyware tools are freeware applications, but they do not have the capability to centrally manage a large deployment of systems. In addition, officials at one agency noted that it is difficult to track data being transmitted by spyware. Although current tools such as firewalls may assist in tracking incidents, spyware incidents are difficult to measure because spyware transmits using the same communications path as legitimate Web traffic. 
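Because spyware "phones home" over the same channel as ordinary browsing, destination and volume patterns are among the few usable signals in traffic logs. The sketch below assumes an invented log format and an arbitrary threshold; it is a minimal illustration of the kind of automated triage discussed here, not a real analysis tool.

```python
# Hypothetical sketch of automated traffic-log triage: group outbound
# requests by destination host and flag hosts contacted at a machine-like
# rate. Log format ("METHOD host path") and threshold are assumptions.
from collections import Counter

def suspicious_hosts(log_lines, threshold=100):
    """Return destination hosts whose request count exceeds the threshold."""
    counts = Counter(line.split()[1] for line in log_lines if line.strip())
    return sorted(host for host, n in counts.items() if n > threshold)

# A beaconing tracker stands out against ordinary browsing volume.
log = (["GET tracker.example.net /beacon"] * 500
       + ["GET www.example.gov /index"] * 20)
print(suspicious_hosts(log))
```

Even this trivial aggregation shows why manual review fails at scale: summarizing by host reduces hundreds of log entries to a short candidate list that an analyst can actually inspect.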
Indeed, our spyware test proved the difficulty in analyzing such spyware transmissions; the Internet traffic logs from a single hour of Web browsing resulted in more than 30,000 pages of text that could not be effectively reviewed without automated analysis tools. Software vendors have recognized the need for enterprise antispyware applications. Antivirus and intrusion-detection vendors have recently added antispyware features to their base products, and corporate applications have recently been placed on the market to detect and block known spyware while providing larger enterprises with centralized administration. These enterprise antispyware tools enable network administrators to combat spyware from a central location. With an enterprise solution, an antispyware program is installed on each computer system (client) and communicates with a centralized system. The central system updates individual clients, schedules scans, monitors the types of spyware that have been found, and determines if the spyware was successfully removed. As with many antivirus efforts, a major limitation for some antispyware tools is that in order to detect the spyware, the tool has to have prior knowledge of its existence. Thus, as with many antivirus tools, certain antispyware tools must be updated regularly to ensure comprehensive protection. Evolving enterprisewide tools may provide the ability to establish rules that can address various categories of potential spyware behavior. For more information on antispyware tools, see appendix III. Without an ability to centrally detect spyware, agencies will have a difficult time fulfilling FISMA’s incident-reporting requirements. Agencies reported that employee awareness was a significant challenge as they worked to mitigate the risks associated with phishing and spyware. 
As discussed in chapter 1, agencies are required by FISMA to provide security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency. However, of the 24 agencies we surveyed, 13 reported that they had implemented or planned to implement phishing awareness training this fiscal year, 3 reported plans to implement training in the future, and 3 had no plans to implement phishing awareness training. Agencies reported efforts to increase their employees’ awareness of phishing scams and the risks associated with revealing personal information over the Internet. Specifically, 10 agencies reported utilizing bulletins, notices, or e-mails to alert users to the methods and dangers of phishing scams. Further, 16 agencies indicated that they had implemented or planned to implement agencywide phishing guidance this fiscal year. Nevertheless, agencies reported a variety of user awareness challenges, including training their users to avoid visiting unknown Web sites, to verify the source of any request for sensitive or personal data, to be knowledgeable of new phishing scams, and to report any scams to the agency. Other challenges noted were the increased sophistication of phishing scams and the need for users to be continually updated about the changing threat.

Further, of the 11 agencies that responded to our question on spyware awareness training, 7 indicated that they had implemented or planned to implement training this fiscal year, 1 reported plans to implement training in the future, and 3 indicated that they had no plans to implement training. Five agencies reported plans to distribute agencywide spyware guidance in the form of bulletins or e-mails. However, when asked to identify spyware-related challenges, 6 agencies highlighted the difficulty of ensuring that their employees are aware of the spyware threat.
One agency noted that users inadvertently reintroduce spyware; this could be mitigated if users were made aware of the browsing behaviors that put them at risk for downloading spyware. Moreover, agency officials confirmed that user awareness of emerging threats is still lacking and that significant improvements must be made. FISMA requires agencies to develop and implement plans and procedures to ensure continuity of operations for their information systems. In addition, NIST guidance advises agencies that their incident-response capability should include establishing guidelines for communicating with outside parties regarding incidents and also discusses handling specific types of incidents, including malicious code and unauthorized access. However, our review of agencies’ incident-response plans found that while they largely address the threat of malware, they do not fully address phishing or spyware. Specifically, our analysis of the incident-response plans or procedures provided by the 20 agencies showed that none specifically addressed spyware or phishing. However, all of these plans addressed malware and incidents of unauthorized access (which are potential risks for phishing and spyware). Further, 1 agency indicated that spyware is not considered significant enough to warrant reporting it as a security incident. Determining what an incident is and how it should be tracked varies considerably among agencies. For example, 1 agency noted that each intrusion attempt is considered an incident, while another agency reported that one incident can involve multiple users or systems. Because spyware is not detected and removed according to a formalized procedure, much of the information on the local machine would be destroyed and not maintained as evidence for an investigation of a computer crime. As a result, this information would not be available to aid in discovering what happened or in attributing responsibility for the crime. 
Recognizing the potential risks that emerging cybersecurity threats pose to information systems, several entities within the federal government and private sector have begun initiatives directed toward addressing spam, phishing, and spyware. These efforts range from targeting cybercrime to educating users and the private-sector community on how to detect and protect systems and information from these threats. While the initiatives demonstrate an understanding of the importance of cybersecurity and emerging threats and represent the first steps in addressing the risks associated with emerging threats, similar efforts are not being made to help federal agencies address such risks.

Both the public and private sectors have noted the importance of user education and consumer protection relating to emerging cybersecurity threats. FTC has been a leader in this area, issuing consumer alerts and releasing several reports on spam, as well as providing guidance for businesses on how to reduce the risk of identity theft. FTC also updates and maintains useful cybersecurity information on its Web site at www.ftc.gov, including its Identity Theft Clearinghouse, an online resource for taking complaints from consumers. This secure system can be accessed by law enforcement, including the Department of Justice. In addition, FTC has sponsored various events, including a spam forum in the spring of 2003, a spyware workshop in April 2004, and an e-mail authentication summit in the fall of 2004. As the threat of phishing has increased, so has the number of groups aimed at informing and protecting consumers against this emerging cybersecurity threat. The Anti-Phishing Working Group, created in the fall of 2003, is an industry association focused on eliminating the identity theft and fraud that result from the growing problem of phishing and e-mail spoofing.
The working group provides a forum for discussing phishing issues, defines the scope of the phishing problem in terms of hard and soft costs, and shares information and best practices for eliminating the problem. Where appropriate, the working group also shares this information with law enforcement. Additionally, the Phish Report Network, a recently formed group, enables companies to reduce online identity theft by safeguarding consumers from phishing attacks. Claiming to be the first worldwide antiphishing aggregation service, the Phish Report Network provides subscribers with a mechanism for staging a united defense against phishing. Industry experts agree that the escalating phishing problem, if unabated, will continue to result in significant financial losses. The Phish Report Network aims to significantly reduce these losses by preventing online fraud and rebuilding consumer confidence in online channels. The network comprises senders and receivers. Any company being victimized by phishing attacks, such as a financial services or e-commerce company, can subscribe to the Phish Report Network as a sender and begin immediately and securely reporting confirmed phishing sites to a central database. Other companies, such as Internet service providers, spam blockers, security companies, and hosting companies, can join the Phish Report Network as receivers. Subscribing as a receiver provides access to the database of known phishing sites submitted by the senders. Using this information, receivers can protect consumers by blocking known phishing sites in various software, e-mail, and browser services. Additionally, real-time notifications of new phishing sites are available to receivers to ensure up-to-the-minute protection against the latest attacks.
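The sender/receiver model described above can be reduced to a minimal sketch: senders report confirmed phishing URLs to a central registry, and receivers query it before allowing traffic through. The class and method names are invented; the actual Phish Report Network is a commercial, authenticated service, not this toy.

```python
# Hypothetical sketch of a central phishing-site registry with the
# sender/receiver roles described in the text. Names are illustrative
# assumptions, not the real service's interface.

class PhishRegistry:
    def __init__(self):
        self._sites = set()

    def report(self, url: str) -> None:
        """A sender submits a confirmed phishing site to the registry."""
        self._sites.add(url.lower())

    def should_block(self, url: str) -> bool:
        """A receiver checks a URL against known phishing sites."""
        return url.lower() in self._sites

registry = PhishRegistry()
registry.report("http://login.example-bank.test/verify")
print(registry.should_block("http://login.example-bank.test/verify"))
```

The design point of central aggregation is that a single confirmed report from one victimized sender immediately protects the customers of every subscribing receiver.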
Further, the United States Internet Service Provider Association serves both as the Internet service provider community’s representative during policy debates and as a forum in which members can share information and develop best practices for handling specific legal matters. Association officials plan to produce guidance on spam and phishing. Currently, the association focuses on taking down sites that have been spoofed and contacts banking institutions for their coordination when necessary. It also offers insight to federal agencies in the case of a phishing incident, noting that agencies need to act quickly when they detect a problem, contact the relevant providers, and try to preserve potential evidence. Going to the authorities, such as the FBI, will not stop a phishing attack or a botnet immediately. Law enforcement is an important component, but agency security officials need to plan for responding to attacks and coordinating their efforts with their contractors and Internet service providers.

Lastly, FDIC states that the only real solution for combating phishing is consumer education. FDIC officials believe phishing is a very dangerous threat because it undermines the public’s trust in government. For this reason, FDIC’s public affairs office has instituted a toll-free telephone number for customers to call with questions about the legitimacy of communications purported to come from FDIC. In addition, FDIC maintains a Web page to warn consumers of phishing fraud.

In April 2004, the Congressional Internet Caucus Advisory Committee held a workshop on spyware, designed to help Congressional offices reach out and educate their constituents on how to deal with spyware. A variety of educational materials were distributed to assist offices in responding to constituent complaints about spyware.
These included a tool to assist offices in posting to their Web sites basic spyware prevention tips for computer users; newsletters on several issues including computer security, spam, and privacy; and materials from other sources—including FTC—for producing a district town hall meeting on spyware and computer security. In March 2005, FTC revisited the issue of spyware with a follow-up report to its April 2004 workshop. In the report, FTC concluded that spyware is a real and growing problem that could impair the operation of computers and create substantial privacy and security risks for consumers’ information. FTC also stated that the problems caused by spyware could be reduced if the private sector and the government took action. The report suggested that technological solutions such as firewalls, antispyware software, and improved browsers and operating systems could provide significant protection to consumers from the risks related to spyware. The report recommended that industry identify what constitutes spyware and how information about spyware should be disclosed to consumers, expand efforts to educate consumers about spyware risks, and assist law enforcement. The report further recommended that the government increase criminal and civil prosecution under existing laws of those who distribute spyware and increase efforts to educate consumers about the risks of spyware.

The Department of Justice and FTC have law enforcement authority over specific aspects of cybercrime that relate to spam, phishing, spyware, and malware. When a cybercrime case is generated, FTC first handles the civil component and Justice—including the FBI—follows by addressing the criminal component. Justice and FTC initiatives have resulted in successful prosecutions, but also have highlighted challenges that are specific to the enforcement of cybercrime.
FBI’s Cyber Division, established in 2002, coordinates, supervises, and facilitates the FBI’s investigation of those federal violations in which the Internet, computer systems, and networks are exploited as the principal instruments or targets of criminal, foreign intelligence, or terrorist activity and for which the use of such systems is essential to that activity. The Internet Crime Complaint Center, formerly the Internet Fraud Complaint Center, is the unit within the FBI responsible for receiving, developing, and referring cybercrime complaints. For law enforcement and regulatory agencies at the federal, state, and local levels, the Center provides a central referral mechanism for complaints involving Internet-related crimes. It places significant importance on partnering with law enforcement and regulatory agencies and with industry. Such alliances are intended to enable the FBI to leverage both intelligence and subject matter expert resources that are pivotal in identifying and crafting an aggressive, proactive approach to combating cybercrime. The Internet Crime Complaint Center has put forth several initiatives in an attempt to fight cybercrime related to spam and phishing: The simultaneously layered approach methodology–Spam (SLAM-Spam) initiative, which began in September 2003, was started under the CAN-SPAM Act and developed jointly with law enforcement, industry, and FTC. This initiative targets significant criminal spammers, as well as companies and individuals who use spammers and their techniques to market their products. The SLAM-Spam initiative also investigates the techniques and tools used by spammers to expand their targeted audience, to circumvent filters and other countermeasures implemented by consumers and industry, and to defraud customers with misrepresented or nonexistent products.
Operation Web Snare, another joint effort with law enforcement, targets criminal spam, phishing, and spoofed or hijacked accounts, among other criminal activities. According to officials at the Department of Justice, this sweep, which began in June 2004, has so far resulted in 103 arrests and 53 convictions. Operation Firewall, a joint investigation with several law enforcement agencies and led by the Secret Service, targeted a global cybercrime network responsible for stealing personal information about citizens from companies and selling this information to members of the network. According to Justice officials, this investigation began in July 2003 and resulted in the indictment of 19 cybercriminals and several additional arrests for identity theft, credit card fraud, and conspiracy in October 2004. Finally, Digital PhishNet, a cooperative effort among private-sector companies and federal law enforcement, is an FBI-led initiative to create a repository of information for phishing-related activities in order to more effectively identify, arrest, and hold accountable perpetrators of phishing scams. Phishing is currently being handled by two organizations within Justice’s Criminal Division: the Fraud Section, which deals with identity theft and economic crimes, and the Computer Crime and Intellectual Property Section, which focuses extensively on the issues raised by computer and intellectual property crime. According to Justice officials, the department continues to respond to the challenges presented by spam, phishing, and other emerging threats with new initiatives, investigations, and prosecutions. FTC’s enforcement authority is derived from several laws, including the Federal Trade Commission Act, the CAN-SPAM Act, and the Telemarketing and Consumer Fraud and Abuse Prevention Act, among others. This authority has recently led FTC to sue Seismic Entertainment, its first spyware case. 
FTC officials claim that Seismic Entertainment placed malicious code on the Seismic Entertainment Web site, which exploited a vulnerability in Internet Explorer such that when a user visited the Web site, software would be installed on the user’s computer without user initiation or authorization. As a result, the user would receive numerous pop-up advertisements, the user’s home page would be changed, and other spyware would be installed. Further, certain pop-up advertisements would provide the user with an offer to purchase a product in order to stop the pop-ups from appearing. FTC obtained a temporary injunction that requires Seismic Entertainment to remove the malicious code from its Web site server and prohibits the dissemination of the software. Another recent case involved Spyware Assassin, an operation that offered consumers free spyware detection scans that “detected” spyware—even if there was none—in order to market antispyware software that does not work. FTC claims that Spyware Assassin and its affiliates used Web sites, e-mail, banner ads, and pop-ups to drive consumers to the Spyware Assassin Web site, ultimately threatening consumers with dire consequences of having spyware on their machines—such as credit card and identity theft—if they did not accept the free “scan.” The free “scan” displays an “urgent error alert,” indicating that spyware has been detected on the machine, and prompts the user to install the latest free update to fix these errors, in which case Spyware Assassin software is installed. FTC has requested that Spyware Assassin and its affiliates be barred from making deceptive claims and is seeking a permanent halt to the marketing scam as well as redress for consumers.

As of March 31, 2005, DHS’s National Cyber Security Division (NCSD) had produced minimal guidance to federal agencies on how they should protect themselves from spam, phishing, spyware, or other emerging threats.
NCSD supports and enhances other federal and private-sector groups that examine cybersecurity-related issues by looking at what other groups are doing and providing assistance if needed. As NCSD’s operational arm, US-CERT has several initiatives under way to share information on cybersecurity issues and related incident-response efforts. However, NCSD’s communications and efforts pertaining to emerging cybersecurity threats have primarily been directed to the private sector and the general public. For example, we found that almost all of the US-CERT alerts, notices, and bulletins that provided specific guidance on how to address spam, phishing, or spyware were written to help individual users. In fact, the one relevant publication that was targeted to federal agencies was issued over 2 years ago. Further, because this publication focused on instructing agencies on how to filter out a specific spam message, there is no current US-CERT guidance that addresses the security risks of spam to federal agencies—including its capacity to distribute malware.

Similarly, law enforcement entities have not provided agencies with information on how to appropriately address emerging cybersecurity threats. For example, the FBI has not issued any guidance to federal agencies or provided any detailed procedures for responding to spam, spyware, phishing, or botnets that would maintain evidence needed for a computer crime investigation. Also, the Secret Service has not created any initiatives specifically examining the risk of phishing attacks against the federal government or the fraudulent use of federal government identities. Further, the Secret Service has not distributed information to federal agencies about what measures they can take to protect their agencies from being targeted in a phishing scam.

Although federal agencies are required to report incidents to a central federal entity, they are not consistently reporting incidents of emerging cybersecurity threats.
Pursuant to FISMA, OMB and DHS share responsibility for the federal government's capability to detect, analyze, and respond to cybersecurity incidents. However, governmentwide guidance has not been issued to clarify to agencies which incidents they should be reporting, as well as how and to whom they should report. Without effective coordination, the federal government is limited in its ability to identify and respond to emerging cybersecurity threats, including sophisticated and coordinated attacks that target multiple federal entities. Agencies are not consistently reporting emerging cybersecurity incidents such as phishing and spyware to a central federal entity. As discussed in chapter 1, agencies are required by FISMA to develop procedures for detecting, reporting, and responding to security incidents—including notifying and consulting with the federal information security incident center for which OMB is responsible. OMB has transferred the operations for this center to DHS's US-CERT. However, our analysis of the incident response plans and procedures provided by 20 agencies showed that none specifically addressed phishing or spyware. Further, general incident reporting practices vary among the agencies: some report cyber incidents to US-CERT, others report incidents to law enforcement entities, and still others do not report incident information outside their agency. Indeed, the inspector general for one agency reported that more than half of the agency's organizations did not report malicious activity, that federal law enforcement was notified about only some successful intrusions, and that attacks originating from foreign sources were not consistently reported to counterintelligence officials. Discussions with US-CERT officials confirmed that they had not consistently received incident reports from agencies and that the level of detail that accompanies an incident report may not provide any information about the actual incident or method of attack. 
Further, they noted that agencies' efforts to directly report incidents to law enforcement could be duplicative, because US-CERT forwards incidents with criminal elements to its law enforcement division. According to DHS officials, these incident reports are always passed to the FBI and the Secret Service. The agencies' inconsistent incident reporting results from the lack of current federal guidance on specific responsibilities and processes. As of March 2005, neither OMB nor US-CERT had issued guidance to federal agencies on the processes and procedures for reporting incidents of phishing, spyware, or other emerging malware threats to US-CERT. As previously discussed, OMB's FISMA responsibility to ensure the operation of a central federal information security center—US-CERT—involves ensuring that guidance is issued to agencies on detecting and responding to incidents, that incidents are compiled and analyzed, and that agencies are informed about current and potential information security threats. However, the most recent guidance to federal agencies on incident-reporting roles and processes was issued in October 2000—prior to the establishment of US-CERT. Moreover, the incident reporting guidelines on US-CERT's Web site provide agencies only with time frames for reporting incidents and do not specify the actual incident information that should be provided. For example, while the guidance indicates that spam e-mail is to be reported to US-CERT on a monthly basis, it does not clarify whether agencies should simply report the number of spam e-mails received or include the text of the spam e-mails as part of the incident report. Without the necessary guidance, agencies do not have a clear understanding of which incidents they should be reporting or how and to whom they should report. 
In addition to the lack of specific guidance to agencies, the federal government lacks a clear framework for the roles and responsibilities of other entities involved in the collection and analysis of incident reports— including law enforcement. Homeland Security Presidential Directive 7 requires that DHS support the Department of Justice and other law enforcement agencies in their continuing missions to investigate and prosecute threats to and attacks against cyberspace, to the extent permitted by law. Rapid identification, information sharing, investigation, and coordinated incident response can mitigate malicious cyberspace activity. In 2001, we recommended that the Assistant to the President for National Security Affairs coordinate with pertinent executive agencies to develop a comprehensive governmentwide data collection and analysis framework. According to DHS officials, US-CERT is currently working with OMB on a concept of operations and taxonomy for incident reporting. This taxonomy is intended to establish a common set of incident terms and the relationships among those terms and may assist the federal government in clarifying the roles, responsibilities, processes, and procedures for federal entities involved in incident reporting and response—including homeland security and law enforcement entities. According to OMB officials, the final version of the concept of operations and incident reporting taxonomy is to be issued this summer. The lack of effective incident response coordination limits the federal government’s ability to identify and respond to emerging cybersecurity threats, including sophisticated and coordinated attacks that target multiple federal entities. Without consistent incident reporting from agencies, it will be difficult for US-CERT to perform its transferred FISMA responsibilities of providing the federal government with technical assistance, analysis of incidents, and information about current and potential security threats. 
Emerging cyberthreats such as spam, phishing, and spyware present substantial risks to the security of federal information systems. However, agencies have not fully addressed the risks of these threats as part of their FISMA-required agencywide information security programs. Although the federal government has efforts under way to help users and the private-sector community address spam, phishing, and spyware, similar efforts have not been made to assist federal agencies. Consequently, agencies remain unprepared to effectively detect, respond to, and protect against the increasingly sophisticated and malicious threats that continue to place their systems and operations at risk. Moreover, although OMB and DHS share responsibility for coordinating the federal government's response to cyberthreats, guidance has not been provided to agencies on when and how to escalate incidents of emerging threats to DHS's US-CERT. As a result, incident reporting from agencies is inconsistent at best. Until incident reporting roles, responsibilities, processes, and procedures are clarified, the federal government will be at a clear disadvantage in effectively identifying, mitigating, and potentially prosecuting sophisticated and coordinated attacks that target multiple federal entities. 
In order to more effectively prepare for and address emerging cybersecurity threats, we recommend that the Director, Office of Management and Budget, take the following two actions: first, ensure that agencies' information security programs required by FISMA address the risk of emerging cybersecurity threats such as spam, phishing, and spyware, including performing periodic risk assessments, implementing risk-based policies and procedures to mitigate identified risks, providing security-awareness training, and establishing procedures for detecting, reporting, and responding to incidents of emerging cybersecurity threats; and second, coordinate with the Secretary of Homeland Security and the Attorney General to establish governmentwide guidance for agencies on how to (1) address emerging cybersecurity threats and (2) report incidents to a single government entity, including clarifying the respective roles, responsibilities, processes, and procedures for federal entities—including homeland security and law enforcement entities. We received oral comments on a draft of our report from representatives of OMB's Office of Information and Regulatory Affairs and Office of General Counsel. These representatives generally agreed with our findings and conclusions and supplied additional information related to federal efforts to address emerging cyber threats. This information was incorporated into our final report as appropriate. In commenting on our first recommendation, OMB stressed that the agencies have the primary responsibility for complying with FISMA's information security management program requirements. Nevertheless, OMB indicated that it would incorporate emerging cybersecurity threats and new technological issues into its annual review of agency information security programs and plans to consider whether the programs adequately address emerging issues before approving them. 
OMB told us that our second recommendation was being addressed by a concept of operations and taxonomy for incident reporting that it is developing with DHS's US-CERT. As we indicated earlier in our report, the final document is planned to be issued this summer. OMB officials indicated that the completed document will establish a common set of incident terms and the relationships among those terms and will also clarify the roles, responsibilities, processes, and procedures for federal entities involved in incident reporting and response—including homeland security and law enforcement entities. Additionally, the Departments of Defense, Homeland Security, and Justice provided technical comments via e-mail, which were incorporated as appropriate. NIST is required by FISMA to establish standards, guidelines, and requirements that can help agencies improve the posture of their information security programs. The following table summarizes NIST special publications that are relevant to protecting federal systems from emerging cybersecurity threats. Antispam tools scan, inspect, filter, and quarantine unsolicited commercial e-mail, commonly referred to as spam, while allowing the delivery of legitimate e-mail. These tools can block or allow e-mail sent from specific Internet Protocol (IP) addresses that have been identified as distributors of spam, and they can apply other connection- or content-based rules. When a spam filtering solution scans e-mail messages, it uses various techniques to detect spam. The most common filtering methods used are whitelists, blacklists, challenge/response systems, content analysis, textual analysis, heuristics, validity checking, and volume filtering. A whitelist accepts mail from users and domains designated by the user or system administrator. These e-mail messages will typically bypass the filter even if they exhibit characteristics that may define them as spam. 
Similarly, blacklists, also referred to as blocklists, prevent e-mail from specific domains, IP addresses, or individuals from being accepted. Many vendors maintain their own lists and provide optional subscriptions to third-party blacklist services. Content analysis capabilities allow the tools to scan the subject line, header, or body of the e-mail message for certain words often used in spam. Mail that contains certain keywords, executables, or attachments with extensions commonly associated with malware can be filtered. A more sophisticated form of this approach is lexical analysis, which considers the context of words. Such content controls can help organizations enforce their own policy rules. Spam fingerprinting identifies specific spam e-mail with a unique fingerprint, or signature, so that these messages can be recognized and removed. Reverse domain name server lookup allows the receiving mail server to look up the IP address of the sending server to determine if it matches the header information in the e-mail. This allows the tool to determine if the sender is attempting to spoof the mail organization information. This form of validity checking is not commonly used because many systems are not correctly configured to accurately respond to this type of lookup. An increasingly common feature is heuristic analysis, which employs statistical probabilities to determine if the characteristics of an e-mail categorize the message as spam. Each spam characteristic is assigned a score, or spam probability, and if the cumulative score exceeds a designated threshold, the message is labeled as spam. Most heuristic analysis includes adaptive filtering techniques, which can generate rules to identify future spam. A more advanced heuristics-based approach is Bayesian filtering, which assesses both spam-like and legitimate e-mail characteristics, allowing it to distinguish spam from legitimate e-mail. 
Its self-learning filter is adaptive in learning the e-mail habits of the user, which can allow the tool to be more responsive and tailored to a specific individual. Because a salient characteristic of spam is the bulk quantity in which it is distributed, spam filtering solutions also check the volume of e-mail sent from a particular IP address over a specific period of time. Other spam protection capabilities include challenge/response systems, in which senders must verify their legitimacy before the e-mail is delivered. This verification process typically requires the sender to respond to a request that requires a human (rather than a computer) to respond. Tools can also employ traffic pattern analysis, which looks for aberrant e-mail patterns that may represent a potential threat or attack. Antispam tools can handle spam in various ways, including accepting, rejecting, labeling, and quarantining messages. Messages that are labeled or quarantined can usually be reviewed by the user to ensure that they have not been misidentified. These tools also have the capability of providing predefined or customized reports, as well as real-time monitoring and statistics. Increasingly, antispam tools provide antiphishing capabilities that can also detect and block phishing scams. Automated antispam solutions produce false positives—that is, they incorrectly identify legitimate e-mail as spam. In such instances, a user may not receive important messages because they have been misidentified. Tools can also produce false negatives, incorrectly identifying spam as legitimate e-mail and thereby allowing spam into the user's inbox. Additionally, the current vendor market is still immature, as it is composed of many smaller vendors with limited history in this market. 
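The heuristic scoring approach described above, in which each spam characteristic is assigned a score and a message is labeled spam when the cumulative score reaches a designated threshold while whitelisted senders bypass the filter entirely, can be illustrated with a minimal sketch. The rules, weights, threshold, and addresses below are illustrative assumptions, not values from any actual product:

```python
# Minimal sketch of heuristic spam scoring with a whitelist bypass.
# All rule weights, keywords, addresses, and the threshold are
# illustrative assumptions, not drawn from any real filtering product.

SPAM_RULES = [
    # (description, predicate, score contributed when the rule matches)
    ("subject contains 'free'", lambda m: "free" in m["subject"].lower(), 2.0),
    ("body mentions 'wire transfer'", lambda m: "wire transfer" in m["body"].lower(), 3.0),
    ("excessive exclamation marks", lambda m: m["subject"].count("!") >= 3, 1.5),
]
WHITELIST = {"colleague@agency.example.gov"}  # hypothetical trusted sender
THRESHOLD = 3.0  # cumulative score at or above this labels the message as spam


def classify(message):
    """Return 'spam' or 'legitimate' for a message dict with sender/subject/body."""
    # Whitelisted senders bypass the filter even if the message looks like spam.
    if message["sender"] in WHITELIST:
        return "legitimate"
    # Each matching characteristic contributes its score (spam probability).
    score = sum(points for _, rule, points in SPAM_RULES if rule(message))
    return "spam" if score >= THRESHOLD else "legitimate"
```

A message matching several rules accumulates their scores; adaptive (Bayesian) filters work similarly but learn the weights from the user's mail rather than using fixed rule scores.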
The rise of botnets also increases the challenge of identifying spam: with more networks each distributing smaller amounts of e-mail, it is harder to determine the legitimacy of messages based on the quantity distributed. Further, antivirus vendors have built or licensed more advanced spam-filtering capabilities into their antivirus engines, thereby providing a more comprehensive tool and increasing competition for point-solution vendors. Finally, because spammers are constantly evolving their techniques, vendors may lag behind in providing the most current capabilities. Antispyware tools provide protection against various potentially unwanted programs, such as adware, peer-to-peer threats, and keyloggers, by detecting, blocking, and removing the unwanted programs and by preventing the unauthorized disclosure of sensitive data. Antispyware solutions protect computer systems against the theft of sensitive information at a central location (desktop or enterprise level). Antispyware tools typically work by scanning computer systems for known potentially unwanted programs, thus relying on a significant amount of prior knowledge about the spyware. These antispyware solutions use a signature database, which is a collection of what known spyware looks like. Therefore, it is critical that the signature information for applications be current. When a signature-based antispyware program is active, it searches files and active programs and compares them to the signatures in the database. If there is a match, the program will signal that spyware has been found and provide information such as the threat level (how dangerous it is). Some tools are able to block spyware from installing onto a system by using real-time detection. Real-time detection is done by continuously scanning active processes in the memory of a computer system and alerting a user when potentially hostile applications attempt to install and run. 
A user can then elect to stop the spyware from installing onto the system. Once spyware is found, a user can choose to either ignore it or attempt to remove it. In order to remove a spyware application, a tool has to undo the modifications that were made by the spyware. This involves deleting or modifying files and removing entries in the registry. Some tools can block the transmission of sensitive information across the Internet. For example, one tool allows users to input specific information that the user wants to ensure is not transmitted (e.g., a credit card number) by an unauthorized source. The tool then monitors Internet traffic and will warn a user if a program attempts to send the information. Antispyware solutions cannot always defend against the threat of spyware unless they have prior knowledge of its existence and their signature files are frequently updated. Even then, antispyware tools vary in their effectiveness at detecting, blocking, and removing spyware. For example, one tool that prevents installed spyware from launching does not actually remove the spyware from the system. NIST recommends that organizations consider using antispyware tools from multiple vendors. DHS issues a variety of publications related to cybersecurity threats and vulnerabilities on the US-CERT Web site (www.us-cert.gov). The following table summarizes selected publications that are relevant to the emerging cybersecurity threats of spam, phishing, and spyware. J. Paul Nicholas, Assistant Director, (202) 512-4457, nicholasj@gao.gov. In addition to the individual named above, Scott Borre, Carolyn Boyce, Season Dietrich, Neil Doherty, Michael Fruitman, Richard Hung, Min Hyun, Anjalique Lawrence, Tracy Pierson, and David Plocher made key contributions to this report.
Federal agencies are facing a set of emerging cybersecurity threats that are the result of increasingly sophisticated methods of attack and the blending of once distinct types of attack into more complex and damaging forms. Examples of these threats include spam (unsolicited commercial e-mail), phishing (fraudulent messages to obtain personal or sensitive data), and spyware (software that monitors user activity without user knowledge or consent). To address these issues, GAO was asked to determine (1) the potential risks to federal systems from these emerging cybersecurity threats, (2) the federal agencies' perceptions of risk and their actions to mitigate them, (3) federal and private-sector actions to address the threats on a national level, and (4) governmentwide challenges to protecting federal systems from these threats. Spam, phishing, and spyware pose security risks to federal information systems. Spam consumes significant resources and is used as a delivery mechanism for other types of cyberattacks; phishing can lead to identity theft, loss of sensitive information, and reduced trust and use of electronic government services; and spyware can capture and release sensitive data, make unauthorized changes, and decrease system performance. The blending of these threats creates additional risks that cannot be easily mitigated with currently available tools. Agencies' perceptions of the risks of spam, phishing, and spyware vary. In addition, most agencies were not applying the information security program requirements of the Federal Information Security Management Act of 2002 (FISMA) to these emerging threats, including performing risk assessments, implementing effective mitigating controls, providing security awareness training, and ensuring that their incident-response plans and procedures addressed these threats. Several entities within the federal government and the private sector have begun initiatives to address these emerging threats. 
These efforts range from educating consumers to targeting cybercrime. Similar efforts are not, however, being made to assist and educate federal agencies. Although federal agencies are required to report incidents to a central federal entity, they are not consistently reporting incidents of emerging cybersecurity threats. Pursuant to FISMA, the Office of Management and Budget (OMB) and the Department of Homeland Security (DHS) share responsibility for the federal government's capability to detect, analyze, and respond to cybersecurity incidents. However, governmentwide guidance has not been issued to clarify to agencies which incidents they should be reporting, as well as how and to whom they should report. Without effective coordination, the federal government is limited in its ability to identify and respond to emerging cybersecurity threats, including sophisticated and coordinated attacks that target multiple federal entities.
First, let me summarize the findings of GAO's January 2005 report that discusses the progress the federal government has made over the last 5 years and the key challenges it faces in developing and implementing a long-term response to wildland fire problems. This report is based primarily on over 25 reviews we conducted in recent years of federal wildland fire management that focused largely on the activities of the Forest Service in the Department of Agriculture and the land management agencies in the Department of the Interior, which together manage about 95 percent of all federal lands. Wildland fire triggered by lightning is a normal, inevitable, and necessary ecological process that nature uses to periodically remove excess undergrowth, small trees, and vegetation to renew ecosystem productivity. However, various human land use and management practices, including several decades of fire suppression activities, have reduced the normal frequency of wildland fires in many forest and rangeland ecosystems and have resulted in abnormally dense and continuous accumulations of vegetation that can fuel uncharacteristically large and intense wildland fires. Such large, intense fires increasingly threaten catastrophic ecosystem damage and also increasingly threaten human lives, health, property, and infrastructure in the wildland-urban interface. Federal researchers estimate that vegetative conditions that can fuel such fires exist on approximately 190 million acres, or more than 40 percent, of federal lands in the contiguous United States (estimates range from 90 million to 200 million acres) and that these conditions also exist on many nonfederal lands. Our reviews over the last 5 years identified several weaknesses in the federal government's management response to wildland fire issues. These weaknesses included the lack of a national strategy that addressed the likely high costs of needed fuel reduction efforts and the need to prioritize these efforts. 
Our reviews also found shortcomings in federal implementation at the local level, where over half of all federal land management units' fire management plans did not meet agency requirements designed to restore fire's natural role in ecosystems consistent with human health and safety. These plans are intended to identify needed local fuel reduction, preparedness, suppression, and rehabilitation actions. The agencies also lacked basic data, such as the amount and location of lands needing fuel reduction, and research on the effectiveness of different fuel reduction methods on which to base their fire management plans and specific project decisions. Furthermore, coordination among federal agencies and collaboration between these agencies and nonfederal entities were ineffective. This kind of cooperation is needed because wildland fire is a shared problem that transcends land ownership and administrative boundaries. Finally, we found that better accountability for federal expenditures and performance in wildland fire management was needed. Because they lacked both monitoring data and sufficient data on the location of lands at high risk of catastrophic fires, agencies were unable to assess the extent to which they were reducing wildland fire risks, establish meaningful fuel reduction performance measures, or determine the cost-effectiveness of these efforts. As a result, their performance measures created incentives to reduce fuels on all acres, as opposed to focusing on high-risk acres. Because of these weaknesses, and because experts said that wildland fire problems could take decades to resolve, we said that a cohesive, long-term federal wildland fire management strategy was needed. We said that this cohesive strategy needed to focus on identifying options for reducing fuels over the long term in order to decrease future wildland fire risks and related costs. 
We also said that the strategy should identify the costs associated with those different fuel reduction options over time, so that the Congress could make cost-effective, strategic funding decisions. The federal government has made important progress over the last 5 years in improving its management of wildland fire. Nationally, it has established strategic priorities and increased resources for implementing these priorities. Locally, it has enhanced data and research, planning, coordination, and collaboration with other parties. With regard to accountability, it has improved performance measures and established a monitoring framework. Over the last 5 years, the federal government has been formulating a national strategy known as the National Fire Plan, composed of several strategic documents that set forth a priority to reduce wildland fire risks to communities. Similarly, the recently enacted Healthy Forests Restoration Act of 2003 directs that at least 50 percent of funding for fuel reduction projects authorized under the act be allocated to wildland-urban interface areas. While we have raised concerns about the way the agencies have defined these areas and the specificity of their prioritization guidance, we believe that the act's clarification of the community protection priority provides a good starting point for identifying and prioritizing funding needs. In addition, in contrast to fiscal year 1999, when we reported that the Forest Service had not requested increased funding to meet the growing fuel reduction needs it had identified, fuel reduction funding for both the Forest Service and Interior had quadrupled by fiscal year 2004. The Congress, in the Healthy Forests Restoration Act, also authorized $760 million per year to be appropriated for hazardous fuels reduction activities, including projects for reducing fuels on up to 20 million acres of land. 
Moreover, appropriations for both agencies' overall wildland fire management activities, including preparedness, suppression, and rehabilitation, have nearly tripled, from about $1 billion in fiscal year 1999 to over $2.7 billion in fiscal year 2004. The agencies have strengthened local wildland fire management implementation by making significant improvements in federal data and research on wildland fire over the past 5 years, including an initial mapping of fuel hazards nationwide. Additionally, in 2003, the agencies approved funding for development of a geospatial data and modeling system, called LANDFIRE, to map wildland fire hazards with greater precision and uniformity. LANDFIRE, estimated to cost $40 million and scheduled for nationwide implementation in 2009, will enable comparisons of conditions between different field locations nationwide, thus permitting better identification of the nature and magnitude of wildland fire risks confronting different community and ecosystem resources, such as residential and commercial structures, species habitat, air and water quality, and soils. The agencies also have improved local fire management planning by adopting and executing an expedited schedule to complete plans for all land units that had not been in compliance with agency requirements. The agencies also adopted a common interagency template for preparing plans to ensure greater consistency in their contents. Coordination among federal agencies and their collaboration with nonfederal partners, both critical to effective implementation at the local level, also have been improved. In 2001, as a result of congressional direction, the agencies jointly formulated a 10-Year Comprehensive Strategy with the Western Governors' Association to involve the states as full partners in their efforts. 
An implementation plan adopted by the agencies in 2002 details goals, time lines, and responsibilities of the different parties for a wide range of activities, including collaboration at the local level to identify fuel reduction priorities in different areas. Also in 2002, the agencies established an interagency body, the Wildland Fire Leadership Council, composed of senior Agriculture and Interior officials and nonfederal representatives, to improve coordination of their activities with each other and nonfederal parties. Accountability for the results the federal government achieves from its investments in wildland fire management activities also has been strengthened. The agencies have adopted a performance measure that identifies the number of acres moved from high-hazard to low-hazard fuel conditions, replacing a performance measure that counted only the total acres of fuel reductions and thus created an incentive to treat less costly acres rather than the acres that presented the greatest hazards. Additionally, in 2004, to have a better baseline for measuring progress, the Wildland Fire Leadership Council approved a nationwide framework for monitoring the effects of wildland fire. While an implementation plan is still needed for this framework, it nonetheless represents a critical step toward enhancing wildland fire management accountability. While the federal government has made important progress over the past 5 years in addressing wildland fire, a number of challenges still must be met to complete development of a cohesive strategy that explicitly identifies available long-term options and the funding needed to reduce fuels on the nation's forests and rangelands. Without such a strategy, the Congress will not have an informed understanding of when, how, and at what cost wildland fire problems can be brought under control. 
None of the strategic documents adopted by the agencies to date have identified these options and related funding needs, and the agencies have yet to delineate a plan or schedule for doing so. To identify these options and funding needs, the agencies will have to address several challenging tasks related to their data systems, fire management plans, and assessing the cost-effectiveness and affordability of different options for reducing fuels. The agencies face several challenges to completing and implementing LANDFIRE, so that they can more precisely identify the extent and location of wildland fire threats and better target fuel reduction efforts. These challenges include using LANDFIRE to better reconcile the effects of fuel reduction activities with the agencies’ other stewardship responsibilities for protecting ecosystem resources, such as air, water, soils, and species habitat, which fuel reduction efforts can adversely affect. The agencies also need LANDFIRE to help them better measure and assess their performance. For example, the data produced by LANDFIRE will help them devise a separate performance measure for maintaining conditions on low-hazard lands to ensure that their conditions do not deteriorate to more hazardous conditions while funding is being focused on lands with high-hazard conditions. In implementing LANDFIRE, however, the agencies will have to overcome the challenges presented by the current lack of a consistent approach to assessing the risks of wildland fires to ecosystem resources as well as the lack of an integrated, strategic, and unified approach to managing and using information systems and data, including those such as LANDFIRE, in wildland fire decision making. Currently, software, data standards, equipment, and training vary among the agencies and field units in ways that hamper needed sharing and consistent application of the data. 
Also, LANDFIRE data and models may need to be revised to take into account recent research findings that suggest part of the increase in wildland fire in recent years has been caused by a shift in climate patterns. This research also suggests that these new climate patterns may continue for decades, resulting in further increases in the amount of wildland fire. Thus, the nature, extent, and geographical distribution of hazards initially identified in LANDFIRE, as well as the costs for addressing them, may have to be reassessed. The agencies will need to update their local fire management plans when more detailed, nationally consistent LANDFIRE data become available. The plans also will have to be updated to incorporate recent agency fire research on approaches to more effectively address wildland fire threats. For example, a 2002 interagency analysis found that protecting wildland-urban interface communities more effectively—as well as more cost-effectively—might require locating a higher proportion of fuel reduction projects outside of the wildland-urban interface than currently envisioned, so that fires originating in the wildlands do not become too large to suppress by the time they arrive at the interface. Moreover, other agency research suggests that placing fuel reduction treatments in specific geometric patterns may, for the same cost, provide protection for up to three times as many community and ecosystem resources as do other approaches, such as placing fuel breaks around communities and ecosystem resources. Timely updating of fire management plans with the latest research findings on optimal design and location of treatments also will be critical to the effectiveness and cost-effectiveness of these plans. The Forest Service indicated that this updating could occur during annual reviews of fire management plans to determine whether any changes to them may be needed. 
Completing the LANDFIRE data and modeling system and updating fire management plans should enable the agencies to formulate a range of options for reducing fuels. However, to identify optimal and affordable choices among these options, the agencies will have to complete certain cost-effectiveness analysis efforts they currently have under way. These efforts include an initial 2002 interagency analysis of options and costs for reducing fuels, congressionally-directed improvements to their budget allocation systems, and a new strategic analysis framework that considers affordability. The Interagency Analysis of Options and Costs: In 2002, a team of Forest Service and Interior experts produced an estimate of the funds needed to implement eight different fuel reduction options for protecting communities and ecosystems across the nation over the next century. Their analysis also considered the impacts of fuels reduction activities on future costs for other principal wildland fire management activities, such as preparedness, suppression, and rehabilitation, if fuels were not reduced. The team concluded that the option that would result in reducing the risks to communities and ecosystems across the nation could require an approximate tripling of current fuel reduction funding to about $1.4 billion for an initial period of a few years. These initially higher costs would decline after fuels had been reduced enough to use less expensive controlled burning methods in many areas and more fires could be suppressed at lower cost, with total wildland fire management costs, as well as risks, being reduced after 15 years. Alternatively, the team said that not making a substantial short-term investment using a landscape focus could increase both costs and risks to communities and ecosystems in the long term. 
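The tripling figure cited above also implies a rough current funding baseline. The back-of-envelope sketch below makes that arithmetic explicit; both inputs are approximations taken from the testimony, not precise budget data.

```python
# Back-of-envelope check of the figures cited above. Both numbers are
# approximations from the testimony, not precise budget data.

short_term_need = 1.4e9  # the risk-reducing option's initial annual cost: about $1.4 billion
multiplier = 3           # an "approximate tripling" of current fuel reduction funding

implied_current_funding = short_term_need / multiplier
print(f"Implied current annual fuel reduction funding: "
      f"${implied_current_funding / 1e9:.2f} billion")
# roughly $0.47 billion per year
```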
More recently, however, Interior has said that the costs and time required to reverse current increasing risks may be less when the analysis also considers other vegetation management activities—such as timber harvesting and habitat improvements—that can influence wildland fire but were not included in the interagency team’s original assessment. The cost of the 2002 interagency team’s option that reduced risks to communities and ecosystems over the long term is consistent with a June 2002 National Association of State Foresters’ projection of the funding needed to implement the 10-Year Comprehensive Strategy developed by the agencies and the Western Governors’ Association the previous year. The state foresters projected a need for steady increases in fuel reduction funding up to a level of about $1.1 billion by fiscal year 2011. This is somewhat less than the interagency team’s estimate, but still about 2-1/2 times current levels. The interagency team of experts who prepared the 2002 analysis of options and associated costs said their estimates of long-term costs could only be considered an approximation because the data used for their national-level analysis were not sufficiently detailed. They said a more accurate estimate of the long-term federal costs and consequences of different options nationwide would require applying this national analysis framework in smaller geographic areas using more detailed data, such as that produced by LANDFIRE, and then aggregating these smaller-scale results. The New Budget Allocation System: Agency officials told us that a tool for applying this interagency analysis at a smaller geographic scale for aggregation nationally may be another management system under development—the Fire Program Analysis system. 
This system, being developed in response to congressional committee direction to improve budget allocation tools, is designed to identify the most cost-effective allocations of annual preparedness funding for implementing agency field units’ local fire management plans. Eventually, the Fire Program Analysis system, initially being implemented in 2005, will use LANDFIRE data and provide a smaller geographical scale for analyses of fuel reduction options and thus, like LANDFIRE, will be critical for updating fire management plans. Officials said that this preparedness budget allocation system, when integrated with an additional component now being considered for allocating annual fuel reduction funding, could be instrumental in identifying the most cost-effective long-term levels, mixes, and scheduling of these two wildland fire management activities. Fully developing the Fire Program Analysis system, including the fuel reduction funding component, is expected to cost about $40 million and take until at least 2007 and perhaps until 2009. The New Strategic Analysis Effort: In May 2004, Agriculture and Interior began the initial phase of a wildland fire strategic planning effort that also might contribute to identifying long-term options and needed funding for reducing fuels and responding to the nation’s wildland fire problems. This effort, the Quadrennial Fire and Fuels Review, is intended to result in an overall federal interagency strategic planning document for wildland fire management and risk reduction and to provide a blueprint for developing affordable and integrated fire preparedness, fuels reduction, and fire suppression programs. Because of this effort’s consideration of affordability, it may provide a useful framework for developing a cohesive strategy that includes identifying long-term options and related funding needs. 
The preliminary planning, analysis, and internal review phases of this effort are currently being completed and an initial report is expected in 2005. The improvements in data, modeling, and fire behavior research that the agencies have under way, together with the new cost-effectiveness focus of the Fire Program Analysis system to support local fire management plans, represent important tools that the agencies can begin to use now to provide the Congress with initial and successively more accurate assessments of long-term fuel reduction options and related funding needs. Moreover, a more transparent process of interagency analysis in framing these options and their costs will permit better identification and resolution of differing assumptions, approaches, and values. This transparency provides the best assurance of accuracy and consensus among differing estimates, such as those of the interagency team and the National Association of State Foresters. In November 2004, the Western Governors’ Association issued a report prepared by its Forest Health Advisory Committee that assessed implementation of the 10-Year Comprehensive Strategy, which the association had jointly devised with the agencies in 2001. Although the association’s report had a different scope than our review, its findings and recommendations are, nonetheless, generally consistent with ours about the progress made by the federal government and the challenges it faces over the next 5 years. In particular, it recommends, as we do, completion of a long-term federal cohesive strategy for reducing fuels. It also cites the need for continued efforts to improve, among other things, data on hazardous fuels, fire management plans, the Fire Program Analysis system, and cost-effectiveness in fuel reductions, all challenges we have emphasized today. 
The progress made by the federal government over the last 5 years has provided a sound foundation for addressing the problems that wildland fire will increasingly present to communities, ecosystems, and federal budgetary resources over the next few years and decades. But, as yet, there is no clear single answer about how best to address these problems in either the short or long term. Instead, there are different options, each needing further development to understand the trade-offs among the risks and funding involved. The Congress needs to understand these options and trade-offs in order to make informed policy and appropriations decisions on this 21st century challenge. This is the same message we provided in 1999 when we first called for development of a cohesive strategy identifying options and funding needs. But it still has not been completed. While the agencies are now in a better position to do so, they must build on the progress made to date by completing data and modeling efforts underway, updating their fire management plans with the results of these data efforts and ongoing research, and following through on recent cost-effectiveness and affordability initiatives. However, time is running out. Further delay in completing a strategy that cohesively integrates these activities to identify options and related funding needs will only result in increased long-term risks to communities, ecosystems, and federal budgetary resources. Because there is an increasingly urgent need for a cohesive federal strategy that identifies long-term options and related funding needs for reducing fuels, we have recommended that the Secretaries of Agriculture and the Interior provide the Congress, in time for its consideration of the agencies’ fiscal year 2006 wildland fire management budgets, with a joint tactical plan outlining the critical steps the agencies will take, together with related time frames, to complete such a cohesive strategy. 
In an April 2005 letter, Agriculture and Interior said that they will produce by August 2005, for the Wildland Fire Leadership Council’s review and approval, a joint tactical plan that will identify the steps and time frames for developing a cohesive strategy. Next, I would like to summarize the findings of our second report, being released today, that discusses ways to help protect homes and improve communications during wildland fires. Although wildland fire is a natural process that plays an important role in the health of many fire-adapted ecosystems, it has the potential to damage or destroy homes located in or near these wildlands, in the area commonly called the wildland-urban interface. Since 1984, wildland fires have burned an average of 850 homes each year in the United States, according to the National Fire Protection Association. However, losses since 2000 have risen to an average of 1,100 homes annually. In 2003, wildland fires in Southern California destroyed more than 3,600 homes and resulted in more than $2 billion in insured losses. Many homes are located in the wildland-urban interface nationwide, and the number is growing, although the risk to these homes from wildland fire varies widely. In California, for example, an estimated 4.9 million of the state’s 12 million housing units are located in or near the wildlands, and 3.2 million of these are at significant risk from wildland fire. As people continue to move to areas in or near fire-prone wildlands, the number of homes at risk from wildland fire is likely to grow. When a large high-intensity wildland fire occurs near inhabited areas, it can threaten hundreds of homes at the same time and overwhelm available firefighting resources. Homeowners can play an important role in protecting their homes from a wildland fire, however, by taking preventive steps to reduce their home’s ignition potential. 
These preventive measures can significantly improve a home’s chance of surviving a wildland fire, even without intervention by firefighting agencies. Once a wildland fire starts, many different agencies may assist in the efforts to manage or suppress it, including the Forest Service (within the Department of Agriculture); land management agencies in the Department of the Interior; state forestry agencies; local fire departments; private contract firefighting crews; and, in some cases, the military. Effective communications among responders—commonly called communications interoperability—is essential to fighting wildland fires successfully and ensuring both firefighter and public safety. Communications interoperability can be hampered because the various agencies responding to a fire may communicate over different radio frequency bands or with incompatible communications equipment. My testimony today summarizes key findings from our report released today and addresses: (1) measures that can help protect structures from wildland fires, (2) factors affecting the use of these protective measures, and (3) the role that technology plays in improving firefighting agencies’ ability to communicate during wildland fires. To understand how preventive steps can help protect homes from wildland fire requires an understanding of what wildland fire is, how it spreads, and how it can threaten homes. Fire requires three elements— oxygen, heat, and fuel—to ignite and continue burning. Once a fire has begun, a number of factors—including weather conditions and the type of nearby vegetation or other fuels—influence how fast and how intensely the fire spreads. Any combustible object in a fire’s path, including homes, can fuel a wildland fire. In fact, homes can sometimes be more flammable than the trees, shrubs, or other vegetation surrounding them. 
If any one of the three required elements is removed, however, such as when firefighters remove vegetation and other fuels from a strip of land near a fire—called a fire break—a fire will normally become less intense and eventually die out. Wildland fire can threaten homes or other structures in the following ways: Surface fires burn vegetation or other fuels near the surface of the ground, such as shrubs, fallen leaves, small branches, and roots. These fires can ignite a home by burning nearby vegetation and eventually igniting flammable portions of the home, including exterior walls or siding; attached structures, such as a fence or deck; or other flammable materials, such as firewood or patio furniture. Crown fires burn the tops, or crowns, of trees. Crown fires normally begin as surface fires and move up the trees by burning “ladder fuel,” such as nearby shrubs or low tree branches. Crown fires create intense heat and, if close enough—within approximately 100 feet—can ignite portions of structures even without direct contact from flames. Spot fires are started by embers, or “firebrands,” that can be carried a mile or more away from the main fire, depending on wind conditions. Firebrands can ignite a structure by landing on the roof or by entering a vent or other opening and may accumulate on or near homes. Firebrands can start many new spot fires or ignite many homes simultaneously, increasing the complexity of firefighting efforts. Recognizing that during severe wildland fires, suppression efforts alone cannot protect all homes threatened by wildland fire, firefighting and community officials are increasing their emphasis on preventive approaches that help reduce the chance that wildland fires will ignite homes and other structures. 
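The three ignition pathways just described can be summarized as a simple classification. The sketch below is illustrative only; the threshold is the approximate figure cited in the testimony (radiant heat from crown fires within roughly 100 feet), not an engineering value, and the function names are our own.

```python
# Illustrative sketch of the three ways a wildland fire can threaten a home,
# per the descriptions above. The 100-foot threshold is the testimony's
# approximate figure for crown fire radiant heat, not an engineering value.

def threat_pathways(surface_fuel_contact, distance_to_crown_fire_ft, firebrand_exposure):
    """Return the ignition pathways a home is exposed to."""
    pathways = []
    if surface_fuel_contact:
        # Surface fires reach homes via continuous fuels touching the structure.
        pathways.append("surface fire")
    if distance_to_crown_fire_ft is not None and distance_to_crown_fire_ft <= 100:
        # Crown fires can ignite structures by radiant heat within ~100 feet,
        # even without direct flame contact.
        pathways.append("crown fire (radiant heat)")
    if firebrand_exposure:
        # Firebrands can travel a mile or more, land on roofs, or enter vents.
        pathways.append("spot fire (firebrands)")
    return pathways

print(threat_pathways(True, 80, True))
# ['surface fire', 'crown fire (radiant heat)', 'spot fire (firebrands)']
```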
Because the vast majority of structures damaged or destroyed by wildland fires are located on private property, the primary responsibility for taking adequate steps to minimize or prevent damage from a wildland fire rests with the property owner and with state and local governments that can establish building requirements and land-use restrictions. When a wildland fire occurs, personnel from firefighting and other emergency agencies responding to it primarily use land mobile radio systems for communications. These systems include mobile radios in vehicles and handheld portable radios and operate using radio signals, which travel through space in the form of waves. These waves vary in length, and each wavelength is associated with a particular radio frequency. Radio frequencies are grouped into bands. Of the more than 450 frequency bands in the radio spectrum, 10, scattered across the spectrum, are allocated to public safety agencies. A firefighting or public safety agency typically uses a radio frequency band appropriate for its locale, either rural or urban. Bands at the lower end of the radio spectrum, such as VHF (very high frequency), work well in rural areas where radio signals can travel long distances without obstruction from buildings or other structures. Federal firefighting agencies, such as the Forest Service, and many state firefighting agencies operate radios in the VHF band. In urban areas, firefighting and other public safety agencies may operate radios on higher frequencies, such as those in the UHF (ultrahigh frequency) or 800 MHz bands, because these frequencies can provide better communications capabilities for an urban setting. When federal, state, and local emergency response agencies work together, for example to fight a fire in the wildland-urban interface, they may not be able to communicate with one another because they operate in different bands along the radio frequency spectrum. 
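The band-mismatch problem just described can be illustrated with a minimal sketch: two radios interoperate directly only if they operate in the same frequency band. The band names follow the testimony, but the agency-to-band assignments below are hypothetical examples, not actual allocations.

```python
# Minimal sketch of the band-mismatch problem described above.
# Agency-to-band assignments are hypothetical examples, not actual allocations.

AGENCY_BANDS = {
    "federal_wildland_crew": "VHF",   # e.g., Forest Service radios operate in VHF
    "state_forestry": "VHF",          # many state firefighting agencies also use VHF
    "city_fire_dept": "800 MHz",      # urban agencies often use UHF or 800 MHz
}

def can_communicate_directly(agency_a, agency_b):
    """Radios interoperate directly only when both operate in the same band."""
    return AGENCY_BANDS[agency_a] == AGENCY_BANDS[agency_b]

print(can_communicate_directly("federal_wildland_crew", "state_forestry"))  # True
print(can_communicate_directly("federal_wildland_crew", "city_fire_dept"))  # False
```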
Managing vegetation and reducing or eliminating flammable objects—often called defensible space—within 30 to 100 feet of a structure is a key protective measure. Creating such defensible space offers protection by breaking up continuous fuels that could otherwise allow a surface fire to contact and ignite a structure. Defensible space also offers protection against crown fires. Reducing the density of large trees around structures decreases the intensity of heat from a fire, thus preventing or reducing the chance of ignition and damage to structures. Analysis of homes burned during wildland fires has shown defensible space to be a key determinant of whether a home survives. For instance, the 1981 Atlas Peak Fire in California damaged or destroyed 91 out of 111 structures that lacked adequate defensible space but only 5 out of 111 structures that had it. The use of fire-resistant roofs and vents is also important in protecting structures from wildland fires. Many structures are damaged or destroyed by firebrands that can travel a mile or more from the main fire. Firebrands can land on a roof or enter a home through an opening, such as an attic vent, and ignite a home hours after the fire has passed. Fire-resistant roofing materials can reduce the risk that these firebrands will ignite a roof, and vents can be screened with mesh to prevent firebrands from entering and igniting attics. Combining fire-resistant roofs and vents with the creation of defensible space is particularly effective, because together these measures reduce the risk from surface fires, crown fires, and firebrands. Other technologies can also help protect individual structures from wildland fires. Fire-resistant windows constructed of double-paned glass, tempered glass, or glass block help protect a structure from wildland fire by reducing the risk of the window breaking and allowing fire to enter the structure. 
Fire-resistant building materials—such as fiber-cement, brick, stone, metal, and stucco—can be used for walls, siding, decks, and doors to help prevent ignition and subsequent damage from wildland fire. Chemical agents, such as foams and gels, are temporary protective measures that can be applied as an exterior coating shortly before a wildland fire reaches a structure. Although these agents have successfully been used to protect homes, such as during the Southern California fires in 2003, they require that someone be available to apply them and, possibly, reapply or rewet them to ensure they remain effective. They can also be difficult to clean up. Sprinkler systems, which can be installed inside or outside a structure, lower the risk of ignition or damage from wildland fires. Sprinklers, however, require reliable sources of water and, in some cases, electricity to be effective. According to firefighting officials, adequate water and electricity may not be available during a wildland fire. In addition to technologies aimed at protecting individual structures, technologies also exist or are being developed that can help reduce the risk of wildland fire damage to an entire community. A geographic information system (GIS) is a computer-based information system that can be used to efficiently store, analyze, and display multiple forms of information on a single map. GIS technologies allow fire officials and local and regional land managers to combine vegetation, fuel, and topography data into separate layers of a single GIS map to identify and prioritize areas needing vegetation management. State and county officials we met with emphasized the value of GIS in community-planning efforts to protect structures and communities from wildland fire damage within their jurisdictions. Fire behavior modeling has been used to predict wildland fire behavior, but these models do not accurately predict fire behavior in the wildland-urban interface. 
Existing models can help identify areas likely to experience intense wildland fires, identify suitable locations for vegetation management, predict the effect of vegetation treatments on fire behavior, and aid suppression by predicting the overall behavior of a given fire. These models do not, however, consider the effect that structures and landscaping have on wildland fire behavior. Automated detection systems use infrared, ultraviolet, or temperature-sensitive sensors placed around a community, or an individual home, to detect the presence of a wildland fire. On detecting a fire, a sensor could set off an audible alarm or could be connected via radio or satellite to a device that would notify homeowners or emergency personnel. Several such sensors could be networked together to provide broad coverage of the area surrounding a community. According to fire officials, sensor systems may prove particularly helpful in protecting communities in areas of rugged terrain or poor access where wildland fires might be difficult to locate. These systems are still in development, however, and false alarms are a concern. Many homeowners have not used protective measures—such as creating and maintaining defensible space—for four primary reasons: Time or expense. State and local fire officials estimate that the price of creating defensible space can range from negligible, in cases where homeowners perform the work themselves, to $2,000 or more. Moreover, defensible space needs to be maintained, resulting in additional effort or expense in the future. Further, while fire-resistant roofing materials are available that are comparable in cost to more flammable options and, for a home under construction, may result in no additional expense, replacing a roof on an existing home can cost thousands of dollars. Competing concerns. 
Although modifying landscaping to create defensible space has proven to be a key element in protecting structures from wildland fire, officials and researchers have reported that some homeowners are more concerned about the effect landscaping has on the appearance and privacy of their property, as well as on habitat for wildlife. Misconceptions about wildland fire behavior. Fire officials and researchers told us that some homeowners do not recognize that a structure and its surroundings constitute fuel that contributes to the spread of wildland fire or understand exactly how a wildland fire ignites structures. Further, they may not know that they can take effective steps to reduce their risk. Lack of awareness of homeowners’ responsibility. Fire officials told us that some homeowners in the wildland-urban interface may expect the same level of service they received in more urban areas and do not understand that rural areas may have fewer firefighting personnel, less equipment, and longer response times. Also, when a wildland fire burns near communities, so many houses may be threatened simultaneously that firefighters may be unable to protect all of them. Federal, state, and local agencies and other organizations are taking steps in three main areas to help increase the use of protective measures. First, government agencies and other organizations are educating people about the effectiveness of simple steps they can take to reduce the risk to homes and communities. The primary national education effort is the Firewise Communities program, which both educates homeowners about available protective measures and promotes additional steps that state and local officials can take to educate homeowners. Education efforts help demonstrate that defensible space can be attractive, provide privacy, and improve wildlife habitat. 
Second, some federal, state, and local agencies are directly assisting homeowners in creating defensible space by providing equipment or financial assistance to reduce fuels near structures. Under the National Fire Plan, for instance, federal firefighting agencies provide grants or otherwise assist in reducing fuels on private land. State and local governments have provided similar assistance. Third, some state and local governments have adopted laws that require maintaining defensible space around structures or the use of fire-resistant building materials. For example, California requires the creation and maintenance of defensible space around homes and the use of fire-resistant roofing materials in certain at-risk areas. Officials of one county we visited attributed the relatively few houses damaged by the 2003 Southern California fires in the county, in part, to its adoption and enforcement of laws requiring defensible space and the use of fire-resistant building materials. Not all states or localities at risk of wildland fire, however, have required such steps. Some state and local officials told us that laws had not been adopted because homeowners and developers resisted them. Furthermore, to be effective, laws that have been adopted must be enforced, and this does not always happen. Technologies are available or under development to help improve communications interoperability so that personnel from different public safety agencies responding to an emergency, such as a wildland fire, can communicate effectively with one another. Short-term, or patchwork, interoperability solutions use technology to interconnect two or more disparate radio systems so that voice or data from one system can be made available to all systems. The principal advantage of this solution is that agencies can continue to use existing communications systems, an important consideration when funds to buy new equipment are limited. 
Patchwork solutions include the following: Audio switches that provide interoperability by connecting radio and other communications systems to a device that sends the audio signal from one agency’s radio to all other connected radio systems. Audio switches can interconnect several different radio systems, regardless of the frequency bands or type of equipment used. Crossband repeaters that provide interoperability between systems operating on different radio frequency bands by receiving a signal on one system’s frequency and retransmitting it on the other’s. Console-to-console patches that are not “on-the-scene” devices but instead connect consoles located at the dispatch centers where calls for assistance are received. The device links the dispatch consoles of two radio systems so that the radios connected to each system can communicate with one another. Other interoperability solutions involve developing and adopting more sophisticated radio or communications systems that follow common standards or can be programmed to work on any frequency and to use any desired modulation type, such as AM or FM. These include: Project 25 radios, which must meet a set of standards for digital two-way radio systems that allow for interoperability between all jurisdictions using these systems. These radios are beginning to be adopted by a variety of federal, state, and local agencies. Software-defined radios that will allow interoperability among agencies using different frequency bands, proprietary systems from different manufacturers, or different modulation types (such as AM or FM). Software-defined radios, however, are still being developed and are not yet available for use by public safety agencies. Voice over Internet Protocol that treats both voice and data as digital information and enables their movement over any existing Internet Protocol data network. 
No standards exist for radio communications using Voice over Internet Protocol, and, as a result, manufacturers have produced proprietary systems that may not be interoperable. Whether the solution is a short-term patchwork approach or a long-term communications upgrade, officials we spoke with explained that planning and coordination among agencies are critical for successfully determining which technology to adopt and for agreeing on funding sources, timing, training, maintenance, and other key operational and management issues. State and local governments play an important role in developing and implementing plans for interoperable communications because they own most of the physical infrastructure for public safety systems, such as radios, base stations, repeaters, and other equipment. In the past, public safety agencies have depended on their own stand-alone communications systems, without considering interoperability with other agencies. Yet as firefighting and other public safety agencies increasingly work together to respond to emergencies, including wildland fires, personnel from different agencies need to be able to communicate with one another. Reports by GAO, the National Task Force on Interoperability, and others have identified lack of planning and coordination as key factors hampering communications interoperability among responding agencies. According to these reports, federal, state, and local government agencies have not worked together to identify their communications needs and develop a coordinated plan to meet them. Without such planning and coordination, new investments in communications equipment or infrastructure may not improve the effectiveness of communications among agencies. In recent years, the federal government, as well as several states and local jurisdictions, has focused increased attention on improving planning and coordination to achieve communications interoperability. 
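A patchwork device such as the audio switch described earlier can be thought of as a simple relay: audio received from any connected radio system is re-sent to every other connected system, regardless of band or equipment type. The sketch below is a conceptual model of that behavior only; real audio switches are hardware devices, and the system names are hypothetical.

```python
# Conceptual sketch of an audio switch: it relays audio from one connected
# radio system to all others, regardless of frequency band or equipment type.
# This models the behavior described in the testimony, not a real device.

class AudioSwitch:
    def __init__(self):
        self.systems = {}  # system name -> list of audio messages received

    def connect(self, system_name):
        """Attach a radio system (any band, any vendor) to the switch."""
        self.systems[system_name] = []

    def transmit(self, from_system, audio):
        """Relay the audio to every connected system except the sender."""
        for name, received in self.systems.items():
            if name != from_system:
                received.append(audio)

switch = AudioSwitch()
for system in ("VHF net", "UHF net", "800 MHz net"):
    switch.connect(system)

switch.transmit("VHF net", "engine crew requesting water drop")
print(switch.systems["800 MHz net"])  # ['engine crew requesting water drop']
print(switch.systems["VHF net"])      # [] (the sender does not hear its own relay)
```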
The Wireless Public Safety Interoperable Communications Program (SAFECOM), within the Department of Homeland Security’s Office of Interoperability and Compatibility, was established to address public safety communications issues within the federal government and to help state, local, and tribal public safety agencies improve their responses through more effective and efficient interoperable wireless communications. SAFECOM has undertaken a number of initiatives to enhance communications interoperability. For example, in a joint project with the commonwealth of Virginia, SAFECOM developed a methodology that could be used by states to assist them in developing a locally driven statewide strategic plan for enhancing communications interoperability. Several states have established statewide groups to address communications interoperability. For example, in Washington, the communications committee has developed a statewide public safety communication plan and an inventory of state government-operated public safety communications systems. Finally, some local jurisdictions are working together to identify and address communications interoperability issues. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact me at (202) 512-3841 or nazzaror@gao.gov, or Keith Rhodes at (202) 512-6412 or rhodesk@gao.gov. Individuals making key contributions to this testimony included Jonathan Altshul, Naba Barkakati, David P. Bixler, William Carrigg, Ellen Chu, Jonathan Dent, Janet Frisch, Barry T. Hill, Richard Johnson, Chester Joy, Nicholas Larson, Steve Secrist, and Amy Webbink. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Wildland fires are increasingly threatening communities and ecosystems. In recent years, they have become more intense due to excess vegetation that has accumulated, partly as a result of past suppression efforts. The cost to suppress these fires is increasing and, as more people move into fire-prone areas near wildlands, the number of homes at risk is growing. During these wildland fires, effective communications among the public safety agencies responding from various areas is critical, but can be hampered by incompatible radio equipment. This testimony discusses (1) progress made and future challenges to managing wildland fire, (2) measures to help protect structures, and (3) the role of technology in improving responder communications during fires. It is based on two GAO reports: Wildland Fire Management: Important Progress Has Been Made, but Challenges Remain to Completing a Cohesive Strategy (GAO-05-147, Jan. 14, 2005) and Technology Assessment: Protecting Structures and Improving Communications during Wildland Fires (GAO-05-380, Apr. 26, 2005).

Over the last 5 years, the Forest Service in the Department of Agriculture and land management agencies in the Department of the Interior, working with the Congress, have made important progress in responding to wildland fires. Most notably, the agencies have adopted various national strategy documents addressing the need to reduce wildland fire risks, established a priority to protect communities in the wildland-urban interface, and increased efforts and amounts of funding committed to addressing wildland fire problems. However, despite producing numerous planning and strategy documents, the agencies have yet to develop a cohesive strategy that identifies the long-term options and related funding needed to reduce excess vegetation that fuels fires in national forests and rangelands. Reducing these fuels lowers risks to communities and ecosystems and helps contain suppression costs. 
As GAO noted in 1999, such a strategy would help the agencies and the Congress to determine the most effective and affordable long-term approach for addressing wildland fire problems. Completing this strategy will require finishing several efforts now under way: improving a key wildland fire data and modeling system, completing local fire management planning, and developing a new system designed to identify the most cost-effective means for allocating fire management budget resources. Each of these efforts has its own challenges. Without completing these tasks, the agencies will have difficulty determining the extent and location of wildland fire threats, targeting and coordinating their efforts and resources, and resolving wildland fire problems in the most timely and cost-effective manner over the long term. The two most effective measures for protecting structures from wildland fires are (1) creating and maintaining a buffer around a structure by eliminating or reducing trees, shrubs, and other flammable objects within an area from 30 to 100 feet around the structure and (2) using fire-resistant roofs and vents. Other technologies—such as fire-resistant building materials, chemical agents, and geographic information system mapping tools—can help in protecting structures and communities, but they play a secondary role. Many homeowners, however, are not using these protective measures because of the time or expense involved, competing values or concerns, misperceptions about wildland fires, or lack of awareness of their shared responsibility for home protection. Federal, state, and local governments and others are attempting to address this problem through a variety of educational, financial assistance, and regulatory efforts. Technologies exist, and others are being developed, to address communications problems among emergency responders using different radio frequencies or equipment. However, technology alone cannot solve this problem. 
Effective adoption of these technologies requires planning and coordination among the federal, state, and local agencies involved. The Department of Homeland Security, as well as several states and local jurisdictions, is pursuing initiatives to improve communications.
The southwestern borderlands region contains many federally managed lands and also accounts for over 97 percent of all apprehensions of undocumented aliens by Border Patrol. Over 40 percent of the United States-Mexico border, or 820 linear miles, is managed by Interior’s land management agencies and the Forest Service. Each of these land management agencies has a distinct mission and set of responsibilities: The Bureau of Land Management manages federal land for multiple uses, including recreation; range; timber; minerals; watershed; wildlife and fish; natural scenic, scientific, and historical values; and the sustained yield of renewable resources. The Park Service conserves the scenery, natural and historical objects, and wildlife of the national park system so they will remain unimpaired for the enjoyment of this and future generations. The Fish and Wildlife Service preserves and enhances fish, wildlife, plants, and their habitats, primarily in national wildlife refuges. The Forest Service manages lands for multiple uses, such as timber, recreation, and watershed management and to sustain the health, diversity, and productivity of the nation’s forests and grasslands to meet the needs of present and future generations. Border Patrol’s mission is defined by the Immigration and Nationality Act, as amended, which gives the Secretary of Homeland Security the power and duty to control and guard the boundaries and borders of the United States against the illegal entry of people who are not citizens or nationals. To fulfill this mission, Border Patrol agents patrol federal and nonfederal lands near the border to find and apprehend persons who have illegally crossed the U.S. border. 
Agents carry out this mission primarily between ports of entry, located in cities such as El Paso, Texas, and San Ysidro, California, and have the authority to search, interrogate, and arrest undocumented aliens and others who are engaging in illegal activities, such as illegal entry and smuggling of people, drugs, or other contraband. Border Patrol is organized into nine sectors along the southwestern border. Within each sector, there are stations with responsibility for defined geographic areas. Of the 41 stations in the borderlands region in the 9 southwestern border sectors, 26 have primary responsibility for the security of federal lands in the borderlands region, according to Border Patrol sector officials. Apprehensions of undocumented aliens along the southwestern border increased steadily through the late 1990s, reaching a peak of 1,650,000 in fiscal year 2000. Since fiscal year 2006, apprehensions have declined, reaching a low of 540,000 in fiscal year 2009. This decrease has occurred along the entire border, with every sector reporting fewer apprehensions in fiscal year 2009 than in fiscal year 2006. The Tucson Sector, however, with responsibility for central and eastern Arizona, continues to have the largest number of apprehensions (see fig. 2). Border Patrol shares with land managers data on apprehensions and drug seizures occurring on federal land, providing such information in several ways, including in regularly occurring meetings and e-mailed reports. Border Patrol measures its effectiveness at detecting and apprehending undocumented aliens by assessing the border security status for a given area. 
The two highest border security statuses—“controlled” and “managed”—are levels at which Border Patrol has the capability to consistently detect entries when they occur; identify what the entry is and classify its level of threat (such as who is entering, what the entrants are doing, and how many entrants there are); effectively and efficiently respond to the entry; and bring the situation to an appropriate law enforcement resolution, such as an arrest. Areas deemed either “controlled” or “managed” are considered by Border Patrol to be under “operational control.” Patrol agents-in-charge of Border Patrol stations aim to achieve operational control of their jurisdictions by deploying a mix of personnel, technology, and tactical infrastructure, such as vehicle and pedestrian fences, in urban and rural areas along the border. These activities are part of DHS’s Secure Border Initiative—a multiyear, multibillion-dollar program aimed at securing U.S. borders and reducing illegal immigration. Since the program began in 2005, Border Patrol has nearly doubled the number of agents along the northern and southern U.S. borders to 20,200, with more than 17,000 agents (85 percent) on the southwestern border. According to Tucson Sector Border Patrol officials, having more agents has allowed the agency to patrol additional areas, such as remote federal lands. As part of routine operations to detect undocumented aliens, agents in remote areas typically travel on roads near the border—generally those that parallel the border east to west—several times a day in search of signs of illegal traffic, such as footprints. In addition to the increase in the number of agents along the southwestern border over the last 5 years, DHS has spent about $1.6 billion to provide technological resources in the borderlands region as part of the Secure Border Initiative. These resources include surveillance technologies, such as underground sensors, cameras, and radar, among other things. 
For example, to assist agents in detecting illegal entries, Border Patrol uses mobile surveillance systems (see fig. 3). These systems are mounted on trucks outfitted with towers that have infrared cameras and live video feeds for detecting suspected undocumented aliens. According to Border Patrol field agents, once an entry is detected, agents monitoring a system can direct other agents to respond and apprehend the suspected undocumented aliens. As illegal traffic shifts within a station’s area of operation—such shifts can occur daily—agents can move the mobile surveillance systems as needed. In addition to increasing the number of agents and technological resources along the border, DHS has installed hundreds of miles of tactical infrastructure as part of the Secure Border Initiative. Specifically, as of April 2010, the department had completed 646 of the 652 miles of border fencing it committed to deploy along the southwestern border, including pedestrian fencing and permanent vehicle barriers (see fig. 4). According to a Tucson Sector Border Patrol official, pedestrian fencing is typically located near urban areas and is designed to prevent people on foot from crossing the border. Vehicle barriers consist of physical barriers meant to stop the entry of vehicles; almost all the fencing on federal lands along the southwestern border consists of vehicle barriers. Border Patrol’s strategy emphasizes border enforcement in urban and populated areas, which can divert large concentrations of illegal traffic to outlying areas—including federal lands—where Border Patrol believes its agents have more time to detect and apprehend undocumented aliens attempting to cross vast and remote landscapes. A consequence of this strategy, however, is an impact on natural, historic, and cultural resources on federal lands—resources that land management agencies are charged with conserving, preserving, and protecting. 
According to a 2003 Interior report, endangered species and their habitats are potentially being irreversibly damaged by this illegal activity. In addition to damage caused by undocumented aliens traversing environmentally sensitive lands, Border Patrol’s deployment of personnel, technology, and infrastructure resources on federal lands can also have negative impacts on certain plants and wildlife that are protected under federal law. For example, according to a Fish and Wildlife Service refuge manager in the borderlands region, when Border Patrol agents use vehicles off road to patrol or pursue suspects on federal lands, the tire tracks left by their vehicles may remain for years (see fig. 5). The tracks from these off-road incursions can disrupt water flow from slopes and mountain ranges. This runoff normally pools and provides water for vegetation, which allows wildlife to survive through hot, dry summers. With tire tracks, the water collects in the tracks instead of in natural pools. As a result, pools are smaller and evaporate more quickly, leading to less vegetation, less available food, and fewer animals able to survive the summer. The number of undocumented aliens crossing federal lands along the southwestern border can overwhelm law enforcement and resource protection efforts by federal land managers, thus highlighting the need for Border Patrol’s presence on and near these lands, according to DHS and land management agency officials. The need for the presence of both kinds of agencies on these borderlands has prompted consultation among DHS, Interior, and Agriculture to facilitate coordination between Border Patrol and the land management agencies. The departments have a stated commitment to foster better communication and resolve issues and concerns linked to federal land use or resource management. 
When operating on federal lands, Border Patrol has responsibilities under several federal land management laws, including the National Environmental Policy Act of 1969, Wilderness Act of 1964, and Endangered Species Act of 1973, and it generally coordinates its responsibilities under these laws with land management agencies through national and local interagency agreements. Border Patrol must obtain permission or a permit from federal land management agencies before its agents can undertake certain activities on federal lands, such as maintaining roads and installing surveillance equipment. Because the land management agencies are responsible for ensuring compliance with land management laws, Border Patrol and the land management agencies have developed several mechanisms to coordinate their responsibilities. The most comprehensive of these is a national-level agreement—a memorandum of understanding signed in 2006 by the Secretaries of Homeland Security, the Interior, and Agriculture—intended to provide consistent principles to guide their agencies’ activities on federal lands. At the local level, Border Patrol and land management agencies have also coordinated their responsibilities through various local agreements. To obtain permission or a permit for such activities, Border Patrol and land management agencies must fulfill the requirements of various land management laws, including, but not limited to, the following: National Environmental Policy Act of 1969. Enacted in 1970, the National Environmental Policy Act is intended to promote efforts that will prevent or eliminate damage to the environment, among other things. 
Section 102 requires federal agencies to evaluate the likely environmental effects of proposed projects using an environmental assessment or, if the projects would likely significantly affect the environment, a more detailed environmental impact statement evaluating the proposed project and alternatives. Environmental impact statements can be developed at either a programmatic level—where larger-scale, combined effects and cumulative effects can be evaluated and where overall management objectives, such as road access and use, are defined—or a project level, where the effects of a particular project in a specific place at a particular time are evaluated. If, however, the federal agency determines that activities of a proposed project fall within a category of activities the agency has already determined has no significant environmental effect—called a categorical exclusion—then the agency generally does not need to prepare an environmental assessment or an environmental impact statement. The agency may instead approve projects that fit within the relevant category by using one of the predetermined categorical exclusions, rather than preparing a project-specific environmental assessment or environmental impact statement. When more than one federal agency is involved in an activity—as is the case with Border Patrol operations on federal lands—National Environmental Policy Act regulations require that a lead agency supervise the preparation of the environmental impact statement. Under a 2008 memorandum of agreement between Border Patrol and Interior’s land management agencies, Border Patrol is to be the lead agency on preparation of National Environmental Policy Act documents for all Border Patrol tactical infrastructure projects. For all other projects, such as road maintenance, Border Patrol or Interior land management agencies may be the lead, joint lead, or a cooperating agency. 
When Border Patrol and Interior land management agencies are joint lead agencies, they share responsibility for developing the scope and content of the environmental assessments and environmental impact statements. When either agency is a cooperating agency, it can develop its own environmental assessment or environmental impact statement or adopt the one developed by the lead agency if the cooperating agency reviews it and finds that its comments and suggestions have been satisfied. Once the lead and cooperating agencies agree on a draft environmental impact statement, a notice of its availability is published in the Federal Register and it is made available for public notice and comment for at least 45 days. The agencies are to then prepare a final environmental impact statement and publish a notice of its availability in the Federal Register. At least 30 days after the notice of availability for the final environmental impact statement is published, the lead agency must publish a record of its decision, describing how the findings of the environmental impact statement were incorporated into the agency’s decision-making process. Figure 6 illustrates the process for implementing National Environmental Policy Act requirements. National Historic Preservation Act of 1966. The National Historic Preservation Act provides for the protection of historic properties—any prehistoric or historic district, site, building, structure, object, or properties of traditional religious and cultural importance to an Indian tribe, included, or eligible for inclusion in, the National Register of Historic Places. For all projects receiving federal funds or a federal permit, section 106 of the act requires federal agencies to take into account a project’s effect on any historic property. 
In accordance with regulations implementing the act, Border Patrol and land management agencies often incorporate compliance with the National Historic Preservation Act into their required evaluations of a project’s likely environmental effects under the National Environmental Policy Act. Thus, the lead agency or agencies on Border Patrol’s proposed projects or activities on federal lands must determine, by consulting with relevant federal, state, and tribal officials, whether a project or activity has the potential to affect historic properties. The purpose of the consultation is to identify historic properties affected by the project; assess the activity’s adverse effects on the historic properties; and seek ways to avoid, minimize, or mitigate any of those effects. Specifically, the consultation is to determine and document a proposed action’s area of potential effects; assess whether the proposed project would alter, directly or indirectly, certain characteristics of the historic property; and develop and evaluate alternatives or modifications to the proposed project or activity that could avoid, minimize, or mitigate adverse effects. The entire process, including resolution of any adverse effects, must be completed before the relevant land management agency can issue a permit or grant permission to proceed with the proposed activity. Wilderness Act of 1964. The Wilderness Act of 1964 provides for federal lands to be designated as “wilderness areas,” which means that such lands are to be administered in such a manner that will leave them unimpaired for future use and enjoyment and to provide for their protection and the preservation of their wilderness character, among other goals. 
If Border Patrol proposes to patrol or install surveillance equipment on federal land that has been designated as wilderness, the agency must comply with the requirements and restrictions of the Wilderness Act of 1964, other laws establishing a particular wilderness area, and the relevant federal land management agency’s regulations governing wilderness areas. Section 4 of the act prohibits the construction of temporary roads or structures, as well as the use of motor vehicles, motorized equipment, and other forms of mechanical transport in wilderness areas, unless such construction or use is necessary to meet the minimum requirements for administration of the area, including for emergencies involving health and safety. Generally, the land management agencies have regulations that address the emergency and administrative use of motorized equipment and installations in the wilderness areas they manage. For example, under Fish and Wildlife Service regulations, the agency may authorize Border Patrol to use a wilderness area and prescribe conditions under which motorized equipment, structures, and installations may be used to protect the wilderness, including emergencies involving damage to property and violations of laws. Forest Service regulations are similar to Fish and Wildlife Service regulations but allow the agency to prescribe conditions to protect the wilderness and its resources, including in emergencies involving damage to property. Under Bureau of Land Management regulations, the agency may authorize Border Patrol to occupy and use wilderness areas to carry out the purposes of federal laws as well as prescribe conditions for Border Patrol’s use to protect the wilderness area, its resources, and users. Endangered Species Act of 1973. The purpose of the Endangered Species Act is to conserve threatened and endangered species and the ecosystems upon which they depend. 
Under section 7 of the act, if Border Patrol or the land management agencies determine that an activity Border Patrol intends to authorize, fund, or carry out may affect an animal or plant species listed as threatened or endangered, the agency may initiate either an informal or a formal consultation with the Fish and Wildlife Service—which we refer to as a section 7 consultation—to ensure that its actions do not jeopardize the continued existence of such species or result in the destruction or adverse modification of its critical habitat. The agencies are to initiate informal consultation if they determine that an activity may affect—but is not likely to adversely affect—a listed species or critical habitat. If the Fish and Wildlife Service agrees, typically by issuing a letter of concurrence with Border Patrol or the land management agency’s determination, then Border Patrol may proceed with the activity without further consultation. If Border Patrol or the land management agency determines that an activity is likely to adversely affect a species, formal consultation must be initiated, which involves submitting to the Fish and Wildlife Service a written request that includes a description of the proposed action and how it may affect threatened or endangered species and their critical habitat. The consultation usually ends with the issuance of a biological opinion by the Fish and Wildlife Service, and the opinion can contain provisions affecting Border Patrol activities. To help implement these key federal land management laws, Border Patrol and the land management agencies have developed several mechanisms to coordinate their responsibilities, including a national-level memorandum of understanding and local agreements. 
The national-level memorandum of understanding was signed in 2006 by the Secretaries of Homeland Security, the Interior, and Agriculture and is intended to provide consistent principles to guide the agencies’ activities on federal lands along the U.S. borders. Such activities may include placing and installing surveillance equipment, such as towers and underground sensors; using roads; providing Border Patrol with natural and cultural resource training; mitigating environmental impacts; and pursuing suspected undocumented aliens off road in wilderness areas. The memorandum also contains several provisions for resolving conflicts between Border Patrol and land managers, such as directing the agencies to resolve conflicts at and delegate resolution authority to the lowest field operations level possible and to cooperate with each other to complete—in an expedited manner—all compliance that is required by applicable federal laws. Some Border Patrol stations and land management agencies have coordinated their responsibilities through use of the national-level memorandum of understanding. For example, Border Patrol and land managers in Arizona used the 2006 memorandum of understanding to set the terms for reporting Border Patrol off-road vehicle incursions in Organ Pipe Cactus National Monument, as well as for developing strategies for interdicting undocumented aliens closer to the border in the Cabeza Prieta National Wildlife Refuge and facilitating Border Patrol access in the San Bernardino National Wildlife Refuge. Border Patrol and land management agencies have also coordinated their responsibilities through local agreements that were facilitated by the 2006 memorandum of understanding, which provides guidance on the development of individual local agreements. 
For example, for the Coronado National Forest in Arizona, Border Patrol and the Forest Service developed a coordinated strategic plan that sets forth conditions for improving and maintaining roads and locating helicopter landing zones in wilderness areas, among other issues. Regarding road maintenance, the plan states that sufficient funding has not been available for the Forest Service to perform road maintenance on many of the roads needed by Border Patrol for patrol and surveillance operations. It therefore sets forth the conditions for Border Patrol to use its own funding to pay for or perform road maintenance on the forest. Another example of a local agreement that resulted from the national-level 2006 memorandum of understanding is one between the Bureau of Land Management’s Las Cruces office and Border Patrol in New Mexico, concerning the maintenance of unpaved Bureau of Land Management roads. Specifically, in 2007, the agencies agreed in writing that the Bureau of Land Management is to promptly review Border Patrol road maintenance requests and expeditiously conduct necessary analysis of proposed requests, such as environmental and historic property assessments under the National Environmental Policy and National Historic Preservation acts. In addition, Border Patrol agreed to limit road maintenance so that it does not change the existing road profile or include new construction of drainage structures. Border Patrol and land managers have also used other mechanisms to coordinate their responsibilities, such as local agreements predating the 2006 memorandum of understanding, as well as a 2000 legal settlement requiring a section 7 consultation and an environmental impact statement resulting in measures that now govern Border Patrol’s activities in a certain area. 
For example, in California, officials in the Bureau of Land Management’s El Centro office sought input from officials in Border Patrol’s El Centro Sector in deciding which Bureau of Land Management roads to close as part of a comprehensive road designation and mapping project. In obtaining Border Patrol’s input, the Bureau of Land Management decided to keep open numerous roads that it had otherwise been planning to close. Border Patrol El Centro Sector officials told us they appreciated this local coordination, which allowed them the access they needed while helping the Bureau of Land Management balance its requirements for protecting resources and facilitating vehicle access by Border Patrol and the public. In addition, in 2000, Border Patrol settled a lawsuit alleging that its Operation Rio Grande in south Texas violated the National Environmental Policy Act and the Endangered Species Act. Among other terms, the settlement prohibited Border Patrol, on an interim basis, from mowing brush in the floodplain of the Rio Grande; clearing, burning, or driving through any brush or other vegetation in the floodplain, with some exceptions; and using lights at night to illuminate portions of the Lower Rio Grande National Wildlife Refuge property. The legal settlement also required Border Patrol to conduct section 7 consultations and prepare an environmental impact statement, which resulted in measures that now govern Border Patrol’s activities in and around the Fish and Wildlife Service’s South Texas Refuge Complex. Several other mechanisms have also been used to facilitate interagency coordination. For example, Border Patrol and Interior established interagency liaisons, who have responsibility for facilitating coordination among their agencies. 
Border Patrol’s Public Lands Liaison Agent program directs each Border Patrol sector to designate an agent dedicated to interacting with Interior, Agriculture, or other governmental or nongovernmental organizations involved in land management issues. The role of these designated agents is to foster better communication; increase interagency understanding of respective missions, objectives, and priorities; and serve as a central point of contact in resolving issues and concerns. Key responsibilities of these public lands liaison agents include implementing the 2006 memorandum of understanding and subsequent related agreements, and monitoring any enforcement operations, issues, or activities related to federal land use or resource management. In addition, Interior established its own Southwest Border Coordinator, located at the Border Patrol Tucson Sector, to coordinate federal land management issues among Interior component agencies and with Border Patrol. The Forest Service also established a dedicated liaison position in the Tucson Sector to coordinate with Border Patrol, according to Forest Service officials. In addition to these liaison positions, a borderlands management task force provides an intergovernmental forum in the field for governmental officials, including those from Border Patrol, the land management agencies, and other state and local government entities, to regularly meet and discuss challenges and opportunities for working together. The task force acts as a mechanism to address issues of security, safety, and resources among federal, tribal, state, and local governments located along the border. Border Patrol stations’ access has been limited on some federal lands along the southwestern border because of certain land management laws, according to some patrol agents-in-charge in the borderlands region. 
Specifically, 17 of the 26 stations that have primary responsibility for patrolling federal lands along the southwestern border reported that when they attempt to obtain a permit or permission to access portions of federal lands, delays and restrictions have resulted from complying with land management laws. Despite these delays and restrictions, 22 of the 26 Border Patrol stations reported that the border security status of their area of operation has not been affected by land management laws. Patrol agents-in-charge of 17 of 26 stations along the southwestern border reported that they have experienced delays and restrictions in patrolling and monitoring portions of federal lands because of various land management laws. Specifically, patrol agents-in-charge of 14 of the 17 stations reported that they have been unable to obtain a permit or permission to access certain areas in a timely manner because of how long it takes for land managers to comply with the National Environmental Policy Act and the National Historic Preservation Act. In addition, 3 of the 17 stations reported that their agents’ ability to access portions of federal lands has been affected by Wilderness Act restrictions on the creation of additional roads and installation of structures, such as SBInet towers. Furthermore, 5 of the 17 stations reported that as a result of consultations under section 7 of the Endangered Species Act, their agents had to change the timing or specific location of ground and air patrols because endangered species were present in these areas. Fourteen of the 26 Border Patrol stations along the southwestern border have reported experiencing delays in getting a permit or permission from land managers to gain access to portions of federal land because of the time it took land managers to complete the requirements of the National Environmental Policy Act and the National Historic Preservation Act. 
These delays in gaining access have generally lessened agents’ ability to detect undocumented aliens in some areas, according to the patrol agents-in-charge. The 2006 memorandum of understanding directs the agencies to cooperate with each other to complete, in an expedited manner, all compliance required by applicable federal laws, but such cooperation has not always occurred, as shown in the following examples: Federal lands in Arizona. For the Border Patrol station responsible for patrolling certain federal lands in Arizona, the patrol agent-in-charge reported that it has routinely taken several months to obtain permission from land managers to move mobile surveillance systems. The agent-in-charge said that before permission can be granted, land managers generally must complete environmental and historic property assessments—as required by the National Environmental Policy and National Historic Preservation acts—on roads and sites needed for moving and locating such systems. For example, Border Patrol requested permission to move a mobile surveillance system to a certain area, but by the time permission was granted—more than 4 months after the initial request—illegal traffic had shifted to other areas. As a result, Border Patrol was unable to move the surveillance system to the locale it desired, and during the 4-month delay, agents were limited in their ability to detect undocumented aliens within a 7-mile range that could have been covered by the system. The land manager for the federal land unit said that most of the area and routes through it have not had a historic property assessment, so when Border Patrol asks for approval to move equipment, such assessments must often be performed. Moreover, the federal land management unit has limited staff with numerous other duties. For example, the unit has few survey specialists who are qualified to perform environmental and historic property assessments.
Thus, he explained, resources cannot always be allocated to meet Border Patrol requests in an expedited manner. Federal lands in New Mexico. In southwestern New Mexico, the patrol agents-in-charge of four Border Patrol stations reported that it may take 6 months or more to obtain permission from land managers to maintain and improve roads that Border Patrol needs on federal lands to conduct patrols and move surveillance equipment. According to one of these agents-in-charge, for Border Patrol to obtain such permission from land managers, the land managers must ensure that environmental and historic property assessments are completed, which typically entails coordinating with three different land management specialists: a realty specialist to locate the site, a biologist to determine if there are any species concerns, and an archaeologist to determine if there are any historic sites. Coordinating schedules among these experts often takes a long time, according to a Border Patrol public-lands liaison. For example, one patrol agent-in-charge told us that a road in his jurisdiction needed to be improved to allow a truck to move an underground sensor, but the process for the federal land management agency to perform a historic property assessment and issue a permit for the road improvements took nearly 8 months. During this period, agents could not patrol in vehicles or use surveillance equipment to monitor an area that illegal aliens were known to use. The patrol agent-in-charge told us that performing such assessments on every road that might be used by Border Patrol would take substantial time and require assessing hundreds of miles of roads. According to federal land managers in the area, environmental and historic property specialists try to expedite support for Border Patrol as much as possible, but these specialists have other work they are committed to as well. 
Moreover, the office has not been provided any additional funding to increase personnel, so no one can be dedicated to expediting Border Patrol requests. Federal lands in California. For two Border Patrol stations responsible for patrolling federal lands in Southern California, the patrol agents-in-charge reported that when they request permission for road maintenance activities, it can take up to 9 months for permission to be granted; occasionally, Border Patrol may not receive permission at all. In one case, for example, a patrol agent-in-charge told us that better maintenance was needed for five roads and two surveillance system sites within her station’s area of operation, but because permission to maintain these roads was not granted, her agents could not conduct routine patrols or reach the sites for mobile surveillance systems, even in areas of high illegal traffic (see fig. 7). The patrol agent-in-charge said that without the permission to maintain the poor roads, her agents had to find alternative patrol routes and try to apprehend suspected undocumented aliens farther north. In addition, because the proposed surveillance sites could not be used, agents had to place the mobile surveillance systems in areas less prone to illegal traffic. The Bureau of Land Management state program manager for this area told us that one bureau employee had, at times, told Border Patrol agents that they could not use or have permission to maintain a road, whereas the employee should have instructed Border Patrol to seek permission from a Bureau of Land Management specialist, who could have begun the required environmental and historic property assessments. In addition, the state program manager told us that the required assessments for road maintenance activities have not been completed on many routes. He acknowledged that one of the Bureau of Land Management’s biggest challenges is being responsive to Border Patrol timelines.
A Bureau of Land Management field manager for this area also told us that the process to approve many Border Patrol projects often takes considerable time because the bureau lacks sufficient staff resources to expedite Border Patrol requests. For some of the stations, the delays patrol agents-in-charge reported could have been shortened if Border Patrol could have used its own resources to pay for, or perform, environmental and historic property assessments required by the National Environmental Policy Act and National Historic Preservation Act, according to patrol agents-in-charge and land managers with whom we spoke. On one land unit, Border Patrol and land managers have developed such a cooperative arrangement and resolved some access delays. Specifically, for the Coronado National Forest, agency officials told us that Border Patrol and the Forest Service had entered into an agreement whereby in some situations Border Patrol pays for road maintenance and the necessary environmental and historic property assessments. While two patrol agents-in-charge reported that in the past they experienced delays in gaining access resulting from poorly maintained roads, they stated that the development of the Coronado National Forest coordinated strategic plan has helped the agencies shorten the time it takes to begin road maintenance because it allows Border Patrol to use its resources and therefore begin environmental and historic property assessments sooner. The plan recognizes that Forest Service funding has not been available to adequately maintain the forest roads that Border Patrol uses for patrols. Officials from both agencies agreed that these roads must be in a drivable condition for Border Patrol agents. 
Agency officials stated that the agencies have also agreed to allow Border Patrol to fund additional Forest Service personnel to complete requirements for road maintenance and improvement under the National Environmental Policy Act and National Historic Preservation Act. The Coronado National Forest border liaison added that without this agreement, Forest Service would have been unable to meet Border Patrol’s road maintenance needs in a timely fashion. In other situations, using Border Patrol resources to pay for or perform road maintenance may not always expedite access; instead, land managers and Border Patrol officials told us that a programmatic environmental impact statement should be prepared under the National Environmental Policy Act to help expedite access. For example, some patrol agents-in-charge, such as those in southwestern New Mexico, told us that to conduct environmental and historic property assessments on every road that agents might use, on a case-by-case basis, would take substantial time and require assessing hundreds, if not thousands, of miles of roads. Moreover, when agents request permission to move mobile surveillance systems, the request is often for moving such systems to a specific location, such as a 60-by-60-foot area on a hill. Some agents told us, however, that it takes a long time to obtain permission from land managers because environmental and historic property assessments must be performed on each specific site, as well as on the road leading to the site. As we stated earlier, National Environmental Policy Act regulations recognize that programmatic environmental impact statements—broad evaluations of the environmental effects of multiple Border Patrol activities, such as road use and technology installation, in a geographic area—could facilitate compliance with the act.
By completing a programmatic environmental impact statement, Border Patrol and land management agencies could then subsequently prepare narrower, site-specific statements or assessments of proposed Border Patrol activities on federal lands, such as on a mobile surveillance system site alone, thus potentially expediting access. Patrol agents-in-charge for three stations reported that agents’ access to some federal lands was limited because of restrictions in the Wilderness Act on building roads and installing infrastructure, such as surveillance towers, in wilderness areas. For these stations, the access restrictions lessen the effectiveness of agents’ patrol and monitoring operations. However, land managers may grant permission for such activities if they meet the regulatory requirements for emergency and administrative use of motorized equipment and installations in wilderness areas. As shown in the following examples, land managers responsible for two wilderness areas are working with Border Patrol agents to provide additional access as allowed by the regulations for emergency and administrative use. On the other hand, a land manager responsible for a third wilderness area has denied some Border Patrol requests for additional access. Cabeza Prieta National Wildlife Refuge, Arizona. At the Cabeza Prieta National Wildlife Refuge, Wilderness Act restrictions have limited the extent to which Border Patrol agents can use vehicles for patrols and technology resources to detect undocumented aliens. The patrol agent-in-charge responsible for patrolling Cabeza Prieta told us that the refuge has few roads. She told us that her agents’ patrol operations would be more effective with one additional east-west road close to the border.
Over 8,000 miles of roads and trails created by undocumented aliens and law enforcement activity throughout the refuge’s wilderness have been identified by refuge staff; according to the patrol agent-in-charge, having an additional east-west road would give Border Patrol more options in using its mobile surveillance system to monitor significant portions of the refuge that are susceptible to undocumented-alien traffic. Additionally, the patrol agent-in-charge told us that better access could benefit the natural resources of the refuge because it could lead to more arrests closer to the border—instead of throughout the refuge—and result in fewer Border Patrol off-road incursions. The refuge manager agreed that additional Border Patrol access may result in additional environmental protection. He told us that he is working with Border Patrol to develop a strategy at the refuge that would allow Border Patrol to detect and apprehend undocumented aliens closer to the border. Further, the refuge manager in February 2010 gave permission for Border Patrol to install an SBInet tower on the refuge, which may also help protect the wilderness area. Coronado National Forest, Arizona. In parts of the Coronado National Forest, Wilderness Act restrictions also limit the extent to which Border Patrol agents at one station can use vehicles to patrol parts of the forest and detect undocumented aliens. Specifically, patrol agents-in-charge of one station told us that their agents’ access to part of the wilderness area has been limited—in large part because of the rugged terrain, but also because of restrictions on creating new roads in wilderness areas. According to Tucson Sector Border Patrol officials, more undocumented aliens cross the Coronado National Forest than any other federal land unit along the southwestern border, and much of this illegal traffic has recently shifted to a particular area of wilderness. 
Coronado National Forest officials told us they recognized the need for greater Border Patrol access and that such access could also help protect the forest’s natural resources. As a result, according to Coronado National Forest officials, they approved the creation of four helicopter landing zones in the wilderness area because Forest Service wilderness regulations allow the agency to prescribe conditions for Border Patrol’s use of motorized equipment and installations to protect the wilderness and its resources. Construction of these landing zones, however, has been delayed until 2011, according to Coronado National Forest officials. In addition, Forest Service permitted Border Patrol to install technological resources—such as remote video surveillance systems and ground-based radar—in the rough terrain where road creation is infeasible, such as in the wilderness area. According to an agreement between Border Patrol and Coronado National Forest officials, installing this technology helps Border Patrol agents detect undocumented aliens and allows agents time to respond by helicopter, horseback, or all-terrain vehicle to apprehend undocumented aliens in these areas. Organ Pipe Cactus National Monument, Arizona. Contrasting with the Cabeza Prieta refuge and the Coronado National Forest, when Border Patrol requested additional access in Organ Pipe’s wilderness area, the monument’s land manager determined that additional Border Patrol access would not necessarily improve protection of natural resources. For the Border Patrol station responsible for patrolling Organ Pipe, the patrol agent-in-charge told us that certain Border Patrol activities have been restricted because of the monument’s status as wilderness, and Border Patrol’s requests for additional access have been denied. 
Specifically, Border Patrol proposed placing an SBInet tower within the monument, and from the proposed site, the tower was expected to enable Border Patrol to detect undocumented aliens in a 30-square-mile range. But because the proposed site was in a designated wilderness area, the land manager denied Border Patrol’s request. Instead, Border Patrol installed the tower in an area within the monument that is owned by the state of Arizona. At this site, however, the tower has a smaller surveillance range and cannot cover about 3 miles where undocumented aliens are known to cross, according to the patrol agent-in-charge, thus lessening Border Patrol’s ability to detect entries compared with the originally proposed site. In addition, the patrol agent-in-charge explained that because of the tower’s placement, when undocumented aliens are detected, agents have less time to apprehend them before they reach mountain passes, where it is easier to avoid detection. According to the land manager, he requested that Border Patrol find a different location for the tower because the Wilderness Act restricts placement of such infrastructure in wilderness areas. Further, he explained that Border Patrol did not demonstrate to him that the proposed tower site was critical, as compared with the alternative, and that agents’ ability to detect undocumented aliens would be negatively affected. Five Border Patrol stations reported that as a result of consultations required by section 7 of the Endangered Species Act, agents have had to adjust the timing or specific locales of their ground and air patrols to minimize the patrols’ impact on endangered species and their critical habitats. As shown in the following examples, although some delays and restrictions have occurred, Border Patrol agents were generally able to adjust their patrols with little loss of effectiveness in their patrol operations. Coronado National Forest, Arizona. 
For a Border Patrol station responsible for patrolling an area within the Coronado National Forest, the patrol agent-in-charge reported that a section 7 consultation placed restrictions on helicopter and vehicle access because of the presence of endangered species. First, during parts of the year when certain endangered species are in residence, helicopter flight paths have been restricted. Nevertheless, the agent-in-charge told us, the restrictions, which result in alternative flight paths, do not lessen the effectiveness of Border Patrol’s air operations. Moreover, according to the Forest Service District Ranger, since the area’s rugged terrain presents a constant threat to agents’ safety, Border Patrol agents have been allowed to use helicopters as needed, regardless of endangered species’ presence. Second, the agent-in-charge told us, Border Patrol wanted to improve a road within the area to provide better access, but because of the proposed project’s adverse effects on an endangered plant, road improvement could not be completed near a low point where water crossed the road. Border Patrol worked with Forest Service officials to improve 3 miles of a Forest Service road up to the low point, but the crossing itself—about 8 feet wide—along with 1.2 miles of road east of it was not improved. According to the agent-in-charge, agents still patrol the area but must drive vehicles slowly because of the road’s condition east of the low point. Cabeza Prieta National Wildlife Refuge, Arizona. The patrol agent-in-charge of the station responsible for patrolling the Cabeza Prieta National Wildlife Refuge told us that as a result of section 7 consultations, her helicopter patrols have been restricted when certain endangered species are known to be in an area. Once she hears from refuge staff about the endangered species’ location, her agents adjust their air operations to patrol and pursue undocumented aliens farther north in the refuge.
She told us that her agents’ ability to detect and apprehend suspected undocumented aliens has not been compromised by these adjustments. Instead, she explained, communication with the refuge manager about the location of the endangered species is all that has been needed. According to the refuge manager, refuge staff are currently developing a system that will provide Border Patrol with “real-time” information on the endangered species’ location, which they plan to complete before the end of the year. San Bernardino National Wildlife Refuge, Arizona. For the Border Patrol station responsible for patrolling the San Bernardino National Wildlife Refuge, the patrol agent-in-charge told us that vehicle access has been restricted in the refuge because vehicle use can threaten the habitat of threatened and endangered species. Since establishment of the refuge in 1982, locked gates have been in place on the refuge’s administrative roads (see fig. 8). But Border Patrol station officials told us that in the last several years, with the increase in the number of agents assigned to the station, they wanted to have vehicle access to the refuge. The terms for vehicle access had to be negotiated with the refuge manager because of the access restrictions imposed to protect endangered species habitat. The patrol agent-in-charge told us that Border Patrol and the refuge manager agreed to place Border Patrol locks on refuge gates and to allow second-level Border Patrol supervisors, on a case-by-case basis, to determine whether vehicle access to the refuge is critical. If such a determination is made, a Border Patrol supervisor unlocks the gate and contacts refuge staff to inform them that access was granted through a specific gate. The patrol agent-in-charge told us that operational control has not been affected by these conditions for vehicle access.
Nevertheless, he said, additional technology, such as mobile surveillance systems, would be helpful in detecting undocumented aliens in the remote areas in and around the refuge. Despite the access delays and restrictions reported for 17 stations, most patrol agents-in-charge whom we interviewed said that the border security status of their jurisdictions has been unaffected by land management laws. Instead, factors other than access delays or restrictions, such as the remoteness and ruggedness of the terrain or dense vegetation, have had the greatest effect on their abilities to achieve or maintain operational control. While four patrol agents-in-charge reported that delays and restrictions negatively affected their ability to achieve or maintain operational control, they have either not requested resources to facilitate increased or timelier access or have had their requests denied by senior Border Patrol officials, who said that other needs were greater priorities for the station or sector. Patrol agents-in-charge at 22 of the 26 stations along the southwestern border told us that their ability to achieve or maintain operational control in their areas of responsibility has been unaffected by land management laws; in other words, no portions of these stations’ jurisdictions have had their border security status, such as “controlled,” “managed,” or “monitored,” downgraded as a result of land management laws. Instead, for these stations, the primary factor affecting operational control has been the remoteness and ruggedness of the terrain or the dense vegetation their agents patrol and monitor. Specifically, patrol agents-in-charge at 18 stations told us that stark terrain features—such as rocky mountains, deep canyons, and dense brush—have negatively affected their agents’ abilities to detect and apprehend undocumented aliens. 
A patrol agent-in-charge whose station is responsible for patrolling federal land in southern California told us that the terrain is so rugged that Border Patrol agents must patrol and pursue undocumented aliens on foot; even all-terrain vehicles specifically designed for off-road travel cannot traverse the rocky terrain. He added that because of significant variations in topography, such as deep canyons and mountain ridges, surveillance technology can also be ineffective in detecting undocumented aliens who hide there (see fig. 9). In addition, patrol agents-in-charge responsible for patrolling certain Fish and Wildlife Service land reported that dense vegetation limits agents’ ability to patrol or monitor much of the land. One agent explained that Border Patrol’s technology resources were developed for use in deserts where few terrain features obstruct surveillance, whereas the vegetation in these areas is dense and junglelike (see fig. 10). Most patrol agents-in-charge also told us that the most important resources for achieving and maintaining operational control are (1) a sufficient number of agents; (2) additional technology resources, such as mobile surveillance systems; and (3) tactical infrastructure, such as vehicle and pedestrian fencing. For example, in the remote areas of one national wildlife refuge, a patrol agent-in-charge told us that even with greater access in the refuge, he would not increase the number of agents patrolling it to gain improvements in operational control. Instead, he said, additional technology resources, such as a mobile surveillance system, would be more effective in achieving operational control of the area because such systems would assist in detecting undocumented aliens while allowing agents to maintain their presence in and around a nearby urban area, where the vast majority of illegal entries occur. 
His view, and those of other patrol agents-in-charge whom we interviewed, is underscored by Border Patrol’s operational assessments—twice-yearly planning documents that stations and sectors use to identify impediments to achieving or maintaining operational control and to request resources needed to achieve or maintain operational control. In these assessments, stations have generally requested additional personnel or technology resources for their operations on federal lands. Delays or restrictions in gaining access have generally not been identified in operational assessments as an impediment to achieving or maintaining operational control for the 26 stations along the southwestern border. Of the 26 patrol agents-in-charge we interviewed, 4 reported that delays and restrictions in gaining access to federal lands have negatively affected their ability to achieve or maintain operational control: 2 of these 4 agents reported not having used Border Patrol’s operational assessments to request resources to facilitate increased or timelier access, and the other 2 reported having had such requests denied by either Border Patrol sector or headquarters officials. For example, the patrol agent-in-charge responsible for an area in southwestern New Mexico told us that operational control in a remote area of his jurisdiction is partly affected by the scarcity of roads. Specifically, having an additional road in this area would allow his agents to move surveillance equipment to an area that, at present, is rarely monitored. Nevertheless, a supervisory agent for the area told us, station officials did not request additional access through Border Patrol’s operational assessments. The 2006 memorandum of understanding directs Border Patrol to consult with land managers when developing operational assessments if Border Patrol needs additional access on federal lands.
Land managers in this area told us they would be willing to work with Border Patrol to facilitate such access, if requested. Similarly, the patrol agent-in-charge at a Border Patrol station responsible for patrolling another federal land unit also reported that his ability to achieve operational control is affected by a shortage of east-west roads in the unit. He told us that some of his area of operation could potentially reach operational control status if there were an additional east-west road for patrolling certain areas within the unit to detect and apprehend undocumented aliens. Border Patrol requested an additional east-west road, but the land manager denied the request because the area is designated wilderness, according to the agent-in-charge. The agent explained that he did not use the operational assessment to request additional roads because the land manager denied his initial request. The land manager told us that he would be willing to work with Border Patrol to facilitate additional access if it could be shown that such access would help increase deterrence and apprehensions closer to the border. For the other two stations reporting that federal land management laws have negatively affected their ability to achieve or maintain operational control, Border Patrol sector or headquarters officials have denied the stations’ requests for resources to facilitate increased or timelier access—typically for budgetary reasons. For example, one patrol agent-in-charge reported that 1.3 miles of border in her area of responsibility are not at operational control because, unlike most other border areas, it has no access road directly on the border. Further, she explained, the rough terrain has kept Border Patrol from building such a road; instead, a road would need to be created in an area designated as wilderness.
According to the patrol agent-in-charge, her station asked Border Patrol’s sector office for an access road, and the request was submitted as part of the operational requirements-based budgeting program. As of July 2010, the request had not been approved because of budgetary constraints, according to the agent-in-charge. In addition, another patrol agent-in-charge told us, few roads lie close to the river that runs through his area of responsibility. As a result, his agents have to patrol and monitor nearly 1 mile north of the international border, much closer to urban areas. According to officials with Border Patrol’s relevant sector office, they have been using the operational assessments for several years to request an all-weather road, but approval and funding have not been granted by Border Patrol’s headquarters. While federal land managers along the southwestern border receive data collected by Border Patrol on the extent of cross-border illegal activities on their lands, the extent of land managers’ data collection efforts on the effects of these illegal activities has varied among land units, with some land managers regularly monitoring areas to determine resource impacts, others documenting environmental damage on an ad hoc basis, and still others collecting no such data. Where collected, land managers have used data on the environmental effects of cross-border illegal activity, as well as data provided by Border Patrol on the extent of cross-border illegal activity, for several land management and conservation purposes. These purposes include (1) restoring lands and mitigating environmental damage, (2) providing Border Patrol agents with environmental and cultural awareness training, (3) protecting staff and visitors, and (4) establishing conservation measures to reduce adverse effects of Border Patrol actions on endangered species and their habitats. 
Land managers generally rely on Border Patrol for data on cross-border illegal activity, including data on apprehensions of undocumented aliens and drug seizures occurring on federal lands. In accordance with the 2006 memorandum of understanding, Border Patrol officials share data with land managers, and officials have done so in a variety of ways, including at regular meetings and in e-mailed reports. For example, Border Patrol provides statistics on apprehensions and drug seizures to land managers during the monthly meetings of borderlands management task forces. Formed in each Border Patrol sector along the southwestern border, these task forces serve as a forum for Border Patrol and land managers, among others, to discuss and share information on border-related issues on public lands. During these meetings, Border Patrol has typically provided written statistics on cross-border illegal activity occurring on federal land units throughout each sector. The extent of land managers’ efforts to collect data on the environmental effects of cross-border illegal activity along the southwestern border has varied, with some land managers (5 of 18) regularly collecting and analyzing data on the environmental effects of cross-border illegal activity, including acres burned by wildland fires, miles of trampled vegetation from illegal trails, and amounts of trash collected. Other land managers (10 of 18) reported having collected data on an irregular basis. Still other land managers (3 of 18) reported having collected no such data. Examples of ongoing efforts by land managers to collect and analyze these kinds of data include the following: At Organ Pipe Cactus National Monument, land managers have conducted a semiannual inventory and monitoring program since 2002 to assess the extent of natural and cultural resource damage from cross-border illegal activity.
The land managers delineate and walk five east-west lines, or transects, that cross known illegal trafficking routes, and along each transect, monument staff have recorded and mapped resource impacts, such as trails, trash, and fire scars. Land managers from the Cleveland National Forest in California have annually collected and reported a variety of data on environmental impacts, which show that since 2002, nearly 59,000 pounds of trash left by undocumented aliens have been collected, and over 19,000 acres of forest have burned from fires started by undocumented aliens. The Bureau of Land Management, through its restoration work on federal lands throughout southern Arizona, has annually collected data since 2003 on the quantities of trash, vehicles, and bicycles removed from public land and acres of land restored. Land managers from the Cabeza Prieta National Wildlife Refuge have collected data annually since 2005 on illegal trails, damaged vegetation, and sites with large amounts of trash. They collect these data along 12 transects established by refuge staff, which are traveled on foot by volunteers and refuge staff who record information on environmental impacts. Cabeza Prieta has also inventoried the damage caused by foot and vehicle traffic, mapped smuggling routes through the refuge, and assessed priorities for restoration. Other land managers’ data collection has been done with less regularity. For example, land managers from the Fish and Wildlife Service’s South Texas Refuge Complex—which includes the Laguna Atascosa, Santa Ana, and Lower Rio Grande Valley national wildlife refuges—told us that although they do not regularly collect data on the environmental impacts of cross-border illegal activity, their staff has estimated that thousands of illegal trails and tons of trash and human waste have been found on the three wildlife refuges within the complex.
In addition, at the Coronado National Memorial in Arizona, land managers have at times mapped the major trails used by undocumented aliens through the memorial, taken aerial and satellite photos to document damage, and documented disturbances to the foraging habitat of the endangered lesser long-nosed bat.

Three land managers we spoke with had not made any formal effort to collect data on the environmental effects of cross-border illegal activity, although they believed that adverse environmental effects were occurring. A land manager with the Bureau of Land Management’s Las Cruces office in New Mexico said that his office had requested funding to collect data on the environmental effects of increased human presence on bureau lands—including inventorying and documenting the extent of illegal trails, trash, and impacts to animal species—but had received no funding to carry out these data collection efforts.

In addition to collecting data on the environmental impacts of cross-border illegal activity, land managers in some areas have also collected data on the environmental effects of Border Patrol’s response to cross-border illegal activities. For example, land managers for Organ Pipe Cactus National Monument and Cabeza Prieta National Wildlife Refuge have created maps showing the extent of off-road vehicle travel by Border Patrol agents. Such travel can disrupt endangered species and damage vegetation, soils, and water runoff patterns, according to these land managers.
Land managers use data they have collected on the environmental effects of cross-border illegal activity, as well as data provided by Border Patrol on the extent of cross-border illegal activity, for several purposes, including (1) restoring lands and mitigating environmental damage, (2) providing Border Patrol agents with environmental and cultural awareness training, (3) protecting staff and visitors, and (4) establishing conservation measures to reduce adverse effects of Border Patrol actions on endangered species and their habitats.

Some land managers have used environmental data and data on cross-border illegal activity to help restore lands damaged by undocumented aliens. For example, since 2003, the Bureau of Land Management has been working with federal, state, and tribal partners to administer the Southern Arizona Project. Through this project, partners have coordinated and executed cleanup and restoration activities throughout southern Arizona. In fiscal year 2009, for example, participants in the Southern Arizona Project removed 468,000 pounds of trash, 62 vehicles, and 404 bicycles and restored 650 acres of land that were damaged by illegal traffic (see fig. 11). The Bureau of Land Management reported that the project focused its remediation effort on restoring illegally created roads and trails, which included grading the disturbed sites, removing invasive brush, and reseeding areas with native plants.

Land managers with Interior have also used selected data to identify and select natural resource projects to offset the environmental impacts of constructing pedestrian and vehicle fences. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 mandated installation of additional physical barriers and roads near the border, including 14 miles of additional fencing near San Diego, California. The act waived the provisions of the Endangered Species Act and the National Environmental Policy Act to the extent that the U.S.
Attorney General determined necessary to ensure expeditious construction of barriers and roads. The REAL ID Act of 2005 amended the 1996 act to authorize the Secretary of Homeland Security to waive all legal requirements that the Secretary, at his or her sole discretion, determines necessary to ensure expeditious construction. In 2007, the act was amended again to require, among other things, that the Secretary (1) construct not less than 700 miles of fencing along the southwestern border where such fencing would be most practical and effective and (2) consult widely, including with the Secretaries of the Interior and Agriculture, to minimize the impact of the fencing on the environment, among other things. In instances where the Secretary invoked this waiver authority, DHS voluntarily prepared plans—termed environmental stewardship plans—estimating the expected environmental impacts of particular fencing segments and worked with Interior to develop strategies to reduce or minimize adverse environmental impacts. Where adverse environmental impacts such as habitat loss, heavy sedimentation, or erosion could not be minimized or averted, DHS committed funding to allow Interior to carry out appropriate mitigation measures (see fig. 12). Using the environmental stewardship plans to identify appropriate mitigation measures, DHS committed up to $50 million to Interior for implementing such measures. Interior in turn was to identify $50 million worth of projects to benefit threatened and endangered species and their habitats. Projects identified by Interior include acquiring land for the endangered Otay Mountain arroyo toad in California and implementing jaguar monitoring and conservation projects across Arizona and New Mexico (see app. II for the complete list of mitigation projects). 
According to Interior and DHS officials, Interior and DHS signed an agreement on September 28, 2010, for the transfer of $6.8 million to mitigate impacts on endangered species along the southwestern border. This agreement is the first of several anticipated over the next year to transfer funds totaling $50 million from DHS to Interior for such mitigation projects, according to an Interior official.

Some land managers told us they have used information on the environmental effects of cross-border illegal activity to design and provide training to Border Patrol agents on ways to minimize environmental damage that their response to illegal activities may cause, in accordance with the 2006 memorandum of understanding. Twenty of the 26 patrol agents-in-charge we interviewed told us that their agents received training from land managers in the form of either in-person training, training tools such as videos, or both. All 20 patrol agents-in-charge reported that the training provided by land managers had increased their agents’ awareness of the potential resource effects of their patrol operations, and some said that this increased awareness has led agents to modify their patrols. For example, 10 patrol agents-in-charge said that their agents’ increased environmental awareness had helped reduce off-road driving in environmentally sensitive areas and that, when possible, agents were more likely to use foot or horse patrols instead of vehicle patrols. Nevertheless, many patrol agents-in-charge reported wanting more frequent, land unit-specific, in-person training for their agents. For example, 11 patrol agents-in-charge reported wanting more frequent training, including regular refresher training, and suggested frequencies for this training that ranged from quarterly to annually. Further, 10 patrol agents-in-charge reported that having information delivered by land managers was the clearest, most effective way to communicate with agents.
Three patrol agents-in-charge also said they would like training to be area-specific, meaning that the training should describe the specific natural and cultural resources of the area they patrol. Land managers and other officials told us that limited resources and competing priorities, combined with the high rate of turnover among Border Patrol agents, can make it difficult to provide timely, in-person training on a regular basis.

Recognizing the need for natural and cultural resource training for Border Patrol agents, DHS, Interior, and the Forest Service in 2009 formed a task force on environmental and cultural stewardship training. Officials of these agencies told us that the task force is developing a content outline for a national training module and has collected nationwide information on training that land managers have provided to Border Patrol stations, discussed requirements for the national module, and discussed an overall strategy for implementing the module. As of September 2010, the task force had not made any decisions on what information the training module is to include and had not asked field staff what training content they needed, according to DHS and Interior officials involved in developing the training. But as we have previously reported, stakeholder involvement throughout the planning and development of such a training program contributes to accomplishing the agencies’ missions and goals. Adopting core characteristics of a strategic training and development process can also help ensure that agencies’ training investments are targeted strategically and not directed toward efforts that are irrelevant, duplicative, or ineffective.

Some land managers have also used data provided by Border Patrol on cross-border illegal activity to help make decisions related to staff and visitor safety.
For example, managers of some federal lands have placed signs warning the public that they may encounter cross-border illegal activity, or they have distributed border safety awareness flyers at visitor centers and trailheads (see figs. 13 and 14). In some cases, federal land managers have closed portions of their lands to the public and restricted staff access to certain areas unless accompanied by law enforcement agents. As illustrated by the following examples, Interior and the Forest Service have faced numerous challenges providing a safe environment for visitors, employees, and residents on federal lands along the southwestern border: In 2002 at Organ Pipe Cactus National Monument, a drug smuggler shot and killed a park ranger. Following this and other reports of increasing violence, about half of the monument has been closed to the public since 2007. In 2005, five undocumented aliens were murdered at Buenos Aires National Wildlife Refuge in Arizona. As the result of illegal activity and heavy law enforcement action, about 3,500 acres have been closed to the public since 2006. In a 2006 testimony, the supervisor of Cleveland National Forest stated that armed bandits had threatened, robbed, raped, and assaulted undocumented aliens traveling through the forest and that money, firearms, and other personal possessions had been taken from national forest employee residences and private residences. Since 2007, Cabeza Prieta National Wildlife Refuge has been requiring law enforcement escorts for refuge staff and volunteers working within several miles of the border. In 2009, the South Texas Refuge Complex reported that many refuge tracts adjacent to the Rio Grande were closed to visitors in part because of illegal immigration, human smuggling, and drug smuggling.
In addition, the Fish and Wildlife Service reported in a 2007 internal document that it had not done enough to inform the public and key political officials about the dangers presented by cross-border smuggling activities. Illustrating this shortcoming, Fish and Wildlife Service South Texas Refuge Complex officials told us that refuge staff will tell visitors—when asked—of potential border issues during their visit, but that no standard public notification system exists, such as handouts, signs, or other means. Interior lacks a nation- or borderwide system to analyze trends in illegal activity, according to department headquarters officials. These officials told us, however, that Interior is in the early stages of developing an incident management analysis and reporting system to provide a method for collecting, analyzing, and reporting information on illegal activity from all bureaus. Furthermore, these officials explained that this system is to assist officials in making staff and visitor safety decisions on Interior lands.

The Fish and Wildlife Service has also used data related to the environmental impacts of cross-border illegal activity to prepare biological opinions that establish measures to reduce potential adverse effects of Border Patrol actions on endangered species and their critical habitats along the southwestern border. For example, in a 2009 biological opinion, the Fish and Wildlife Service analyzed data on Border Patrol agents’ off-road vehicle use, routine activities at bases of operations, and road dragging, among other activities. The Service determined that these activities disturbed a certain endangered species and that establishment of a Border Patrol base of operations—including housing, lighting, parking, fuel, and generators for agents stationed at the base—contributed to the disturbance of the species by disrupting its traditional travel route.
To mitigate these and other adverse impacts, Border Patrol agreed that no aircraft use, off-road vehicle travel, or other activities would occur within a quarter-mile of areas important for the species, except in emergency situations as defined by the 2006 memorandum of understanding. In south Texas, the Fish and Wildlife Service analyzed data on Border Patrol activities—including portable and permanent lighting, clearing of vegetation for patrol roads and ports of entry, and patrolling activities along the Rio Grande. The Fish and Wildlife Service determined that these activities have fragmented and reduced the amount of habitat suitable for the endangered ocelot. To minimize impacts to the ocelot and other species, Border Patrol agreed to a variety of measures, including working cooperatively with the Fish and Wildlife Service to identify lighting sites that would use 450-watt bulbs instead of 1,000-watt bulbs and reducing the number of roads through the river corridor to reduce habitat fragmentation.

The Fish and Wildlife Service also collected data on the environmental effects that construction, operation, and maintenance of SBInet towers in the Tucson Sector—including the construction and repair of roads and the placement of underground sensors—would have on several threatened and endangered species, including the Chiricahua leopard frog, Mexican spotted owl and its critical habitat, jaguar, lesser long-nosed bat, and Pima pineapple cactus. Land managers collected data on a range of impacts on these species, including habitat disturbance and loss; loss of foraging habitat; disturbance from nighttime lights and noise associated with construction, generators, and helicopter landings; and the potential to introduce nonnative plant species that contribute fuel to wildland fires.
To minimize these impacts, Border Patrol has agreed to participate in several species’ recovery plans, to close and restore unauthorized roads to help offset the increase in new or improved roads, and to fund monitoring efforts for some species.

The steady northward flow of illegal human and narcotics traffic across the nation’s southwestern border shows no sign of stopping, and Border Patrol retains and asserts the ability to pursue undocumented aliens when and how it sees fit. Certain land management laws present some challenges to Border Patrol’s operations on federal lands, limiting to varying degrees the agency’s access to patrol and monitor some areas. With limited access for patrols and monitoring, some illegal entries may go undetected. This challenge can be exacerbated as illegal traffic shifts to areas where Border Patrol has previously not needed, or requested, access. Although mechanisms established in the 2006 memorandum of understanding provide a framework for Border Patrol and the federal land management agencies to resolve access issues, some issues remain unresolved. This lack of resolution remains because land management agencies have not always been able to complete required environmental and historic property assessments in a timely fashion—often because of limited resources or competing priorities—and the agencies have not taken advantage of resources that Border Patrol may have to offer to more quickly initiate these assessments. Moreover, conducting these required assessments on a case-by-case basis and without programmatic environmental impact statements to facilitate compliance with the National Environmental Policy Act may be a missed opportunity to expedite Border Patrol’s access to federal borderlands.
Border Patrol agents and land managers agree that Border Patrol’s presence is needed to protect natural and cultural resources on federal lands because, for instance, fewer illegal entries mean less human traffic over environmentally sensitive areas. What agents perceive as routine patrol operations, however, can also have a lasting negative effect on the environment. Border Patrol has provided its new agents with some basic environmental training, but such training often is neither recurring nor specific to the land units that agents patrol. Land managers, on the other hand, have the natural and cultural resource expertise to share with agents about the potential environmental effects of their operations. Without more frequent and area-specific environmental and cultural resource training by land managers, Border Patrol agents may lack the awareness to modify their patrols in environmentally sensitive areas.

To improve the effectiveness of Border Patrol operations while also protecting cultural and natural resources on federal lands along the southwestern border, we recommend that the Secretaries of Homeland Security, the Interior, and Agriculture take the following two actions: To help expedite Border Patrol’s access to federal lands, the agencies should, when and where appropriate, (a) enter into agreements that provide for Customs and Border Protection to use its own resources to pay for or to conduct the required environmental and historic property assessments and (b) prepare programmatic National Environmental Policy Act documents for Border Patrol activities in areas where additional access may be needed.
As DHS, Interior, and the Forest Service continue developing a national training module on environmental and cultural resource stewardship, the agencies should incorporate the input of Border Patrol agents and land managers into the design and development of training content, which may include training that is recurring, area-specific, and provided by land managers.

We provided a draft of this report for review and comment to the Departments of Homeland Security, the Interior, and Agriculture. DHS, Interior, and the Forest Service, responding on behalf of Agriculture, agreed with our report’s conclusions and recommendations. DHS’s and the Forest Service’s written comments are reprinted in appendixes III and IV, respectively; Interior provided its comments on October 7, 2010, by e-mail through its liaison to GAO. Interior also provided technical comments, which we incorporated into the report as appropriate.

We are sending copies of this report to the appropriate congressional committees; the Secretaries of Homeland Security, the Interior, and Agriculture; and other interested parties. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Our objectives were to (1) describe the key land management laws Border Patrol operates under and how Border Patrol and land management agencies coordinate their responsibilities under these laws, (2) examine how Border Patrol operations are affected by these laws, and (3) identify the extent to which land management agencies collect data related to cross-border illegal activities and associated environmental impacts and how these data are used.
To describe the key land management laws Border Patrol operates under and how Border Patrol and land management agencies coordinate their responsibilities under these laws, we examined agency documents describing the laws that apply to Border Patrol operations on federal lands along the southwestern border and documents describing how Border Patrol and land management agencies are to coordinate their responsibilities under these laws. We corroborated our selection of key laws through interviews with Border Patrol, Department of the Interior, and U.S. Forest Service officials in headquarters and at field units. To determine how Border Patrol and land management agencies coordinate their responsibilities under these laws, we interviewed relevant agency officials; reviewed local agreements, including documentation from local working groups and forums, and documentation related to a legal settlement over Border Patrol activities in a certain area with endangered species; and reviewed the provisions of the 2006 interagency memorandum of understanding between the Department of Homeland Security (DHS), Interior, and the Department of Agriculture. In our interviews with Border Patrol agents and land managers, we determined how these various coordinating mechanisms have helped the agencies implement their respective legal responsibilities.

To examine how Border Patrol’s operations are affected by the laws we identified, we conducted selected site visits to 10 federal land units in Arizona, California, and Texas and to Border Patrol stations responsible for patrolling these units. We selected these units, and the stations responsible for patrolling them, on the basis of geographical diversity, the extent of and impact from cross-border illegal activity, and the type of land management agency. Further, we conducted telephone interviews with land managers for federal land units along the border that we did not visit, including those in New Mexico.
In total, we interviewed land managers responsible for 18 federal land units along the southwestern border. Although the information we obtained is not generalizable to all land units, it represents a full spectrum of information available on the extent of and impact from cross-border illegal activity. In addition, we developed and used a structured interview to obtain the views of the patrol agents-in-charge of the 26 Border Patrol stations in the borderlands region with primary responsibility for patrolling federal lands along the southwestern border. We surveyed these agents on whether and to what extent their operations have been affected by land management laws. We also analyzed documentation on how Border Patrol measures the effectiveness of its operations and reviewed 2 years (2009 and 2010) of Border Patrol operational assessments.

To examine the extent to which land managers collect data related to cross-border illegal activities and associated environmental impacts and how these data are used, we obtained a variety of data from land managers. Specifically, we identified what kinds of data land managers have collected and what kinds of data they have relied on Border Patrol to provide, and we reviewed the varying quantities and types of data that land managers had on the environmental effects of cross-border illegal activities. We also reviewed data that land managers have collected on the environmental effects of Border Patrol’s response to cross-border illegal activities, such as constructing fences and using vehicles off established roads to pursue suspected undocumented aliens. We also used information from our structured interviews with Border Patrol agents. Additionally, we obtained environmental data that DHS and land managers used to determine funding for mitigation efforts related to environmental damage caused by certain DHS border fencing projects.
Through our interviews with land managers and reviews of their data collection efforts, we analyzed the various ways that land managers have used data on cross-border illegal activity and its environmental impacts. This analysis included reviewing how land managers have used data to set priorities for and carry out cleanup and restoration work, reviewing the various types of environmental stewardship training provided by land managers to Border Patrol agents, reviewing numerous biological opinions related to Border Patrol activities, and documenting various ways land managers help ensure staff and visitor safety on federal lands. We corroborated these data by obtaining and reviewing them where possible.

We conducted this performance audit from December 2009 to October 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

[Appendix II table of mitigation projects by state, including implementation of the Sasabe biological opinion (jaguar, bat, and soil stabilization) and the Lukeville biological opinion (Sonoran pronghorn, bat conservation), both in Arizona.]

In addition to the contact named above, David P. Bixler, Assistant Director; Nathan Anderson; Ellen W. Chu; Charlotte Gamble; Rebecca Shea; Jeanette Soares; and Richard M. Stana made major contributions to this report. Also contributing to this report were Joel Aldape, Lacinda Ayers, Muriel Brown, and Brian Lipman.
Over the last 5 years, Border Patrol has nearly doubled the number of its agents on patrol, constructed hundreds of miles of border fence, and installed surveillance equipment on and near lands managed by the Departments of the Interior and Agriculture along the southwestern border. In so doing, the agency has had to comply with federal land management laws, and some have expressed concern that these laws may limit agents' abilities to detect and apprehend undocumented aliens. GAO was asked to examine (1) key land management laws Border Patrol operates under and how it and land management agencies coordinate their responsibilities under these laws; (2) how Border Patrol operations are affected by these laws; and (3) the extent to which land management agencies collect and use data related to the environmental effects of illegal activities, such as human trafficking and drug smuggling. GAO reviewed key land management laws, interviewed agents-in-charge at 26 Border Patrol stations responsible for patrolling federal southwest borderlands, and interviewed managers of these lands. When operating on federal lands, Border Patrol has responsibilities under several federal land management laws, including the National Environmental Policy Act, National Historic Preservation Act, Wilderness Act, and Endangered Species Act. Border Patrol must obtain permission or a permit from federal land management agencies before its agents can maintain roads and install surveillance equipment on these lands. Because land management agencies are also responsible for ensuring compliance with land management laws, Border Patrol generally coordinates its responsibilities under these laws with land management agencies through national and local interagency agreements. The most comprehensive agreement is a 2006 memorandum of understanding intended to guide Border Patrol activities on federal lands. 
Border Patrol's access to portions of some federal lands along the southwestern border has been limited because of certain land management laws, according to patrol agents-in-charge for 17 of the 26 stations, resulting in delays and restrictions in agents' patrolling and monitoring these lands. Specifically, patrol agents-in-charge for 14 of the 17 stations reported that they have been unable to obtain a permit or permission to access certain areas in a timely manner because of how long it takes for land managers to conduct required environmental and historic property assessments. The 2006 memorandum of understanding directs the agencies to cooperate with one another to complete, in an expedited manner, all compliance required by applicable federal laws, but such cooperation has not always occurred. For example, Border Patrol requested permission to move surveillance equipment to an area, but by the time the land manager conducted a historic property assessment and granted permission--more than 4 months after the initial request--illegal traffic had shifted to other areas. Despite the access delays and restrictions, 22 of the 26 agents-in-charge reported that the overall security status of their jurisdiction is not affected by land management laws. Instead, factors such as the remoteness and ruggedness of the terrain have the greatest effect on their ability to achieve operational control. Although 4 agents-in-charge reported that delays and restrictions have affected their ability to achieve or maintain operational control, they either have not requested resources for increased or timelier access or have had their requests denied by senior Border Patrol officials, who said that other needs were more important. 
While federal land managers in the borderlands region rely on Border Patrol to collect data on the extent of cross-border illegal activities on their lands, the extent of the land managers' data collection efforts on the effects of these illegal activities has varied. Some land managers monitor areas on a routine basis, some document environmental damage on an ad hoc basis, and still others collect no such data. Where collected, land managers have used these data for several purposes, including restoring lands and providing Border Patrol agents with environmental awareness training. With regard to training, most agents-in-charge wanted more-frequent, area-specific training to be provided by land managers. GAO recommends, among other things, that the Secretaries of Homeland Security, the Interior, and Agriculture take steps to help Border Patrol expedite access to portions of federal lands by more quickly initiating required assessments. In commenting on a draft of this report, the agencies generally agreed with GAO's findings and recommendations.
The Omnibus Budget Reconciliation Act of 1989 (P.L. 101-239, Dec. 19, 1989) authorized Medicare payment to RPCHs for inpatient and outpatient services. Program participation was limited to seven states, and HCFA selected California, Colorado, Kansas, New York, North Carolina, South Dakota, and West Virginia. California has no certified RPCHs. RPCHs had to be located in rural counties and were limited to six inpatient acute care beds. Initially, RPCH inpatient stays were limited to a 72-hour maximum, but section 102 of the Social Security Act Amendments of 1994 (P.L. 103-432, Oct. 31, 1994) changed the requirement to an average of 72 hours during a cost-reporting year for periods beginning on or after October 1, 1995. RPCHs employ midlevel practitioners—physician assistants and nurse practitioners—working under the supervision of a physician, who is not required to be located at the RPCH. RPCHs are not allowed to provide surgery requiring general anesthesia but may perform surgeries normally done under local anesthesia on an outpatient basis at a hospital or ambulatory surgical center. We found few surgical procedures being performed at RPCHs during 1993-96. In September 1993, the first RPCH, located in South Dakota, was certified to participate in Medicare. As shown in table 1, there were 38 certified RPCHs as of August 1997.

In addition to RPCHs, the Congress authorized a demonstration program for the operation of limited-service hospitals that was implemented by Montana. Under this program, Medicare was authorized to pay for basic emergency care, outpatient services, and limited inpatient care (maximum stay of 96 hours) provided at these limited-service hospitals, known as medical assistance facilities (MAF). In our October 1995 report, we found that the MAFs were important sources of emergency and primary care for their communities.
MAFs primarily served patients with urgent but uncomplicated conditions and stabilized patients with more complicated needs before transferring them to full-service hospitals. Moreover, Medicare’s costs for inpatient care at MAFs were lower than if the care had been furnished in rural hospitals. While full-service hospitals normally are paid under Medicare’s prospective payment system (PPS), both RPCHs and MAFs are paid on a cost-reimbursement basis, as are critical access hospitals (CAH), the replacement program for them. Like MAFs, CAHs are limited to 96-hour inpatient stays but can have 15 beds rather than the 6 for RPCHs. Both types of limited-service hospitals are scheduled to make the transition into CAHs by October 1, 1998. As envisioned when the program was authorized, most RPCH inpatients have less complex illnesses that do not require intensive or high-technology care. Patients with more extensive health needs who go to RPCHs are generally stabilized and transferred to larger acute care hospitals, another important service to the community. In addition, RPCHs often serve as the source of primary care for residents in their areas. The average stay for the 1,708 inpatients treated by RPCHs between September 1993 and May 1996 was 2.85 days. They were assigned to 137 different diagnosis-related groups (DRG)—9 surgical DRGs covering 11 cases and 128 medical DRGs covering 1,697 cases. Ten of the 11 surgical cases were from one RPCH located in South Dakota. A state official confirmed that this RPCH performs surgeries like those performed in ambulatory surgery centers that do not require general anesthesia. As we found when we reviewed services provided by MAFs in Montana, the three medical conditions most commonly treated by the RPCHs were pneumonia (247 cases), heart failure and shock (141 cases), and inflammation of the digestive canal (99 cases). Together these three conditions accounted for 29 percent of the 1,708 cases, which is similar to the 28 percent they represented in MAFs. 
Conditions classified as respiratory, circulatory, and digestive disorders accounted for 1,107 cases (65 percent) and 48 of the DRGs (35 percent) treated at RPCHs. (See app. II for a summary of inpatient DRGs treated at RPCHs.) During the period covered by our review, 163 of the 1,708 inpatients (9.5 percent) were transferred from an RPCH to an acute-care hospital. The average RPCH stay for these patients was 1.9 days. During calendar years 1993 through 1996, about 5.6 percent of Medicare inpatients at other rural hospitals in Kansas, North Carolina, South Dakota, and West Virginia were transferred to another hospital. The percentage of RPCH patients transferred is 4 percentage points higher because one function of an RPCH is to stabilize patients and prepare them for transfer to a facility if the treatment they need is beyond the scope of RPCH services. In addition to providing inpatient care, RPCHs provide local primary care for many Medicare beneficiaries. The 13 RPCHs treated more than 6,700 different Medicare beneficiaries during their latest available cost-reporting period and submitted more than 28,000 outpatient claims for services for these patients (see table 2). Outpatient services included visits with physicians and physician assistants, laboratory tests, influenza shots, colonoscopies, electrocardiograms, diagnostic radiology services, and emergency care. Medicare paid about $4.9 million for these outpatient services (see app. III for a summary of Medicare outpatient costs by RPCH by cost-reporting year). Medicare payments for the 1,545 beneficiaries who received all their inpatient care from an RPCH totaled about $4.6 million, a little over $1,000 per day (see app. III). The average length of stay for these beneficiaries was 2.95 days. As we found when we made a similar comparison for MAF inpatient costs, these costs compared favorably with the amount Medicare would have paid if those patients had been treated at rural PPS hospitals. 
Table 3 compares, by RPCH and cost-reporting year, payments to RPCHs with the payments that would have been made to rural and urban PPS hospitals. Overall, costs at the 12 RPCHs covering 17 cost-reporting periods were about $404,000 more than the amount Medicare would have paid rural PPS hospitals. However, payments for treatment at the 12 RPCHs were about $207,000 less than the amount Medicare would have paid for treating the same conditions at urban hospitals. (See app. IV for individual RPCH cost comparisons to PPS payments.) Although RPCH costs are slightly higher (8.8 percent) than PPS payments to rural hospitals, RPCH costs would have been lower if the claims included in our review had complied with the 72-hour maximum length-of-stay requirement in effect when these admissions occurred. About 21 percent of the 1,545 stays exceeded the 72-hour limit, accounting for 630 inpatient days incurred after the third day. These days cost the Medicare program an estimated $612,000. Because of the way cost reimbursement works, not all of the cost of these days would be saved by eliminating them: the fixed costs allocated to the days would be reallocated to the remaining days of care and paid by Medicare. However, variable costs would be reduced if hospitals complied with the 72-hour limit, lowering overall Medicare costs. We believe such compliance would have made RPCH inpatient costs less than similar inpatient costs in rural PPS hospitals. Under a 96-hour limit, which CAHs have under the Balanced Budget Act of 1997 (BBA), the costs associated with longer stays would still have been significant. About 8 percent of the 1,545 inpatient stays included in our analysis would have exceeded the 96-hour limit. These stays had a total of 304 covered inpatient days after the fourth day. Payments for those days totaled an estimated $295,000. 
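The excess-day arithmetic described above can be sketched in a few lines. The stay lengths and the $1,000 average daily cost in this example are hypothetical illustrations, not figures from the report's claims data.

```python
# Sketch of the excess-day calculation: days beyond the limit (3 days under
# the 72-hour rule, 4 days under the 96-hour rule) are counted and costed at
# a facility's average daily Medicare cost. All figures here are hypothetical.

def excess_days_and_cost(stays, day_limit):
    """stays: list of (length_of_stay_days, avg_daily_cost) pairs.
    Returns (stays over the limit, total excess days, estimated cost)."""
    n_over = total_days = 0
    total_cost = 0.0
    for days, daily_cost in stays:
        if days > day_limit:
            excess = days - day_limit
            n_over += 1
            total_days += excess
            total_cost += excess * daily_cost
    return n_over, total_days, total_cost

# Three hypothetical stays at a facility averaging $1,000 per Medicare day.
stays = [(2, 1000.0), (5, 1000.0), (6, 1000.0)]
print(excess_days_and_cost(stays, day_limit=3))  # 72-hour limit: (2, 5, 5000.0)
print(excess_days_and_cost(stays, day_limit=4))  # 96-hour limit: (2, 3, 3000.0)
```

As the report notes, only the variable portion of these estimated costs would actually be saved under cost reimbursement, since fixed costs would be reallocated to the remaining days of care.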
Turning to the cost to Medicare for patients who are transferred from RPCHs: regardless of what kind of hospital makes the transfer, all transfers result in higher costs to Medicare because two facilities receive payment for the same patient. Under PPS, the transferring hospital receives a per diem payment determined by dividing the PPS payment by the geometric mean length of stay associated with the patient’s DRG. The hospital from which the patient is finally discharged receives the full PPS payment for the patient’s DRG. When patients are transferred from RPCHs, the RPCH receives cost-based reimbursement for the patient, and the hospital from which the patient is finally discharged receives the full PPS payment. Medicare RPCH payments for the 163 beneficiaries who were initially treated at an RPCH and transferred to a full-service PPS hospital totaled about $322,000 (see app. III). These RPCH stays averaged 1.9 days. We estimate that these costs were about $148,000 (about $910 per case) greater than the amount Medicare would have paid an acute-care hospital in per diem payments if the patient had first gone to an acute-care PPS hospital for the same length of time. Appendix V lists the hospitals to which patients were transferred. As of August 1997, 51 limited-service hospitals (38 RPCHs and 13 MAFs) were authorized to treat Medicare patients. Effective October 1, 1997, these limited-service hospitals were to start making a transition into a new nationwide program—the Medicare Rural Hospital Flexibility Program—and to be renamed CAHs. As the number of CAHs increases, it will become more important for HCFA to monitor the inpatient stay and physician certification requirements established by the Congress. HCFA had no established way of ensuring that RPCHs complied with the 72-hour length-of-stay limitation when it was in effect or of assessing whether cases outside the limit met one of the allowable exceptions. 
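The two payment streams described above reduce to simple formulas. The sketch below uses hypothetical dollar amounts and stay lengths; the function names are ours, not Medicare terminology.

```python
# Sketch of the transfer-payment rules described above (hypothetical figures).

def pps_transfer_per_diem_payment(full_drg_payment, geometric_mean_los, days):
    """Payment to a PPS hospital that transfers a patient: the full DRG
    payment divided by the DRG's geometric mean length of stay, times the
    days the patient spent at the transferring hospital."""
    return (full_drg_payment / geometric_mean_los) * days

def rpch_transfer_payment(avg_daily_cost, days):
    """An RPCH that transfers a patient is instead paid its costs for the
    days the patient spent there before the transfer."""
    return avg_daily_cost * days

# Hypothetical DRG paying $4,000 with a geometric mean stay of 5 days;
# the patient spends 2 days at the transferring facility.
print(pps_transfer_per_diem_payment(4000.0, 5.0, 2))  # 1600.0
print(rpch_transfer_payment(1100.0, 2))               # 2200.0
```

In either case, the hospital from which the patient is finally discharged also receives the full PPS payment for the DRG, which is why every transfer costs Medicare more than a single admission would.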
As a result, HCFA did not know whether RPCHs complied with this requirement. As our work illustrates, when lengths of stay exceeded the limit, Medicare costs tended to be higher than if patients had gone to a rural PPS hospital. The CAH program, established by BBA as the successor to RPCHs, provides that the Medicare peer review organization (PRO) covering a CAH’s area can waive the 96-hour limit on a case-by-case basis when asked to review a case. The statute does not define the conditions that would warrant waiving the limit. We believe that PRO review could serve as the mechanism for ensuring compliance with the length-of-stay limit. If intermediaries were instructed to limit payment on CAH cases to no more than 4 days unless the claim were accompanied by a PRO waiver, CAHs would have an incentive to stay within the limit unless circumstances warranted an exception. HCFA would need to define what those circumstances are for both CAHs and PROs. Medicare regulations state that the program pays for inpatient RPCH services only if a physician certifies that the individual may reasonably be expected to be discharged or transferred to a hospital within 72 hours (96 hours, effective October 1, 1997). The physician’s certification is maintained in the patient’s medical record. However, HCFA had not yet initiated a method to ensure compliance with this requirement. HCFA officials told us that the agency planned to have state facility survey personnel review compliance with the physician certification requirement when RPCHs were recertified for continued participation in the Medicare program. The officials said HCFA also plans to use this process for CAHs. The physician certification requirement is one way to help ensure that only the appropriate kinds of patients are admitted to CAHs and that the 96-hour limit is likely to be adhered to. HCFA needs to formally establish a mechanism for checking compliance with the physician certification provision. 
RPCHs were an important access point for inpatient and outpatient services for Medicare beneficiaries in rural areas. Medicare payments to RPCHs for inpatient stays were, however, somewhat higher than payments would have been to rural PPS hospitals to treat the same patients. A primary reason for this was that about 21 percent of the inpatient cases had lengths of stay that exceeded the 72-hour maximum in effect at the time, and 8 percent would have exceeded the 96-hour limit for CAHs. HCFA has not established a way to enforce the length-of-stay limit, and we believe one is needed to give CAHs an incentive to adhere to the limit. HCFA also needs to define for CAHs and PROs, which are authorized to grant waivers to the 96-hour limit, the conditions and circumstances under which it would be appropriate to waive the requirement. HCFA also has not established a way of checking compliance with the requirement that a physician certify that patients admitted to RPCHs, now CAHs, are expected to be discharged within the maximum allowed length-of-stay limit. Such a mechanism should reinforce the importance of the certification and its intent to ensure that only the appropriate kinds of patients are admitted. The Secretary of Health and Human Services (HHS) should direct the Administrator of HCFA to (1) establish a mechanism for ensuring that CAHs do not receive payment for inpatient cases that exceed the 96-hour length-of-stay maximum unless the responsible PRO waives that limit and (2) define the conditions and circumstances under which it would be appropriate for PROs to waive the 96-hour limit. HCFA should also establish a method to ascertain compliance with the requirement that physicians certify that patients are expected to be discharged within 96 hours of admission. We provided HCFA an opportunity to comment on a draft of this report, but the agency was unable to provide us written comments in the time required. 
We did, however, discuss a draft with agency officials involved with the RPCH program and incorporated their comments as appropriate. This report was prepared under the direction of Thomas Dowdal, Senior Assistant Director. Please contact him or me at (202) 512-7114 if you have any questions. Others who made major contributions to this report include Robert Sayers, Jerry Baugher, Robert DeRoy, and Joan Vogel. Copies of this report are also being sent to appropriate House and Senate committees, the Director of the Office of Management and Budget, the Secretary of HHS, the Inspector General of HHS, and the Administrator of the Health Care Financing Administration. Our objectives were to develop information on the cases treated and inpatient and outpatient services performed at RPCHs, the relative cost of providing inpatient health care services to Medicare beneficiaries at RPCHs and acute-care hospitals, and compliance with the physician certification and 72-hour inpatient stay requirements. We visited three RPCHs—two in North Carolina and one in South Dakota—and also contacted a fourth RPCH in South Dakota. From these RPCHs, we obtained information on the types of patients they treated, how they complied with the inpatient stay limitation and the physician certification requirements, and why stays at their RPCHs exceeded the 72-hour inpatient limitation. We also met with state rural health officials and state facility surveying personnel in North Carolina and South Dakota to obtain information on the RPCH program. We obtained automated cost and claim data for 15 RPCHs in Kansas, North Carolina, South Dakota, and West Virginia. Cost data were extracted from HCFA’s Health Care Provider Cost Report Information System (HCRIS), which includes selected data from hospital cost reports. 
Paid claims were provided by the four intermediaries—Kansas Blue Cross (Kansas), North Carolina Blue Cross (North Carolina), IASD Health Services Corporation (South Dakota), and Blue Cross of Virginia (West Virginia)—serving the RPCHs. We obtained inpatient and outpatient claims for each RPCH from the date certified through May 1996. Twelve of the 13 RPCHs submitted inpatient claims. All 13 RPCHs submitted outpatient claims. From the inpatient claims, we extracted data on the diagnoses and length of stay associated with Medicare patients admitted to RPCHs. In addition, we extracted the same data from HCFA’s Medicare Provider Analysis and Review (MEDPAR) file for Medicare patients admitted to RPCHs but whose claim was paid under the RPCH’s old hospital provider number. We also used MEDPAR to obtain data on Medicare patients transferred from an RPCH to an acute care hospital. For patients transferred to full-service hospitals, we obtained the name of the hospital they were transferred to and the diagnoses and length of stay. Using the cost report data, we estimated the costs for each RPCH Medicare inpatient stay. We then compared those costs with the amount Medicare would have paid an acute-care hospital under PPS for the same DRG at hospitals in the rural areas of the applicable states and the urban hospitals nearest to the RPCHs. We also computed the amount Medicare paid a PPS hospital and an RPCH when it transferred patients to an acute-care hospital. From the outpatient claims, we extracted data on the types of services provided to Medicare beneficiaries. For each RPCH cost year, we calculated the number of outpatient claims submitted and the number of Medicare beneficiaries treated by the RPCH. 
Because the 13 RPCHs in our analysis were certified at different times between September 8, 1993, and August 23, 1995, and had varying cost-reporting years, the cost report information we obtained covers different time periods for each facility, as identified in table I.1. We calculated the average inpatient operating costs per patient day for each RPCH’s cost-reporting period, excluding capital costs, by dividing operating costs (which include routine and ancillary costs) by the number of Medicare days. We estimated the cost of treating each RPCH patient by multiplying the facility’s average daily Medicare cost by the number of days each patient was an inpatient. We calculated the PPS rates for the 1,708 RPCH inpatients in our analysis for hospitals in rural Kansas, North Carolina, South Dakota, and West Virginia and appropriate urban areas. We identified each patient’s DRG from the paid claim file and estimated the amount Medicare would have paid for each of these RPCH discharges in a rural and urban PPS hospital, using PPS payment rates in effect when the patient was discharged. Our estimate of PPS payments does not include payments for capital costs or any additional amounts that hospitals with teaching programs or a disproportionate share of low-income patients receive from Medicare. A total of 163 inpatients were treated at an RPCH and then transferred to a PPS hospital. We estimated Medicare’s cost of treating those patients at the RPCH in the same way we did for all patients—that is, by multiplying the RPCH’s daily Medicare cost by the number of days the patient was at the RPCH before being transferred. When an RPCH transfers a patient to a PPS hospital, the receiving hospital is paid the full DRG rate and the RPCH is paid its costs. PPS hospitals are reimbursed for the care provided to a patient who transfers to another hospital according to a per diem rate. 
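The per-stay cost estimate described above is straightforward arithmetic; the sketch below uses hypothetical figures rather than actual HCRIS cost-report data.

```python
# Sketch of the cost-estimation methodology described above
# (hypothetical numbers, not actual cost-report data).

def avg_daily_medicare_cost(operating_costs, medicare_days):
    """Routine plus ancillary operating costs (capital excluded) divided by
    the RPCH's total Medicare inpatient days for the cost-reporting period."""
    return operating_costs / medicare_days

def estimated_stay_cost(daily_cost, length_of_stay_days):
    """Estimated Medicare cost of one inpatient stay."""
    return daily_cost * length_of_stay_days

# Hypothetical facility: $300,000 in operating costs over 300 Medicare days.
daily = avg_daily_medicare_cost(300_000.0, 300)
print(daily)                          # 1000.0 per day
print(estimated_stay_cost(daily, 3))  # 3000.0 for a 3-day stay
```

These per-stay estimates are what the analysis compares with the PPS rate for the patient's DRG at rural and urban hospitals.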
This rate is obtained by dividing the PPS payment by the geometric mean length of stay expected for the patient’s DRG (this number is published annually with the DRG relative weights). We calculated the per diem PPS rate for each of the 163 transfer cases and multiplied that amount by the number of days each patient stayed at the RPCH before being transferred. The result of this calculation was the estimated payment that PPS hospitals would have received had the patient been treated at a PPS hospital for the same number of days that the patient was at the RPCH prior to being transferred. For each patient transferred, we compared the RPCH cost to what a rural PPS hospital would have been paid if it had transferred the patient. The result showed whether the treatment at the RPCH was more or less costly than treatment would have been for a transfer case at a rural PPS hospital. The cost report information obtained for the 13 RPCHs covers the cost-reporting periods identified in table I.1. All 13 RPCHs reported Medicare outpatient costs and submitted outpatient claims. We obtained the Medicare outpatient operating costs from HCRIS data for each RPCH, for each cost-reporting period. From the paid claims file we determined, for each RPCH cost-reporting period, the number of outpatient claims submitted and the number of different Medicare beneficiaries treated. We also identified the types of outpatient services being provided to Medicare beneficiaries. We did not evaluate the RPCHs’ compliance with the annual average 72-hour length-of-stay requirement that became effective for cost-reporting periods starting October 1, 1995. The RPCH cost reports available for our review covered RPCH cost-reporting periods beginning prior to October 1, 1995, when a maximum inpatient hospital stay requirement of 72 hours existed. Moreover, HCFA officials told us that they had not reviewed RPCHs’ compliance with either of the two (maximum or average) 72-hour requirements. 
We did not verify RPCHs’ compliance with the requirement that physicians certify that a Medicare patient can reasonably be expected to be discharged within 72 hours (changed by BBA to 96 hours) because this certification is entered on patient records maintained by RPCHs, and it was not practical for us to review these records. Although HCFA has not reviewed RPCHs’ compliance with this requirement, HCFA officials told us the agency plans to require state facility survey personnel to determine physician compliance when they visit RPCHs as part of Medicare’s recertification process for continued participation in the program. The data in the tables in this appendix are for urban and rural areas for the 1,545 nontransferred inpatients who received all their care at RPCHs; the tables show, by RPCH and cost-reporting year, whether RPCH costs were higher or lower than PPS payments for hospitals in the applicable rural areas and in surrounding urban areas, such as Sioux Falls, South Dakota. Transferred from RPCHs in Kansas: Asbury-Salina Regional Medical Center, Salina, Kans.; Central Kansas Medical Center, Great Bend, Kans.; Duke University Medical Center, Durham, N.C.; Halstead Hospital, Halstead, Kans.; Hays Medical Center, Hays, Kans.; Phillips Episcopal Memorial Medical Center, Bartlesville, Okla.; St. Catherine Hospital, Garden City, Kans.; St. Francis Regional Medical Center, Wichita, Kans.; St. Joseph Medical Center, Wichita, Kans.; St. Luke’s Hospital, Kansas City, Mo.; Wesley Medical Center, Wichita, Kans.; Western Plains Hospital, Dodge City, Kans.; William Newton Memorial Hospital, Winfield, Kans. Transferred from RPCHs in North Carolina: Pitt County Memorial Hospital, Greenville, N.C. 
Roanoke Chowan Hospital, Ahoskie, N.C. Transferred from RPCHs in South Dakota: McKennan Hospital, Sioux Falls, S.D.; Queen of Peace Hospital, Mitchell, S.D.; St. Luke Midland Regional Medical Center, Aberdeen, S.D.; St. Mary’s Hospital, Pierre, S.D.; St. Mary’s Hospital, Rochester, Minn.; Sioux Valley Hospital, Sioux Falls, S.D.; University of Minnesota Hospital and Clinic, Minneapolis, Minn. Transferred from RPCHs in West Virginia: Aultman Hospital, Canton, Ohio; Davis Memorial Hospital, Elkins, W.V.; Fairmont General Hospital, Fairmont, W.V.; Grafton City Hospital, Grafton, W.V.; Monongalia General Hospital, Morgantown, W.V.; Summersville Memorial Hospital, Summersville, W.V.; United Hospital Center, Clarksburg, W.V.; West Virginia University Hospital, Morgantown, W.V.
Pursuant to a legislative requirement, GAO reviewed the Rural Primary Care Hospital (RPCH) Program, focusing on: (1) assessing compliance with the requirements that RPCHs have an average length of stay of 72 hours or less and that physicians certify that inpatients are expected to be discharged within 72 hours; (2) assessing whether these two requirements affected the type of patients treated by RPCHs; and (3) comparing Medicare's cost for inpatient services in RPCHs to what those costs would likely have been in hospitals paid under the prospective payment system. GAO also looked at how the experience under the RPCH program could be used in implementing the expanded Critical Access Hospital (CAH) Program. GAO noted that: (1) RPCHs provide additional and, likely, much more proximate access to health care for Medicare beneficiaries residing in the rural areas where the facilities operate; (2) these facilities treat, on an inpatient basis, beneficiaries with less complex illnesses and furnish important stabilization and transfer services for those with more complex conditions; (3) moreover, RPCHs serve as the source of outpatient care ranging from primary to emergency care; (4) the 13 RPCHs for which complete data were available had 1,708 Medicare inpatient cases since they were certified to participate in the program; (5) the RPCHs provided the full inpatient stay for 1,545 beneficiaries who had less complex needs and stabilized and transferred an additional 163 beneficiaries to full-service hospitals; (6) the RPCHs treated primarily patients (65 percent of the total) who had respiratory ailments such as pneumonia, circulatory system problems such as congestive heart failure, and digestive system illnesses such as inflammation of the digestive canal; (7) in addition, during the most recent cost-reporting period, these RPCHs provided more than 28,000 outpatient visits for more than 6,700 beneficiaries; (8) these outpatient visits ranged from those for primary care to 
emergency treatment for injuries; (9) Medicare payments for the 1,545 cases from September 1993 to May 1996 treated solely by an RPCH were slightly more than if these cases had been treated at full-service rural hospitals and somewhat less than if they had been treated at urban hospitals; (10) a primary reason why RPCH costs were higher than those for rural hospitals was that about 21 percent of the stays exceeded the 72-hour stay limitation in effect at the time; (11) without the extra inpatient days these cases involved, RPCH costs would likely have been lower than those for rural full-service hospitals; (12) the Health Care Financing Administration (HCFA) had not established a way to enforce the 72-hour maximum length-of-stay requirement for RPCHs, and it is important that the agency do so for the replacement CAH program's 96-hour maximum; (13) as is to be expected with limited-service hospitals, RPCHs in the four states GAO studied transferred a higher portion of patients to other hospitals than did full-service rural hospitals; and (14) total Medicare payments for the 163 transfer cases were about $148,000 higher than if a full-service rural hospital had transferred the patients to another acute care hospital because of differences in the way payments are determined in the two situations.
The Senior Community Service Employment Program (SCSEP), as authorized under the Older Americans Act (OAA) Amendments of 2000, promotes part-time opportunities in community service for unemployed low-income persons who are at least 55 years old and have poor employment prospects. The program is also designed to foster economic self-sufficiency by assisting older workers in transitioning to unsubsidized employment. Administered by the Department of Labor (Labor) for over 30 years, the program operates in every state, the District of Columbia, Puerto Rico, the Virgin Islands, American Samoa, Guam, and the Northern Mariana Islands. The program is administered through grants awarded to national organizations as well as state and territorial agencies. (See app. I for a listing of national grantees and funds and positions awarded in program year 2005.) In program year 2005, approximately $439 million was appropriated to support about 61,000 SCSEP positions, through which approximately 100,000 participants are served. (See app. II for a listing of funds and positions awarded by state in program year 2005.) SCSEP serves unemployed persons who are 55 years or older whose family incomes are no more than 125 percent of the federal poverty level. Participants are placed in part-time community service assignments in a local nonprofit organization or public-sector agency to gain on-the-job experience and prepare for unsubsidized employment. Program participants receive training and work experience in a wide variety of occupations, including nurse’s aides, teacher aides, librarians, clerical workers, and day care assistants. Program participants are paid the highest federal, state, or local applicable minimum wage, or the prevailing rate of pay for persons employed in similar occupations by the same employer. The OAA Amendments require that at least 75 percent of SCSEP funds be used to subsidize participants’ wages and fringe benefits and that no more than 13.5 percent of the funds be used for administrative expenses. 
The remaining funds may be used for other program costs such as assessments, training, job placement assistance, and supportive services. The OAA Amendments made a number of changes to SCSEP. The amendments contained provisions to establish unsubsidized employment as a program goal, while maintaining the community service aspect of the program; establish a performance accountability system that held grantees accountable for meeting specific performance measures, including placement and retention of participants in unsubsidized employment, community services provided, customer satisfaction, and number of persons served, particularly those with the greatest economic and social need, those with poor employment history or prospects, and those over age 60; improve coordination between SCSEP and the Workforce Investment Act (WIA); and strengthen administrative procedures by defining administrative and program costs and applying uniform cost principles. In addition, the amendments revised the distribution formula by specifying that the first $35 million in funding above the amount needed to maintain the program year 2000 level of activities be allocated 75 percent to state grantees and 25 percent to national grantees. Any additional funds above that $35 million are to be allocated evenly between state and national grantees. The OAA Amendments have had little effect on the distribution of funds between national and state grantees, with the national grantees continuing to receive approximately 78 percent of the funding and state grantees about 22 percent. Since the amendments took effect in 2000, the SCSEP appropriation has experienced only minor fluctuations, and correspondingly, the total number of positions has remained largely constant. However, the distribution of funding and positions among national grantees has changed substantially. 
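The revised distribution formula can be expressed compactly. In this sketch, the $423 million base is the approximate program year 2000 funding level cited in this report, and the appropriation figure in the example is hypothetical.

```python
# Sketch of the OAA Amendments' revised distribution formula for funds above
# the program year 2000 level (about $423 million): the first $35 million of
# any increase goes 75 percent to state grantees and 25 percent to national
# grantees; any increase beyond that is split evenly.

PY2000_BASE = 423_000_000  # approximate program year 2000 funding level

def split_increase(appropriation, base=PY2000_BASE):
    """Returns (state share, national share) of funding above the base."""
    increase = max(0, appropriation - base)
    first_tier = min(increase, 35_000_000)
    second_tier = increase - first_tier
    state = 0.75 * first_tier + 0.50 * second_tier
    national = 0.25 * first_tier + 0.50 * second_tier
    return state, national

# Hypothetical appropriation $50 million above the base:
print(split_increase(473_000_000))  # (33750000.0, 16250000.0)
```

At appropriations at or below the base, the formula allocates nothing, which is why the overall 78/22 national-to-state split has barely moved since 2000.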
An open competition for national SCSEP positions held in 2002 increased the total number of national grantees from 10 to 13 (eliminating 1 incumbent grantee and introducing 4 new grantees) and reshuffled funding and positions among existing grantees. In program year 2005, national grantees operated in all states (including the District of Columbia and Puerto Rico) except Alaska, Delaware, and Hawaii. Approximately two-thirds of both national and state grantee positions are located in metropolitan areas. However, the percentage of positions in metropolitan areas varied widely among national grantees. For example, three national grantees administered more than 90 percent of their SCSEP positions in metropolitan counties, while two have about 40 percent of their positions in metropolitan counties. The revision of the funding formula outlined in the OAA Amendments has had little impact on the distribution of funds between national and state grantees. The formula takes effect only when SCSEP funding for national and state grantees rises above program year 2000 levels of approximately $423 million. Because the SCSEP appropriation has remained relatively constant over the past 5 years, the distribution of funds between national and state grantees has also experienced little change. In each program year since 2000, approximately 78 percent of the SCSEP funding for grantees was allocated to national grantees and 22 percent was allocated to state grantees (see fig. 1). For program year 2005, SCSEP appropriations funded 61,047 positions— 160 fewer than were funded in program year 2000. Slight funding increases from program years 2002 to 2004 provided for as much as $4.6 million in additional annual funding for national and state grantees. Labor allotted approximately 75 percent of this amount to state grantees and 25 percent to national grantees in accordance with the revised distribution formula. 
However, these funding increases did not markedly alter the overall distribution between national and state grantees. Labor’s 2002 open competition for the national grants portion of SCSEP funding increased the number of national grantees administering SCSEP and substantially reshuffled positions and funding among existing grantees. Labor decided to conduct the competition in order to ensure that the most qualified organizations were awarded grants, to open the grantee community to new organizations, and to provide better services to SCSEP participants. The competition—the first of its kind in SCSEP’s history—yielded 68 applications. A three-member Labor review panel evaluated each application and scored it according to the applicant’s plan for program design and services, coordination and oversight, and management structure and fiscal integrity. Based on these scores, Labor ranked each applicant, deemed that 13 applicants scored in a competitive range, making them eligible to receive grant awards, and allotted positions by county to grantees on a winner-takes-all basis. Specifically, the highest-ranked applicant received all the positions it requested, and each subsequent applicant received all positions not previously claimed by a higher-ranked applicant. All 13 competitive applicants were eventually awarded positions. The competition produced 4 new national grantees, increasing the total number from 10 to 13. One incumbent grantee, the National Urban League, was not awarded a grant to continue administering SCSEP. The competition also resulted in a significant reshuffling of funds and positions among incumbent grantees. Of the nine incumbent national grantees that were awarded continuing grants, two gained positions and seven lost positions (see table 1). Labor determines the amount of funding to be allocated to grantees based on a “cost per authorized position” outlined in the OAA Amendments. 
As a result, following the 2002 competition, each of the 13 successful grantees received funding approximately equal to the number of positions it was awarded times $7,153—the predetermined cost per authorized position. Among incumbent grantees, two gained additional funding and seven lost funding. AARP Foundation gained more than $24 million in additional funds, while Experience Works, Inc. lost $20.5 million in funding. Altogether, the four new grantees received approximately $54 million in SCSEP funding (see table 2). On March 2, 2006, Labor announced an open competition for program year 2006 national grantee funding. This announcement is consistent with Labor’s current proposal for the reauthorization of SCSEP, which recommends eliminating performance sanctions in favor of holding a competition for grants every 3 years. Using criteria similar to those used in the 2002 competition, Labor plans to award no more than 20 grants to national grantees, including at least 1 grant to an Indian and Native American organization and at least 1 grant to an Asian Pacific Islander organization. Labor is specifically seeking organizations that are able to foster partnerships with one-stop career centers and community colleges and that promote private employment through high-growth job opportunities. In order to increase program effectiveness and achieve economies of scale, Labor has consolidated the geographic areas over which grantees will administer SCSEP for the upcoming program year. When requesting positions, potential grantees must apply for at least 10 percent of a state’s allocation, or $1.6 million, whichever is greater. Furthermore, applicants that apply for more than one county in a state must request contiguous counties, and except in the cases of very large counties, they must apply for all the positions in a county. For program year 2005, slightly more than two-thirds of both national and state grantee positions are located in metropolitan areas. 
National grantees administer SCSEP in every state except Alaska, Delaware, and Hawaii, while state grantees operate SCSEP in all 50 states, the District of Columbia, and Puerto Rico. Individual national grantees operate in as many as 39 states (Experience Works, Inc.) and as few as 2 states (Mature Services, Inc.). The share of positions in metropolitan areas varies widely among national grantees. Three grantees administer more than 90 percent of their SCSEP positions in metropolitan counties, while two grantees have fewer than half of their positions in metropolitan counties (see table 3). Labor has taken steps to establish an enhanced performance accountability system for SCSEP, but has yet to implement some features fully. While Labor has introduced the new performance measures that the OAA amendments required, program year 2005—which ends on June 30, 2006—is the first year for which grantees will be held accountable for their performance. Labor has also implemented an early version of a data collection system to capture performance information, but the final version is not yet available to grantees in its intended online format. In addition, Labor has recently undertaken a broad assessment of SCSEP on such issues as participant outcomes, program costs, and grantee challenges, but has not yet issued a report. Labor has implemented new performance measures, as required by the OAA Amendments, and will begin sanctioning grantees that demonstrate poor performance for the current program year—2005—which ends on June 30, 2006. After Labor issued final regulations for SCSEP in April 2004, it instituted practice measures for program year 2004, as grantees transitioned to the new data collection and reporting requirements. Labor used the resulting performance data to help set baseline goals for grantees to meet during program year 2005. 
For program year 2005, according to Labor, four SCSEP measures will contribute to a grantee’s overall performance assessment: Placement: the number of participants attaining unsubsidized employment, either full-time or part-time, for at least 30 days of the first 90 days after exiting the program, divided by the number of authorized SCSEP positions. Employment Retention: the rate of retention in unsubsidized employment 6 months after placement. Service Level: the number of a grantee’s participants divided by the number of the grantee’s authorized positions. Service to Most-in-Need: the percentage of participants who are at least 60 years old and who have at least one of several additional barriers to employment, such as language barriers, poor employment history, or a physical or mental disability. Labor officials told us they plan to assess grantees on their aggregate performance across these four SCSEP performance measures. A grantee satisfies its overall performance goal if it attains an average score across the four measures of at least 80 percent of the target goals. Thus, a grantee could meet its performance requirements by attaining less than 80 percent of some goals but more than 80 percent of the others. For example, Labor’s data show that one state achieved 47 percent of its placement goal but performed well enough on the other measures to receive an average score well above the 80 percent threshold for satisfactory performance. According to Labor, grantees varied in their ability to meet goals for individual measures during the transitional period of program year 2004. (See app. III for a listing of the program year 2004 results compared to the performance goals for each grantee.) However, Labor officials said that most grantees managed to meet the 80 percent threshold for their overall performance goal. (See appendix IV for results for each of the grantees.) 
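The aggregate scoring rule described above (an average attainment of at least 80 percent across the four measures) can be sketched as follows. The goal and actual values are hypothetical, chosen to mirror the 47 percent placement example; only the four measure names and the 80 percent threshold come from the report.

```python
# Sketch of the aggregate performance test: a grantee passes if its
# average attainment across the four SCSEP measures is at least 80
# percent of goal. Goal and actual values below are hypothetical.

def meets_overall_goal(goals, actuals, threshold=0.80):
    ratios = [actuals[m] / goals[m] for m in goals]
    return sum(ratios) / len(ratios) >= threshold

goals   = {"placement": 0.30, "retention": 0.70, "service_level": 1.40, "most_in_need": 0.45}
actuals = {"placement": 0.14, "retention": 0.75, "service_level": 1.50, "most_in_need": 0.50}

# Placement attainment is only about 47 percent of goal, but strong
# results on the other three measures lift the average above 80 percent.
print(meets_overall_goal(goals, actuals))  # True
```

This illustrates how a grantee can fall well short on one measure yet still satisfy its overall performance goal.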
They also stated that, based on Labor’s assessment of data from the first 2 quarters of the current year, most grantees appear to be on track for meeting their performance goals for program year 2005. Sanctions for poor performance are similar for state and national grantees and will begin after the first year of not meeting the 80 percent threshold for overall performance. If performance does not improve, sanctions will increase in severity after the second and third consecutive years. After the first year of poor performance, a grantee must submit a corrective action plan within 160 days of the end of the program year. In addition, Labor will provide the grantee with technical assistance to help correct the problem. A second consecutive year of failing to meet performance goals will generate a competition for 25 percent of the grantee’s funds for the following program year. If a grantee continues to perform poorly for a third year, another competition will result for the remaining amount of the grantee’s funding. Furthermore, in addition to meeting their own goals, national grantees must meet the performance goals of each state in which they administer the program. If they fail to meet the state’s goals, Labor will require a corrective action plan after the first year of poor performance and may take other appropriate actions, including transferring responsibility for the project to other grantees. National or state grantees that fall short of one performance target but otherwise meet their aggregate goals will not be subject to sanction; Labor will instead provide them with technical assistance related to that performance issue. In addition, Labor requires grantees to report on the customer satisfaction of participants, host agencies, and employers by surveying each group. While poor performance on this measure will result in technical assistance rather than sanctions, Labor officials told us that to date customer satisfaction has been very high. 
Grantees must also report the number of community service hours participants contribute, but Labor officials told us that they have struggled to create a measurable indicator for community service and do not plan to sanction performance in this area. SCSEP grantees must also collect data to support several common measures as part of a governmentwide initiative to provide comparable performance information across federal programs with similar goals and operations. For job training and employment programs serving adults, the three common measures include entered employment, retention, and average earnings. Thus, between the SCSEP measures and the common measures, grantees must collect and report on data for nine different performance measures. The SCSEP placement and retention measures overlap somewhat with the common measures for entered employment and retention, although the SCSEP measures, as defined by the OAA Amendments, are computed differently. (See table 4.) Specifically, the SCSEP placement measure is calculated relative to each grantee’s number of authorized positions, while the common measure for entered employment is based on the number of participants who exit the program. Likewise, the SCSEP retention measure evaluates employment 6 months after placement, while the common measure for retention assesses a participant’s employment in both the second and third quarters after exit. Grantees are not subject to sanction for performance on the common measures, which the Office of Management and Budget will use to evaluate the overall effectiveness of SCSEP. However, the administration’s legislative proposal for reauthorizing SCSEP supports using the common measures. Additional measures, such as community services provided, could be tracked as secondary outcomes. Labor has designed a data collection system to capture performance information, but has not yet implemented the Internet-based version. 
The agency is in the process of moving to an Internet-based system that incorporates the new performance data required under the OAA Amendments. In order to capture baseline performance data in program year 2004, Labor rolled out an early, non-Internet version of its data collection system in time to receive data from the first quarter of that program year. Although it collects the required performance data, this interim system is limited in its usefulness for helping to manage the program. For example, grantees are unable to access their quarterly progress reports directly and must wait for Labor to process and send the data to them. Likewise, grantees receive reports that notify them of errors in their data submissions, but the reports do not identify which records are problematic. Moreover, since the initial roll-out, Labor has incorporated several modifications to the system and required data reporting elements. Currently, grantees either use the early version of Labor’s new system or continue to use their own databases while they wait for the new Internet-based data collection system to undergo testing and be rolled out. If procurement and technical processes go as planned, Labor hopes to fully implement the Internet-based data collection system by mid-May 2006. Labor has provided grantees with guidance and technical assistance on implementing the new data collection system. In addition to issuing written guidance, Labor and its contractors have conducted demonstrations and offer ongoing direct assistance, including an Internet-based forum for grantee questions on implementing the new system. Labor recently undertook an assessment of SCSEP, which it has yet to complete. In 2004, Labor contracted with DAH Consulting, Inc., and Social Policy Research to conduct an assessment of SCSEP. 
According to Labor, in addition to assessing the ability of grantees to find useful community service assignments and increase placements in unsubsidized employment, the assessment was supposed to gather information on participant training, the level of coordination with the one-stop system, program costs, outcomes, and other challenges faced by grantees. However, this study was not intended to be a true impact evaluation, but rather a more general review of SCSEP program operations. As of March 2006, Labor officials had received a draft of the study but sent it back to DAH Consulting with requested changes. However, because Labor had not provided us with preliminary results from the review, as of the date of this testimony we are unable to describe what the assessment found, and cannot provide an evaluation of the methodology used to generate the report. Changes to SCSEP eligibility criteria and coordination difficulties with WIA and the one-stop system pose major challenges to SCSEP grantees in managing the program. Although the OAA Amendments did not contain provisions changing the eligibility criteria for SCSEP, Labor modified some eligibility criteria to target SCSEP’s limited funds to individuals it believes are most in need of SCSEP’s intensive services. For example, Labor modified the types of income it uses to determine an individual’s eligibility for the program to include Social Security Disability Insurance (SSDI) and unemployment compensation, so that only those with the lowest incomes are targeted. In addition, Labor changed its previous policy of allowing low-income older adults who work part-time to enroll in SCSEP, and revised the time period for which income is calculated. Most national and state grantees told us that these changes decreased the pool of eligible individuals, and were concerned that enrollments would decline as a result. 
Furthermore, the majority of the 13 national and 52 state grantees surveyed also identified coordinating with WIA providers, obtaining intensive and training services at one-stop centers, implementing Labor’s new data collection system, and meeting new performance measures as being major challenges to managing the SCSEP program. Labor estimated that SCSEP’s funding is sufficient to serve less than 1 percent of the eligible population and, as a result, changed the eligibility criteria for SCSEP participation to target the program to those older adults it believes are most in need of program services. Labor issued guidance in April 2004 and again in January 2005 to reflect and clarify policy changes to SCSEP eligibility criteria that were previously established in guidance issued in December 1995. Major eligibility policy changes include what is to be counted as income, employment status at time of application, and the time period to be used for the purposes of calculating income. (See table 5.) While the OAA Amendments do not define what constitutes income, Labor decided to use the U.S. Census Bureau’s Current Population Survey (CPS) as the standard for determining income eligibility for SCSEP. In the preamble to its April 2004 regulations, Labor set forth its intent to use the income categories collected in the CPS as the SCSEP definition of income for determining program eligibility. After receiving feedback from grantees, Labor decided to exclude certain forms of income. For example, Labor excluded disability benefits—except SSDI—as well as supplementary security income, workers’ compensation, public assistance, child support, and several other sources of income. Most national and state grantees we surveyed expressed concern with the revised income criteria. For example, one national grantee told us that including SSDI is especially onerous because individuals receiving SSDI are among the hardest to serve. 
A state grantee stated that SSDI should not be included in determining program eligibility because other disability benefits were not included in calculating income eligibility. Another state grantee noted that social security is the only source of income for many older adults and including it provides a misleading picture of an individual’s actual income. The administration’s proposal for the upcoming reauthorization of Title V of OAA contains provisions for standardizing the income threshold. Labor believes that reauthorization provides an opportunity for Congress to align SCSEP income eligibility criteria with those used by Labor and other federal programs that are means-tested. Labor noted that more uniformity with respect to the types of income used to determine program eligibility, such as Social Security benefits versus earned income, would increase public confidence that these programs were being administered in a consistent and equitable manner. Most national and state grantees surveyed were also concerned with Labor’s policy change requiring applicants to be unemployed at time of application. Labor officials stated that the Office of the Solicitor took a strict interpretation of the OAA Amendments and determined that applicants must be unemployed at the time of application to be eligible for SCSEP. Labor officials noted that this interpretation was consistent with the department’s philosophy that SCSEP should be targeted to those most in need of the program’s intensive services. Prior to the OAA Amendments, Labor permitted applicants who held part-time jobs and met other eligibility criteria to be eligible for SCSEP services. The OAA amendments retained the language contained in the statement of purpose from the authorizing legislation that the program was to provide services to unemployed low income adults 55 years and older. 
The amendments further defined eligible individuals as those individuals who are 55 years and older and have income not more than 125 percent of the poverty guidelines, but did not refer to employment status. Grantees told us that the requirement that applicants be unemployed prevented some low-income older workers from receiving SCSEP services. For example, a state grantee noted that older workers who may work only 4 hours per week have very low incomes but are not eligible for program services because they are not unemployed. Another state grantee noted that many older workers who are not eligible for social security benefits often work part-time, and thus would not be eligible under the employment test, but would otherwise still meet the income eligibility criteria. Many grantees were also concerned that Labor revised the period on which income is calculated. Prior to Labor’s regulations issued in 2004, grantees had the option of calculating income using either the includable income for the 12 months preceding application or annualizing the includable income for the 6 months preceding application, that is, doubling the 6-month income to calculate an annual income. Labor now requires grantees to annualize an applicant’s income using the 6 months prior to application. Labor officials told us that changing the period on which income is calculated was intended to simplify the process and to reflect the most current income information. However, a national grantee and two state grantees noted in their survey responses that annualizing 6 months of income could distort income for those who only had earnings during that 6-month period. For example, a state grantee noted that many older individuals in their state work during the planting and harvesting seasons but are unemployed for the remainder of the year. They noted that doubling the individual’s 6-month income made many of these seasonal workers ineligible for SCSEP. 
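The annualization rule described above, and the distortion it can create for seasonal workers, can be sketched as follows. The $12,000 eligibility threshold and the earnings figures are hypothetical; only the rule of doubling the prior 6 months of income comes from the report.

```python
# Sketch of the income annualization rule: income for the 6 months before
# application is doubled to estimate annual income. The eligibility
# threshold below is hypothetical (standing in for 125 percent of a
# poverty guideline); earnings figures are also hypothetical.

INCOME_THRESHOLD = 12_000  # hypothetical annual eligibility ceiling

def annualized_income(six_month_income):
    return six_month_income * 2

def eligible(six_month_income):
    return annualized_income(six_month_income) <= INCOME_THRESHOLD

# A seasonal worker who earned $8,000 during the 6 months before applying
# (and nothing the rest of the year) is treated as having $16,000 in
# annual income and becomes ineligible.
print(eligible(8_000))  # False

# Applying just after the season, with no earnings in the prior 6 months,
# the same worker appears to have $0 in annual income and qualifies.
print(eligible(0))      # True
```

The same worker thus passes or fails the income test depending on when the application falls relative to the work season, which is the mixed outcome grantees described.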
Conversely, doubling 6-month earnings to calculate annual income can have the unintended consequence of including some individuals who would not otherwise be eligible for the program if a 12-month period was applied. National and state grantees surveyed also identified other issues that presented major challenges to managing the SCSEP program. The majority of both national and state grantees identified several issues in the survey as being great or very great challenges, in particular coordinating SCSEP activities with WIA services, obtaining intensive services and training at one-stop centers, implementing Labor’s new data collection system, and meeting performance measures (see fig. 2). Although the OAA amendments sought to strengthen coordination between SCSEP and WIA, national and state grantees surveyed identified the coordination of SCSEP activities with WIA services and obtaining intensive services and training at one-stops as major challenges. For example, several national and state grantees responded that many WIA providers are hesitant to provide intensive services or training to SCSEP participants because WIA providers are concerned that enrolling older adults would negatively affect their performance measures. Older adults who receive intensive services or training from WIA providers are included in the computation of WIA performance measures. Another state grantee stated that while coordination with one-stops for core services is very good, access to training is very difficult. We heard a similar theme among states we visited. For example, one state grantee we visited said that WIA is so performance-driven that few SCSEP participants are able to access intensive and training services under WIA. The reported lack of coordination between SCSEP and WIA is especially relevant in light of the administration’s proposal to increase the age of SCSEP eligibility from 55 to 65, with limited exceptions for those between the ages of 55 and 64. 
Labor believes that WIA, not SCSEP, should be the primary program for older adults age 55 to 64. However, we have previously reported that WIA has built-in disincentives that discourage providing in-depth services, such as training, to older adults. We noted that Bureau of Labor Statistics and Census Bureau data suggest that older workers are 50 percent more likely to work part-time and less likely to become re-employed after being laid off than younger workers. These characteristics may negatively affect outcomes on certain WIA performance measures and, as a result, create a barrier to enrolling older workers in WIA intensive services and training. While most of the 13 national and 52 state grantees surveyed also reported challenges with Labor’s new data collection system, they noted that the agency provided helpful assistance with system implementation. Several national and state grantees stated that implementation of the data system was both time- and labor-intensive. In particular, one state grantee told us that Labor rolled out the data collection system prematurely, resulting in a loss of productivity at the grantee and subgrantee level. Despite these concerns, most grantees indicated that they received training or technical assistance for the system from Labor or its contractors. Moreover, while several national and state grantees provided positive comments about Labor’s assistance with respect to staff responsiveness, others were less satisfied and indicated the need for more assistance. All of the national grantees and most of the state grantees that cited meeting performance measures as a great or very great challenge in the survey indicated that the program eligibility changes had the greatest effect on their ability to meet the performance measure dealing with SCSEP service level. 
A number of state grantees mentioned that the greater difficulty in recruiting SCSEP participants translated into difficulty meeting the service level performance measure. One of the state grantees we visited said that the service level measure would present the greatest challenge because the income guidelines were too restrictive. According to Labor data, 7 of the 13 national grantees and 21 of the 52 state grantees did not meet their service level goals for program year 2004. Labor officials noted that some of the grantees concerned with low enrollments may not be performing sufficient outreach or marketing. The aging of the baby boom generation presents serious challenges for the nation’s workforce investment system. The expected increase in the number of low-income older adults means that, more and more, older Americans will have to continue working in order to have sufficient income. Older adults often have difficulty re-entering the labor force and may rely on federal employment and training programs to help them find employment; SCSEP is the only federal employment and training program targeted exclusively to low-income older adults. While Labor has made progress implementing the OAA Amendments—particularly in terms of increasing the program’s focus on unsubsidized employment—challenges remain. More specifically, while Labor has taken steps to establish an enhanced performance accountability system, as of March 2006 the system has still not been fully implemented. The delay in implementing this system means that program year 2005 is the first year that grantees will be held accountable for poor performance. In this respect, given the upcoming reauthorization of the OAA, only limited data will be available to assess SCSEP performance. 
In addition, while Labor’s changes to the eligibility criteria seem to have resulted in SCSEP funds being more targeted to those it believes are most in need of program services, one aspect of how this targeting was operationalized may have produced mixed outcomes. In particular, the requirement for grantees to double an applicant’s income from the most recent 6-month period could have the unintended result of excluding some individuals with very low incomes from the program while including others with much higher incomes, depending on when the work was performed. Those who are excluded from participation in SCSEP may turn to other employment and training programs such as WIA. However, given the problems older adults often experience in obtaining in-depth services such as training, it is unclear whether the existing workforce system is able to provide the type and level of services this population may need. Thus, while the OAA amendments were designed to enhance employment and training opportunities for older adults, we believe that Labor has not done enough to address unresolved issues concerning coordination between SCSEP and WIA and the ability of older adults to obtain intensive and training services at one-stop centers. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information regarding this testimony, please contact me at (202) 512-7215. Jeremy Cox, Wayne Sylvia, Rebecca Woiwode, Drew Lindsey, and Stuart Kaufman were key contributors to this testimony. The following baseline performance data for SCSEP grantees are from benchmark program year 2004 (July 1, 2004, to June 30, 2005). According to the Department of Labor, four SCSEP measures will contribute to a grantee’s overall performance in program year 2005, the first year for which grantees will be held accountable for their performance. 
The following measures are used: Placement: the number of participants attaining unsubsidized employment, either full- or part-time, for at least 30 days of the first 90 days after exiting the program, divided by the number of authorized SCSEP positions. Employment Retention: the rate of retention in unsubsidized employment 6 months after placement. Service Level: the number of a grantee’s participants divided by the number of the grantee’s authorized positions. Service to Most-in-Need: the percentage of participants who are at least 60 years old and who have at least one of several additional barriers to employment, such as language barriers, poor employment history, or a physical or mental disability. These figures were provided by the Department of Labor and are included in this testimony for contextual purposes only. GAO has not verified the accuracy or reliability of these data. Older Workers: Labor Can Help Employers and Employees Plan Better for the Future. GAO-06-80. Washington, D.C.: December 5, 2005. Workforce Investment Act: Labor and States Have Taken Actions to Improve Data Quality, but Additional Steps Are Needed. GAO-06-82. Washington, D.C.: November 14, 2005. Redefining Retirement: Options for Older Americans. GAO-05-620T. Washington, D.C.: April 27, 2005. Older Workers: Policies of Other Nations to Increase Labor Force Participation. GAO-03-307. Washington, D.C.: February 13, 2003. Older Workers: Employment Assistance Focuses on Subsidized Jobs and Job Search, but Revised Performance Measures Could Improve Access to Other Services. GAO-03-350. Washington, D.C.: January 24, 2003. Older Workers: Demographic Trends Pose Challenges for Employers and Workers. GAO-02-85. Washington, D.C.: November 16, 2001.
The aging of the baby boom generation and increased life expectancy pose serious challenges for our nation. Older adults often must re-enter the workforce in order to remain self-sufficient. The Senior Community Service Employment Program (SCSEP) is the only federal program that is specifically designed to assist low-income older adults by providing part-time community service jobs and training to prepare for employment. Since passage of the 2000 Older Americans Act Amendments (OAA), SCSEP has also increasingly focused on promoting economic self-sufficiency through placement in unsubsidized employment. In 2005, Congress appropriated about $439 million to serve about 100,000 older workers. Administered by the Department of Labor (Labor), SCSEP is implemented through 69 grantees, including 13 national organizations and 56 state and territorial agencies. The Chairman of the Senate Special Committee on Aging asked GAO to (1) determine what effect the OAA Amendments have had on the distribution of SCSEP funds to national and state grantees, (2) describe the progress Labor has made in implementing the enhanced performance accountability system, and (3) identify the challenges faced by national and state grantees in managing the SCSEP program. The 2000 OAA Amendments have had little impact on the distribution of funds between national and state grantees, with national grantees continuing to receive approximately 78 percent of the funding and states about 22 percent. However, the distribution of funding among national grantees has changed substantially as a result of Labor's 2002 open competition for the national grants portion of SCSEP funding. Labor has taken steps to establish an enhanced performance accountability system for SCSEP, but has yet to implement some features. 
For example, Labor introduced the new performance measures required by the OAA Amendments, but program year 2005--which ends on June 30, 2006--is the first year that grantees will be held accountable for meeting their goals. Labor has implemented an early version of a data collection system to track grantee performance, but the final Internet-based version is not yet available. Changes to the SCSEP eligibility criteria and difficulties coordinating with the Workforce Investment Act (WIA) one-stop system have posed challenges to SCSEP grantees. Labor modified some eligibility criteria to target limited program funds to individuals it believes are most in need of SCSEP services. However, grantees expressed concern that these changes had made it more difficult for them to meet their enrollment goals. Finally, GAO found that despite provisions in the OAA Amendments to strengthen connections between SCSEP and WIA, problems persist in coordinating with WIA providers and obtaining intensive and training services for older workers at one-stop centers.
Under its Civil Works program, the Department of Defense’s (DOD) U.S. Army Corps of Engineers plans, constructs, operates, and maintains a wide range of water resources projects. In addition to its headquarters in Washington, D.C., the Corps has eight regional divisions and 38 districts that carry out its domestic civil works responsibilities (see fig. 1). Corps headquarters primarily develops policies and plans the future direction of the organization; divisions coordinate the districts’ projects; and the districts plan and implement the projects, which are approved by the divisions and headquarters. Water resource projects are generally very large undertakings that often take more than a single fiscal year to complete. Moreover, the timing of these projects is often dictated by weather conditions or environmental concerns. For example, many dredging projects take place during the winter months because environmental concerns limit dredging operations during the spring and summer (March through September) to protect various species, such as threatened and endangered turtles. Congress appropriates about $5 billion annually to the Corps to carry out its Civil Works program. Federal agencies generally receive annual appropriations (also called fiscal year or 1-year appropriations) that are made for a specified fiscal year. These appropriations are available for obligation—legal commitment by the government for the payment of goods and services ordered or received—only for the bona fide needs of the fiscal year for which they were appropriated. If an agency fails to obligate its annual funds by the end of the fiscal year for which they were appropriated, the funds cease to be available to the agency for new obligations. They are referred to as “expired” and, after 5 years, are returned to the U.S. Treasury. 
In contrast, the Corps receives “no-year” appropriations through the Energy and Water Development Appropriations Act—that is, there are no time limits on when the funds may be obligated or expended, and the funds remain available for their original purposes until expended. The majority of the Corps’ Civil Works appropriations are generally directed to two types of activities: (1) operations and maintenance and (2) construction. Operations and maintenance activities include the preservation, operation, and maintenance of existing rivers and harbors. Construction activities include construction and major rehabilitation projects related to navigation, flood control, water supply, hydroelectric power, and environmental restoration. The Corps’ fiscal years 2007 and 2008 quarterly reports to Congress on continuing contracts awarded with the new clause contained inaccurate information. According to these reports, the Corps awarded 21 new continuing contracts during this time: 9 for construction and 12 for operations and maintenance, ranging in value from $2.1 million to $341.5 million, for a total of about $811 million. However, we found that some continuing contracts were double-counted, while others were omitted from the reports. For example, two contracts were first reported to Congress as new continuing contracts at the end of fiscal year 2007. The Corps then reported the same two contracts in the first quarter of fiscal year 2008, marking them as “not reported” in the prior fiscal year. In addition, we identified two continuing contracts totaling approximately $48 million that should have been included as new awards in the Corps’ quarterly reports but were omitted. Corps officials confirmed that these were indeed new continuing contracts that should have been included in the reports. Both types of errors impacted the total number and value of the continuing contracts with the new clause that were reported to Congress as having been awarded during this 2-year period. 
We also identified other types of errors that did not affect the overall totals of new contracts or their value but, nevertheless, raise questions about the accuracy of the information that the Corps is providing to Congress. For example, when we asked Corps officials in one district to verify information about the continuing contracts they had awarded in fiscal years 2007 and 2008, they provided us with documentation that showed that one contract that had been incorrectly included in the Corps’ quarterly report to Congress as a continuing contract was actually a fully funded contract. In addition, four new continuing contracts were not initially reported as new in the quarterly reports covering their award periods; instead, three were reported in a later quarterly report and one was reported earlier. Similarly, we found that two fully funded contracts were incorrectly included in the 2007 quarterly reports as existing continuing contracts. The Corps’ failure to accurately report to Congress the number of continuing contracts it awards is a problem that we previously identified in 2006, and at that time, we recommended that the Corps develop an appropriate tracking system for these contracts. Although the Corps concurred at the time, Corps officials told us that the agency had not developed a tracking system as we had recommended because it believed its system of asking divisions to provide information on a quarterly basis was sufficient for tracking the use of continuing contracts. These officials also told us that the agency had issued a 2007 guidance document that provided instructions to the districts for making submissions for the quarterly reports to Congress. 
In light of the inaccuracies we identified in the quarterly reports to Congress, we do not believe that the Corps’ quarterly data calls constitute the systematic tracking system for continuing contracts that we recommended in 2006; therefore, we believe that the agency has not yet implemented our 2006 recommendation. According to our Standards for Internal Control in the Federal Government, managers are to “complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management’s attention.” We believe that the Corps’ inaction on our recommendation has led to a lack of internal controls that has contributed to persistent errors on the part of the agency in reporting to Congress on its use of continuing contracts. As a result of the limits that Congress has placed on the Corps’ use of continuing contracts in recent years, the Corps has issued guidance and made several modifications to its policies that govern the Civil Works program. While these changes, taken together, have resulted in a decrease in the number of continuing contracts that the Corps has awarded, they have not significantly affected the agency’s ability to execute its Civil Works program. Specifically, the committee report accompanying the Corps’ fiscal year 2005 appropriations expressed concern about the Corps’ use of continuing contracts and noted that the purpose of continuing contracts was to enable the Corps, in awarding contracts for the components of large construction projects, to take advantage of economies of scale and efficiently manage these large components over several years. In enacting the Energy and Water Development Appropriations Act of 2006, Congress provided specific direction to the Corps regarding its use of continuing contracts.
The law states, among other things, that with certain exceptions, none of the funds made available in the act may be used to award any continuing contract, or make modifications to any existing continuing contract, that commits an amount for a project in excess of the amount provided for the project. To help ensure that it met these new congressional requirements, the Corps issued guidance in fiscal year 2006 that, among other things, directed that districts use fully funded contracts as their primary contracting option and that continuing contracts be used only as the contracting option of last resort; summarized new information that the districts are required to provide in their requests to use continuing contracts, including an explanation of why using a continuing contract is in the best interest of the government; and directed districts to take measures to ensure that contractor costs do not exceed the amount provided for projects. Also, in response to these new congressional requirements, that same year, the Corps developed a new clause for continuing contracts that specifically required contractors to stop work once they had expended the funding set aside for the fiscal year. In addition, the Corps established certain criteria for the use of continuing contracts for operations and maintenance projects. The Assistant Secretary of the Army for Civil Works preapproved certain requests for operations and maintenance continuing contracts if the contracts met five conditions. In response, the Corps issued guidance to the divisions reiterating these conditions. These conditions included that (1) the contract was financed from the Corps’ operations and maintenance account and (2) the work could not be broken down into smaller increments that could be fully funded within the current fiscal year. The following fiscal year, the Corps also established certain criteria for continuing contracts for construction projects. 
Specifically, using continuing contracts for construction activities was to be considered only if the contract was for more than $10 million and the work could not be completed in a single fiscal year. The Corps also required that requests for using continuing contracts, other than continuing contracts that had been preapproved, be approved at the Assistant Secretary level. While the Corps’ quarterly reports to Congress cannot be fully relied on for accurate information on the number and value of continuing contracts awarded with the new clause, they do provide a reasonable sense of the overall direction of the use of such contracts. In 2006, we reported that the Corps, on average, awarded about 500 continuing contracts per year for fiscal years 2003 through 2005. The 2007 and 2008 quarterly reports to Congress indicate that the number of continuing contracts with the new clause has declined considerably and may average only about 10 per year for fiscal years 2007 and 2008. The decreased use of continuing contracts, and the use of the new clause, does not appear to have significantly affected the Corps’ Civil Works program. While the Corps has not established metrics to evaluate the impacts of this change, district officials we spoke with told us that they believe that the new continuing contracts clause has had little, if any, impact on their ability to accomplish the Civil Works mission of the agency. For example, several Corps district officials we interviewed said that while there were some temporary difficulties in executing their projects when the new clause was first implemented, their ability to conduct their work has not been adversely affected. Specifically, these officials told us that the combined effect of the requirement to fully fund contracts and the lack of sufficient funds in 2007, when the new clause was first implemented, led them to award fewer contracts at that time, and some project starts were delayed until the following fiscal year.
These officials did not provide any examples, however, of work on a project being stopped because funds were not available. Since that time, they have adjusted to the changes and have resumed their normal level of contract activity. Corps officials also told us that, in general, the recent changes, including the new clause, have had some positive effects on contract management, including the following: Contracts that are fully funded, as well as continuing contracts that use the new clause, provide officials more certainty in managing their funds. For example, the Corps no longer has to search for funds each year to meet the obligations created when contractors would work after the amount appropriated for a fiscal year was exhausted. Contract management has become easier for Corps officials, whether they fully fund contracts or use continuing contracts with the new clause, because fewer contract modifications are likely, and the contractor is restricted to the work specified in the contract. Notwithstanding these positive effects, some district officials also told us that having the flexibility to use continuing contracts as they were previously used, as opposed to fully funding contracts, would be useful for some large, longer-term projects, such as lock and dam projects, which require millions of dollars and multiple fiscal years to complete. According to these officials, if such projects are fully funded, large amounts of unexpended appropriations would be carried over for several fiscal years. For example, a 5-year, $200 million contract that required only $75 million in its first year would require carrying over the remaining $125 million into subsequent fiscal years until the funds were expended. Since the $125 million would already have been obligated to the contract at award, it would not be available to be used on other contracts.
If such projects were funded using continuing contracts as they were previously used, the Corps would allocate the entire contract amount at the time of award, but would obligate only the amount of funds that would be needed to cover the first year of the contract. The remaining funds not needed during the first year would be available to be used on other contracts. As a result, these officials told us that the restrictions placed on the use of continuing contracts in recent years may have made execution of some projects somewhat less efficient and more costly, although they could not provide us any specific examples of this having occurred. Corps headquarters officials generally disagreed with this position. According to these officials, over time, the Corps could complete the same number of projects even if they were fully funded, as opposed to using continuing contracts as they were previously used. Corps headquarters officials did tell us, however, that there is some value in having the ability to use continuing contracts as they were previously used for a few projects. Specifically, as previously used, continuing contracts obligated the Corps for the full amount of the contract at the date of the award. According to these officials, in practical terms, this means that the contractor does not have to wait for the Corps to provide the money in order to make large investments, such as ordering prefabricated materials and buying raw materials like steel. This flexibility on timing realized under the previous use of continuing contracts therefore provided contractors the ability to reap the benefits of economies of scale when purchasing materials in bulk. In implementing the new continuing contracts clause, the Corps did not comply with a legal requirement and, as a result, some districts are reluctant to use it when awarding contracts.
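The carryover arithmetic that district officials described can be sketched as follows. This is an illustrative comparison only, using the hypothetical 5-year, $200 million contract from the example above (figures in millions of dollars); the function names are ours, not a Corps accounting model.

```python
# Illustrative sketch of the two funding approaches described in the report.
# All figures are in millions of dollars and come from the report's
# hypothetical example; this is not actual Corps contract data.

def fully_funded(total, first_year_need):
    """Fully funded contract: the entire amount is obligated at award,
    so the balance not needed in year one is carried over and is
    unavailable for other contracts."""
    obligated_at_award = total
    carryover = total - first_year_need
    available_for_other_contracts = 0
    return obligated_at_award, carryover, available_for_other_contracts

def continuing_as_previously_used(total, first_year_need):
    """Continuing contract (as previously used): the full amount is
    allocated to the project, but only the first year's need is obligated
    from appropriated funds; the remainder stays available for other
    contracts."""
    obligated_at_award = first_year_need
    carryover = 0
    available_for_other_contracts = total - first_year_need
    return obligated_at_award, carryover, available_for_other_contracts

print(fully_funded(200, 75))                   # (200, 125, 0)
print(continuing_as_previously_used(200, 75))  # (75, 0, 125)
```

The difference in the third value, $125 million available for other contracts in the first year, is the flexibility the district officials said they lost under full funding.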
Specifically, the Corps has been using the new continuing contracts clause prior to its publication in the Federal Register for public comment, in violation of section 22 of the Office of Federal Procurement Policy Act (OFPP Act), 41 U.S.C. § 418b. This section of the act generally provides that no procurement regulation relating to the expenditure of appropriated funds that has a significant effect beyond the internal operating procedures of the agency or a significant cost or administrative impact on contractors or offerors may take effect until 60 days after the procurement regulation is published for public comment in the Federal Register. This requirement for advance comment may be waived if urgent and compelling circumstances make compliance impracticable; in such cases, a procurement regulation shall be effective on a temporary basis if a notice of the regulation is published in the Federal Register stating that it is temporary and providing for a public comment period of 30 days. After considering the comments received, the agency may issue the final procurement regulation. Courts have held that the failure to comply with section 22 renders the proposed procurement regulation without effect. In spring 2006, the Corps waived the requirement to obtain advance comments on the new clause based on urgent and compelling circumstances and sent a request for publication of the clause to the Department of the Army. The Corps is required to obtain approval from the Department of the Army and DOD prior to publication of a change that has a significant effect beyond the internal operating procedures of the agency, such as the new continuing contracts clause. Over the intervening months and years, the Corps has submitted multiple iterations of the request for publication to the Army.
These requests have moved among the Army, DOD, and the Office of Management and Budget, but the Corps has not yet received confirmation of approval from DOD, and the clause has never been published in the Federal Register. The Corps’ use of the new clause for more than 3 years prior to its having been published in the Federal Register for public comment does not meet the requirements of section 22 of the OFPP Act. The Corps’ argument that its use of the new clause complies with the statute because it has been pursuing publication through the Army and DOD as required is, in our view, unpersuasive. The relevant provision states that new procurement regulations may only take effect if a Federal Register notice “is published,” not while publication is being pursued. The Corps’ interpretation also ignores the requirement for a minimum public comment period of 30 days after the notice is published—to date, no public comment period whatsoever has been provided. Corps officials from the districts and divisions with whom we spoke expressed concern about the Corps’ use of the new clause without its having been published in the Federal Register. According to these officials, because this legal requirement has not been met, they are concerned that using the new clause could subject the Corps to legal challenges such as bid protests. Such potential legal challenges could prolong projects and increase their costs. We identified three solicitations for continuing contracts with the new clause issued from fiscal years 2006 through 2008 that did result in bid protests. These protests alleged, among other things, that the Corps’ use of the new continuing contracts clause prior to providing an opportunity for public notice and comment violated the OFPP Act. These protests were withdrawn when the Corps reissued the solicitations without the new clause, using instead such options as fully funding the contracts and restructuring the work required by the contracts. (See app. 
II for details about the three bid protests.) Some district officials where the solicitations that were protested originated said that they are concerned that such legal challenges could resurface in the future—jeopardizing other contracts that use the new clause and delaying the award of these contracts. Moreover, officials in one district that has not used the new continuing contracts clause since it became available told us that the fact that the new clause has never been published for comment constituted their major reason for not using it. Although Congress and GAO have raised a number of concerns in recent years about the Corps’ use of continuing contracts, of particular note has been the agency’s lack of accurate information on the number and value of contracts that it has awarded. In 2006, we specifically recommended that the Corps establish a system to track its use of continuing contracts, and while the agency agreed with this recommendation, it has failed to implement it. As a result, the process and guidance it relies on to provide quarterly information to Congress are ineffective and continue to generate information that is neither complete nor accurate. Moreover, the Corps developed and implemented its new continuing contracts clause over 3 years ago, but its use of the clause does not comply with the publication requirements of the OFPP Act. The Corps’ position that its use of the new continuing contracts clause while “pursuing publication” of the clause in the Federal Register satisfies the requirements of the act is unpersuasive. While we understand that the Corps has been seeking approval to publish the clause since 2006, and that it is unable to publish the clause without approval from the Army and DOD, the statute’s publication requirement and its waiver provision clearly permit temporary use of such a clause only if it is actually published in the Federal Register for public comment. 
The Corps’ use of the clause prior to publication does not comply with the statute’s requirements and may leave the Corps susceptible to further legal challenges. To ensure that the Corps provides accurate and reliable reports to Congress on its use of continuing contracts and complies with federal procurement law, we recommend that the Secretary of Defense direct the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers to take the following three actions: Establish adequate internal controls to ensure accurate and complete information is collected and reported to Congress on the use of continuing contracts. Suspend the Corps’ use of the new continuing contracts clause until it has been published in the Federal Register, in accordance with 41 U.S.C. § 418b. Provide regular updates to Congress on the progress of these actions. We provided a draft of this report to the Department of Defense for official review and comment. The department concurred with two of our recommendations and did not concur with one. Specifically, the department concurred with our recommendations that the Corps establish adequate internal controls to ensure accurate and complete information is collected and reported to Congress on the use of continuing contracts; and provide regular updates to Congress. The department did not agree, however, with our recommendation that the Corps suspend use of the new continuing contracts clause until it has been published in the Federal Register in accordance with 41 U.S.C. § 418b. The department did not disagree with our conclusion that its use of the new clause prior to publication violates the law, and acknowledged that the unforeseen delay in publishing the clause is undesirable. The department also stated that it intends to publish the new clause in the Federal Register as expeditiously as possible and anticipates approval of the clause for publication within 60 days.
While we agree with the department’s efforts to expedite the publication of the new clause in the Federal Register, we continue to believe that suspending the use of the clause in the interim would be the appropriate course of action. This is because until the clause is published in the Federal Register for a minimum public comment period of 30 days, the department’s use of the clause will violate section 22 of the OFPP Act. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. A joint explanatory statement accompanying the fiscal year 2008 Consolidated Appropriations Act directed us to review the continuing contracts that the U.S. Army Corps of Engineers (Corps) has awarded using a new clause. More specifically, from 1922 to 2005, the Corps had the authority to award multiyear contracts (called continuing contracts) without having received appropriations to cover the full contract amount. These continuing contracts allowed contractors to continue working on a project after funds provided for that project had been expended. In 2006, as part of its changes associated with continuing contracts, the Corps created two new clauses—a “special” clause and an “incrementally funded” clause that require contractors to stop work on a project once they have expended the funding set aside for the fiscal year. 
According to Corps counsel, however, the agency considers only contracts with the special clause to be continuing contracts because the incrementally funded clause does not involve a future funding obligation. For the purpose of this review, we have referred to the special clause as the “new clause.” To determine the accuracy of the information the Corps reported to Congress in fiscal years 2007 and 2008, we compared information from the Corps’ quarterly reports on the number, type, and dollar value of continuing contracts that used the new clause with information obtained from a Corps database and results of interviews with Corps officials in selected divisions and districts. We identified the continuing contracts that the Corps listed as new awards on the basis of the information the Corps presented in its summary letters to Congress, as well as the information contained in the quarterly reports themselves. We then identified continuing contracts with the new clause that were awarded during the 2-year time frame but were missing from the Corps’ quarterly reports by querying the Corps’ Primavera database. We did not assess the reliability of the Primavera database, but we verified information from that database independently using both testimonial and documentary evidence provided by the Corps. In addition, we interviewed Corps officials in selected divisions and districts to corroborate the information on continuing contracts that we obtained from the quarterly reports and Primavera database. We selected a nonprobability sample of two of the eight divisions and 6 of the 38 districts that carry out the Corps’ domestic civil works responsibilities. More specifically, we selected the division that had used continuing contracts with the new clause the most and the division that had used them the least.
In addition, of the six districts, two had used continuing contracts with the new clause the most, two had used them the least, and the remaining two had been involved with bid protests associated with the new clause. We also ensured that those districts and divisions varied geographically and in program size. Specifically, we selected the Mississippi Valley and South Pacific Divisions, as well as the Los Angeles (South Pacific Division), Nashville (Great Lakes and Ohio River Division), Philadelphia and New York (North Atlantic Division), Vicksburg (Mississippi Valley Division), and Walla Walla (Northwestern Division) districts. We obtained pertinent supporting documentation from the divisions and districts to support the testimonial information obtained during the interviews. To obtain information about the extent to which the Corps’ use of continuing contracts with the new clause may have affected its execution of the Civil Works program and the extent of the Corps’ use of continuing contracts with the new clause, we interviewed Corps division and district officials at the locations identified above, as well as at Corps headquarters. In addition, we interviewed the Corps manager at headquarters responsible for the quarterly reports to obtain basic information for assessing the reliability of those data. Although there were inaccuracies, we found that the data were sufficiently reliable for the purposes of our report. During the interviews, we discussed, among other things, Corps guidance on continuing contracts, the process used to obtain approval to use continuing contracts, any impacts and challenges related to the Corps’ use of continuing contracts, and monitoring the use of continuing contracts. To assess the Corps’ process for implementing the new continuing contracts clause, we reviewed relevant federal procurement laws related to the Corps’ issuance and use of the new continuing contract clause. 
In addition, we interviewed selected district and division officials to understand the process that the Corps used to develop and implement the new continuing contracts clause and to obtain their views on the issue. We also contacted the Corps’ Office of the Chief Counsel to obtain the Corps’ legal position on the extent to which the Corps has met the requirements of federal procurement law, and reviewed its response and supporting documentation. Finally, we examined three bid protests, and the Corps’ responses to these protests, concerning solicitations issued from fiscal years 2006 through 2008 that alleged, among other things, that the new clause was not published in the Federal Register as required by law. We conducted this performance audit from September 2008 to June 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. From fiscal years 2006 to 2008, the U.S. Army Corps of Engineers (Corps) received bid protests for three of its solicitations for continuing contracts with the new clause. A bid protest may be filed when a bidder or other interested party has reason to believe that a contract has been or is about to be awarded improperly or illegally, or that the bidder or interested party has been unfairly denied a contract or an opportunity to compete for a contract. One firm protested three solicitations that would have awarded contracts with the new clause.
In each case, the firm withdrew its protest after the Corps restructured the statement of work, issued an amendment to remove the new clause from the solicitation, and proceeded to award the contract as a contract with a different funding mechanism, such as a fully funded contract, rather than as a continuing contract. Specifically, the firm filed initial protests with three districts that issued solicitations with the new clause—San Francisco, New York, and Philadelphia. The firm alleged several bases for its protests; however, the overarching issue in the protests, which generally used the same language, was the Corps’ inclusion of the new clause in the solicitations. The firm alleged the following: Inclusion of the new clause rendered the specifications defective because it made the project schedule and duration so vague and indefinite that potential bidders could not compete intelligently and on an equal basis. The firm argued that bidders would make different assumptions involving different contingencies and might not be bidding to perform the same scope of work and that, as a result, the Corps would be precluded from determining whether the lowest bid received represented the lowest cost to the government of performing the work required. The Corps’ attempt to use the new clause violated 41 U.S.C. § 418b and Federal Acquisition Regulation Subparts 1.3 and 1.5, which require the clause to be published in the Federal Register for public comment. When the Corps receives a bid protest, the respective division office responds on behalf of the district whose solicitation is being protested. Of the three districts that received bid protests, only the division for the San Francisco District formally denied the protest. 
The May 22, 2006, decision by the Assistant Chief Counsel/Division Counsel for the South Pacific Division, among other things, denied the allegation that the clause rendered the specifications defective and stated that any assumptions a contractor may choose to make with regard to schedule, funding streams, delays, and so forth would necessarily be reflected in the bid prices. As a result, the Corps argued, as long as the bid was not unbalanced and was otherwise the lowest price, it would also be the lowest cost to the government. The decision also asserted that the Corps used the new clause prior to publishing it in the Federal Register for public comment due to “urgent and compelling circumstances,” and explained that the Corps was in the process of submitting the clause to the Federal Register for public comment through its internal procedures. After the Corps’ South Pacific Division denied the agency-level protest, the protester filed a protest with GAO on May 31, 2006. Subsequently, the Corps’ San Francisco District decided to remove the new clause from the solicitation, and the protester withdrew its protest on June 15, 2006. Similarly, the protests filed with the New York and Philadelphia Districts resulted in the districts’ removing the new clause from the solicitations. The protester subsequently withdrew its protest in both cases. In all three protests, the Corps districts then used a different funding mechanism to complete the work. Table 1 describes the projects and shows relevant dates and estimated amounts. In addition to the individual named above, Vondalee R. Hunt (Assistant Director), Tania L. Calhoun, Nancy L. Crothers, Diana C. Goody, Daniel J. Semick, and Delia P. Zee made key contributions to this report. Also contributing to this report were Joel I. Grossman, Carol M. Henn, and William T. Woods.
The U.S. Army Corps of Engineers (Corps) has had the authority to award multiyear contracts, called continuing contracts, without having received appropriations to cover the full contract amount. In 2006, Congress limited the Corps' use of such contracts by prohibiting obligations made in advance of appropriations. In response, the Corps developed a new clause that required contractors to stop work once funding for a fiscal year was expended. GAO was mandated to examine (1) the accuracy of the Corps' fiscal years 2007 and 2008 quarterly reports to Congress about continuing contracts that included the new clause, (2) the extent to which the Corps' use of continuing contracts with the new clause may have affected its execution of the Civil Works program during this time, and (3) the extent to which the Corps followed legal procedures in implementing the new clause. To conduct this work, GAO reviewed Corps documents, such as its quarterly reports and bid protests; reviewed federal procurement laws; and interviewed officials. The Corps' quarterly reports to Congress for fiscal years 2007 and 2008 about continuing contracts with the new clause were inaccurate. According to the reports, the Corps awarded 21 new continuing contracts during fiscal years 2007 to 2008: 9 for construction and 12 for operations and maintenance, ranging in value from $2.1 million to $341.5 million, for a total value of about $811 million. However, GAO found that some continuing contracts were double-counted, while others were missing from the reports. GAO also found other types of errors, such as a fully funded contract that was incorrectly included in the quarterly report as a continuing contract. These errors raise questions about the accuracy of the reports. GAO identified similar inaccuracies in the Corps' quarterly reports during its 2006 review and at that time recommended that the Corps develop a tracking system to monitor its use of these contracts.
While the Corps believes its system of asking divisions to provide information on a quarterly basis is sufficient for tracking continuing contracts, GAO disagrees. Without a tracking system supported by sufficient internal controls to ensure accuracy, errors can persist in the information provided to Congress. The Corps' use of the new clause has generally not affected the agency's ability to execute its Civil Works program. The Corps decreased its use of continuing contracts beginning around the time that the new clause was initiated. However, while acknowledging that the transition to the new clause created some initial difficulties that have since been overcome, Corps officials did not provide any examples of work being stopped on a project because funds were not available. The Corps did not comply with a legal requirement in implementing the new clause, resulting in some districts' reluctance to use it. Section 22 of the Office of Federal Procurement Policy Act (OFPP Act) generally provides that no procurement regulation that has a significant effect beyond the internal operating procedures of the agency or imposes a significant cost on contractors or offerors may take effect until 60 days after the procurement regulation is published for comment in the Federal Register. This requirement may be waived in urgent and compelling circumstances; however, the regulation must still be published in the Federal Register stating that it is temporary and providing for a public comment period of 30 days. Although the Corps has requested approval since 2006 from the Department of the Army and the Department of Defense, as it is required to do, the clause has never been published and the Corps has continued to use it. GAO believes that the Corps' argument that its pursuit of publication satisfies the statute is unpersuasive. Moreover, GAO spoke with Corps officials from districts and divisions who expressed concern about using the clause prior to its publication. 
Specifically, they are concerned that using the clause could subject the Corps to legal challenges, such as bid protests, and that such potential challenges could delay projects and increase their costs.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business. It is especially important for government agencies, where maintaining the public’s trust is essential. The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have changed the way our government, the nation, and much of the world communicate and conduct business. However, without proper safeguards, systems are vulnerable to individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. This concern is well-founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, the steady advance in the sophistication and effectiveness of attack technology, and the dire warnings of new and more destructive attacks to come. Computer-supported federal operations are likewise at risk. Our previous reports and those of agency inspectors general describe persistent information security weaknesses that place a variety of federal operations at risk of disruption, fraud, and inappropriate disclosure. Thus, we have designated information security as a governmentwide high-risk area since 1997, a designation that remains today. Recognizing the importance of securing federal agencies’ information and systems, Congress enacted the Federal Information Security Management Act of 2002 (FISMA) to strengthen the security of information and systems within federal agencies. FISMA requires each agency to use a risk-based approach to develop, document, and implement a departmentwide information security program for the information and systems that support the operations and assets of the agency. 
Congress created FDIC in 1933 to restore and maintain public confidence in the nation’s banking system. The Financial Institutions Reform, Recovery, and Enforcement Act of 1989 sought to reform, recapitalize, and consolidate the federal deposit insurance system. The act designated FDIC as the administrator of two funds responsible for protecting insured bank and thrift depositors—BIF and the SAIF. The act also designated FDIC as the administrator of the FSLIC Resolution Fund, which was created to complete the affairs of the former FSLIC and liquidate the assets and liabilities transferred from the former Resolution Trust Corporation. On February 8, 2006, the President signed into law the Federal Deposit Insurance Reform Act of 2005. Among its provisions, the act calls for the merger of the BIF and SAIF into the DIF. FDIC completed this merger on March 31, 2006. In managing these funds, the corporation has an examination and supervision program to monitor the safety of deposits held in member institutions. FDIC insures deposits in excess of $4 trillion for its 8,693 member institutions. FDIC had a budget of about $1.06 billion for calendar year 2006 to support its activities in managing the funds. For that year, it processed almost 21 million financial transactions. FDIC relies extensively on computerized systems to support its financial operations and store the sensitive information that it collects. Its local and wide area networks interconnect these systems. To support its financial management functions, the corporation relies on the NFE and several financial systems that process and track financial transactions, including premiums paid by its member institutions and disbursements made to support operations. Other systems maintain personnel information for employees, examination data for financial institutions, and legal information on closed institutions. At the time of our review, there were about 5,629 users on FDIC systems. 
Federal law delineates responsibilities for the management of computer systems at FDIC. Under FISMA, the Chairman of FDIC is responsible for, among other things, (1) providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency’s information systems and information; (2) ensuring that senior agency officials provide information security for the information and information systems that support the operations and assets under their control; and (3) delegating to the agency’s Chief Information Officer the authority to ensure compliance with the requirements imposed on the agency under FISMA. Two deputies to the Chairman—the Chief Financial Officer and Chief Operating Officer—also have information security responsibilities. The Chief Financial Officer is responsible for the preparation of financial statements and ensures that they are fairly presented and demonstrate discipline and accountability. The Chief Financial Officer is part of a senior management group that oversees the NFE. The group receives monthly system progress updates from the NFE project team. The Chief Operating Officer is responsible for planning, coordinating, evaluating, and improving programs and resource management. He also oversees the Chief Information Officer, who is responsible for developing and maintaining a departmentwide information security program and for developing and maintaining information security policies, procedures, and control techniques that address all applicable requirements. 
The objectives of our review were to assess (1) the progress FDIC has made in correcting or mitigating remaining information system control weaknesses reported as unresolved at the time of our prior review in 2005 and (2) the effectiveness of the corporation’s information system controls for protecting the confidentiality, integrity, and availability of financial and sensitive data. An integral part of our objectives was to support the opinion on internal control in GAO’s 2006 financial statement audit by assessing the degree of security over systems that support the generation of the FDIC funds’ financial statements. Our scope and methodology were based on our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized data. Focusing on FDIC’s financial systems and associated infrastructure, we evaluated the effectiveness of information security controls that are intended to prevent, limit, and detect access to computer resources (data, programs, and systems), thereby protecting these resources against unauthorized disclosure, modification, and use; provide physical protection of computer facilities and resources from unauthorized use, espionage, sabotage, damage, and theft; prevent the exploitation of vulnerabilities; prevent the introduction of unauthorized changes to application or system software; ensure that work responsibilities for computer functions are segregated so that one individual does not perform or control all key aspects of computer-related operations and thereby have the ability to conduct unauthorized actions or gain unauthorized access to assets or records without detection; and ensure the implementation of secure and effective configuration management. In addition, we evaluated aspects of FDIC’s information security program as they relate to NFE. 
This program includes assessing risk; developing and implementing policies, procedures, and security plans; promoting security awareness and providing specialized training for those with significant security responsibilities; testing and evaluating the effectiveness of controls; planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies; detecting, reporting, and responding to security incidents; and ensuring the continuity of operations. To evaluate FDIC’s information security controls and program, we identified and examined pertinent FDIC security policies, procedures, guidance, security plans, and relevant reports provided during fieldwork. In addition, we conducted tests and observations of controls in operation and reviewed corrective actions taken by the corporation to address vulnerabilities identified during our previous review. We also discussed with key security representatives, system administrators, and management officials whether information system controls were in place, adequately designed, and operating effectively. We performed our review at the FDIC computer facility in Arlington, Virginia, from September 2006 through February 2007. Our review was performed in accordance with generally accepted government auditing standards. FDIC has taken steps to address security control weaknesses. The corporation has corrected or mitigated 21 of the 26 weaknesses that we previously reported as unresolved at the completion of our calendar year 2005 audit (see app. I). For example, the corporation has developed and implemented procedures to prohibit the transmission of mainframe user and administrator passwords in plaintext across the network, established and implemented a process to monitor and report on vendor-supplied account/password combinations, and improved mainframe security monitoring controls. 
While the corporation has made important progress in strengthening its information security controls, it is still in the process of completing actions to correct or mitigate the remaining five previously reported weaknesses. The remaining corrective actions include ensuring that only authorized application software changes are implemented, limiting network access to sensitive personally identifiable and business proprietary information, effectively generating and reviewing the NFE audit reports, adequately controlling physical access to the Virginia Square building, and properly segregating incompatible system-related functions, duties, and capacities for an individual associated with the NFE. Not completing these actions could leave the corporation’s sensitive data vulnerable to unauthorized access and manipulation. Appendix I describes the previously reported weaknesses in information security controls that were unresolved at the time of our prior review and the status of the corporation’s corrective actions. Although FDIC made substantial improvements to its information system controls, unresolved and newly identified weaknesses could limit its ability to effectively protect the confidentiality, integrity, and availability of its financial and sensitive information and information systems. Specifically, we identified new weaknesses in controls related to (1) e-mail security, (2) physical security, and (3) configuration management. Although these control weaknesses do not pose significant risks of misstatement to the financial reports, they do increase the risk to FDIC’s financial and sensitive systems and information and increase the risk of unauthorized modification of data and programs, inappropriate disclosure of sensitive information, or disruption of critical operations. E-mail is perhaps the most popular system for exchanging business information over the Internet or any other computer network. 
Because the computing and networking technologies that underlie e-mail are widespread and well-known, attackers are able to develop attack methods to exploit security weaknesses. E-mail messages can be secured in various ways, including the use of digital signatures. Digital signatures can be used to ensure the integrity of an e-mail message and confirm the identity of its sender. National Institute of Standards and Technology (NIST) guidance recommends that organizations consider the implementation of secure e-mail technologies such as digital signatures to ensure the integrity of e-mail data. FDIC policy requires individual division managers to establish specific procedures regarding the use of secure e-mail technologies. FDIC did not use secure e-mail methods to protect the integrity of certain accounting data transferred over an internal communication network. The corporation relied upon unsecured e-mail transmission of accounting data instead of using more secure methods, such as securing e-mail with digital signatures or using the internal data transmission functions in NFE. Specifically, it did not use secure e-mail correspondence during monthly NFE closing processes because the Division of Finance—the division responsible for the financial environment—had not developed requirements for securing e-mail. In addition, the e-mail system could be compromised by sending e-mails using forged sender names and addresses. As a result, increased risk exists that an attacker could manipulate accounting data. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls involve restricting physical access to computer resources, usually by limiting access to the buildings and rooms in which the resources are housed, and periodically reviewing access granted to ensure that it continues to be appropriate. 
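The integrity check that digital signatures provide for e-mail, discussed above, can be illustrated with a minimal sketch. True digital signatures use asymmetric key pairs (for example, via S/MIME), so the sender cannot repudiate a message; the stand-alone sketch below instead uses a shared-key HMAC from Python's standard library solely to show how a recomputed tag detects any alteration of a message body. The key and message contents are illustrative assumptions, not FDIC data.

```python
import hashlib
import hmac

# Illustrative shared secret; real digital signatures use asymmetric
# key pairs rather than a key known to both parties.
KEY = b"illustrative-shared-secret"

def sign(message: bytes) -> str:
    """Compute an integrity tag to send alongside an e-mail body."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag on receipt; any alteration changes the digest."""
    return hmac.compare_digest(sign(message), tag)

body = b"Post $4,200.00 to account 1001 for the monthly close."
tag = sign(body)
assert verify(body, tag)                 # unaltered message passes
assert not verify(b"Post $9,200.00 to account 1001", tag)  # tampering fails
```

The same recompute-and-compare idea underlies signature verification in secure e-mail systems; what differs is that an asymmetric signature also binds the tag to the sender's identity.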
FDIC policy also requires that visitors be allowed to enter an office only after providing proof of identity, identifying the person they are visiting, signing a visitor log, obtaining a visitor badge, and being escorted at all times by the employee whom they are visiting. FDIC did not apply physical security controls in some instances. For example, an unauthorized visitor was able to enter a key FDIC facility without providing proof of identity, signing a visitor log, obtaining a visitor’s badge, or being escorted. In addition, a workstation that had access to a payroll system was located in an unsecured office. As a result, increased risk exists that unauthorized individuals could gain physical access to a key facility and to systems that have sensitive information. Configuration management involves the identification and management of security features for all hardware, software, and firmware components of an information system at a given point in time and systematically controls changes to that configuration during the system’s life cycle. The agency should have configuration management controls to ensure that only authorized changes are made to such critical components. In addition, all applications and changes to those applications should go through a formal, documented systems development process that identifies all changes to the baseline configuration. Also, procedures should ensure that no unauthorized software is installed. Patch management, a component of configuration management, is an important element in mitigating the risk associated with software vulnerabilities. Up-to-date patch installations help mitigate vulnerabilities associated with flaws in software code that could be exploited to cause significant damage. FDIC policy requires that patches be implemented within the specified time frames. 
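Patch-timeliness monitoring of the kind described above reduces to a simple deadline comparison. The sketch below flags patches whose remediation window has elapsed without installation; the risk-level windows, patch identifiers, and dates are hypothetical placeholders, not FDIC's actual policy time frames or inventory.

```python
from datetime import date, timedelta

# Hypothetical remediation windows by risk level; actual time frames
# would come from the organization's patch management policy.
WINDOWS = {"high": timedelta(days=30), "medium": timedelta(days=60)}

def overdue(patches, today):
    """Return IDs of patches whose window has elapsed without install."""
    late = []
    for p in patches:
        deadline = p["released"] + WINDOWS[p["risk"]]
        if p["installed"] is None and today > deadline:
            late.append(p["id"])
    return late

inventory = [
    {"id": "KB-101", "risk": "high", "released": date(2006, 7, 9), "installed": None},
    {"id": "KB-102", "risk": "medium", "released": date(2006, 9, 1), "installed": date(2006, 9, 20)},
]
print(overdue(inventory, date(2006, 10, 9)))  # → ['KB-101']
```

A report like the third-quarter vulnerability tally cited in the findings is essentially the output of this kind of check run across the full patch inventory.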
In addition, FDIC policy states that configuration status accounting and configuration auditing, which includes both functional and physical audits, should be performed. Configuration audits help to maintain the integrity of the configuration baseline as well as to ensure that when a significant product change is introduced, only authorized changes are being made. FDIC policy also states that project documentation should be managed and updated as it evolves over time. FDIC did not consistently implement configuration management controls for NFE. Specifically, the corporation did not develop and maintain a complete listing of all configuration items and a baseline configuration for NFE, including application software, data files, software development tools, hardware, and documentation; ensure that all significant system changes, such as parameter changes, go through a change control process; apply comprehensive patches to system software in a timely manner (for example, an FDIC report stated that in the third quarter of fiscal year 2006, software patches for 15 out of 21 high-risk vulnerabilities and 5 out of 34 medium-risk vulnerabilities were not implemented within required time frames, and another report showed that, between July 9, 2006, and October 9, 2006, eight of the nine high-risk patches that were not implemented within the required time period remained unimplemented for 42 days); review status accounting reports or perform complete functional and physical configuration audits; and update or control documents to reflect the current state of the environment and to ensure consistency with related documents. In particular, documents such as the NFE security plan, risk assessment, and contingency plan did not reflect the current environment. The NFE project team did not carry out these activities because it did not consistently follow the processes outlined in the NFE configuration management plan. 
According to FDIC officials, they were not following the plan because it has not been updated to reflect the new system development life cycle. In addition, according to an FDIC official, patches were not implemented in the specified time frames because contractors do not always follow FDIC policy. As a result, the corporation has a higher risk that NFE may not perform as intended. Although FDIC had taken steps to develop, document, and implement a corporate information security program, it did not fully implement key control activities for NFE. For example, FDIC had not sufficiently assessed risks, updated the security plan, reported computer security incidents, or updated the contingency plan to reflect the current environment for NFE. Identifying and assessing information security risks are essential steps in determining what controls are required. Moreover, by increasing awareness of risks, these assessments can generate support for the policies and controls that are adopted in order to help ensure that they operate as intended. Security testing and evaluation can be used to efficiently identify system vulnerabilities for use in a risk assessment. NIST guidance states that the risk assessment should be updated to reflect the results of the security test and evaluation. The risk assessment for NFE was not properly updated. FDIC performed a security test and evaluation after the risk assessment was performed. However, the risk assessment was not updated to include the risks associated with any of the newly identified vulnerabilities. As a result, NFE may have inadequate or inappropriate security controls that might not address the system’s true risk. A security plan provides an overview of the system’s security requirements and describes the controls that are in place—or planned—to meet those requirements. Common security controls are controls that can be applied to one or more organizational information systems. 
System-specific controls are the responsibility of the information system owner. NIST guidance states that system security plans should clearly identify which security controls have been designated as common security controls and the individual responsible for implementing the common security control. In addition, NIST guidance states that organizations should update information system security plans to address system/organizational changes. The corporation did not update the system security plan for NFE. FDIC has identified 77 management, operational, and technical common security controls established in its information system. However, the NFE security plan was not updated to clearly identify common security controls. In addition, the security plan was not updated to reflect the correct servers or recently installed mainframe hardware. As a result, increased risk exists that proper controls may not be implemented for the NFE. Even strong controls may not block all intrusions and misuse, but organizations can reduce the risks associated with such incidents if they take steps to promptly detect and respond to them before significant damage is done. In addition, analyzing security incidents allows organizations to gain a better understanding of the threats to their information and the costs of their security-related problems. Such analyses can pinpoint vulnerabilities that need to be eliminated so that they will not be exploited again. FISMA requires that agency information security programs include procedures for detecting and reporting security incidents. NIST guidance states that organizations should implement an incident handling capability for security incidents that includes preparation, detection and analysis, containment, eradication, and recovery. 
In addition, NIST guidance states that organizations should regularly review and analyze information system audit records for indications of inappropriate or unusual activity, investigate suspicious activity or suspected violations, report findings to appropriate officials, and take necessary actions. FDIC policy requires all users of the corporate information systems to report suspected computer security incidents to the Computer Security Incident Response Team (CSIRT). FDIC has implemented an incident handling program, including establishing a team and associated procedures for detecting, responding to, and reporting computer security incidents. However, the corporation did not always review events occurring in the NFE to determine whether the events were computer security incidents. For example, during our observation of the purchase order matching process, an FDIC official overrode a matching exception. Although an override exception matching report was generated, it was not reviewed to determine if it was an incident, and it was not forwarded to CSIRT. According to an official, procedures to review events in NFE were not always in place. As a result, increased risk exists that computer security incidents that relate to the NFE will not be identified. Continuity of operations, which includes disaster recovery planning, should be designed to ensure that when unexpected events occur, essential operations continue without interruption or can be promptly resumed, and critical and sensitive data are protected. These controls include procedures to minimize the risk of unplanned interruptions, along with a well-tested plan to recover critical operations should interruptions occur. FISMA requires that agencies have plans and procedures to ensure the continuity of operations for information systems that support the operations and assets of the agency. 
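The review-and-refer process described above for application events can be sketched as a simple log triage step. The event fields and action names below (such as "match_override", standing in for a purchase order matching override) are assumptions for illustration, not FDIC's actual NFE event schema or CSIRT procedures.

```python
# Illustrative audit-log triage: scan application events for actions
# that warrant referral to an incident response team for review.
SUSPICIOUS = {"match_override", "failed_login_burst", "priv_escalation"}

def flag_for_review(events):
    """Return the events that should be referred for incident review."""
    return [e for e in events if e["action"] in SUSPICIOUS]

log = [
    {"user": "clerk1", "action": "po_match"},          # routine activity
    {"user": "clerk2", "action": "match_override"},    # bypassed a control
    {"user": "admin3", "action": "priv_escalation"},
]
for event in flag_for_review(log):
    print(f"refer for review: {event['user']} performed {event['action']}")
```

The point of the sketch is that without such a filtering-and-referral step, override reports like the one generated during the purchase order matching observation are produced but never examined.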
NIST guidance states that disaster recovery plans, including contingency plans, should be maintained in a ready state that accurately reflects system requirements, procedures, and organizational structure. FDIC has developed plans for the continuity of NFE operations. To assess the effectiveness of the plans, FDIC successfully tested the NFE at its new disaster recovery site. However, the NFE contingency plan was not updated to reflect the new disaster recovery site. In addition, the plan identified servers that were not in use. As a result, FDIC has limited assurance it will be able to efficiently implement continuity of operations for the NFE in the event of an emergency when knowledgeable employees are not available. FDIC has made substantial progress in correcting previously reported weaknesses and has taken other steps to improve information security. Although five weaknesses from prior reports remain unresolved and new control weaknesses related to (1) e-mail security, (2) physical security, and (3) configuration management were identified, the remaining unresolved weaknesses previously reported and the newly identified weaknesses did not pose significant risk of misstatement in the corporation’s financial statements for calendar year 2006. However, the old and new weaknesses do increase preventable risk to the corporation’s financial and sensitive systems and information. Since FDIC did not fully integrate its NFE into its information security program, it did not fully implement key control activities for NFE, such as sufficiently assessing risks, updating the security plan, reporting computer security incidents, or updating the contingency plan to reflect the current environment. Continued management commitment to integrating the NFE into the corporate information security program will be essential to ensure that the corporation’s financial and sensitive information will be adequately protected. 
As the corporation continues to enhance the NFE, its reliance on controls implemented in this single, integrated financial system will increase. Until FDIC fully integrates NFE into the security program, its ability to maintain adequate information system controls over its financial and sensitive information will be limited. In order to sustain progress in its program, we recommend that the FDIC Chief Financial Officer and Chief Operating Officer direct that the following 12 actions be performed in a timely manner:

Require that e-mail containing or transmitting accounting data be secured to protect the integrity of the accounting data.
Train security personnel to implement the corporation’s policy on physical security of the facility.
Instruct FDIC personnel to lock rooms that contain sensitive software.
Develop a configuration item index of all configuration items for NFE using a consistent and documented naming convention.
Require that significant changes to the system, such as parameter changes, go through a formal change management process.
Implement patches in a timely manner.
Require that the NFE project team review status accounting reports and perform complete functional and physical configuration audits.
Adequately control the NFE documents so that they are up-to-date and accurately reflect the current environment.
Update the NFE risk assessment to include the risk associated with vulnerabilities identified during security testing and evaluation.
Update the NFE security plan to clearly identify all common security controls.
Develop procedures to review events occurring in the NFE to determine whether the events are computer security incidents.
Update the contingency plan to reflect the new disaster recovery site and servers that are in use.

We received written comments on a draft of this report from FDIC’s Deputy to the Chairman and Chief Financial Officer (these are reprinted in app. II). 
The Deputy acknowledged the benefit of the recommendations made as part of this year’s audit and stated that FDIC concurred with seven of our recommendations and has implemented or will implement them in the coming year. He also stated that FDIC partially concurred with our remaining five recommendations and has developed or implemented plans to adequately address the underlying risks that prompted these five recommendations, in some instances through alternative corrective actions. With regard to the five recommendations with which FDIC partially concurred, if the corporation adequately implements the corrective actions below, it will have satisfied the intent of our recommendations. Regarding our recommendation that FDIC require that e-mail containing or transmitting accounting data be secured to protect the integrity of the accounting data, the Deputy stated that by July 31, 2007, FDIC will ensure that the integrity of accounting data transmitted by e-mail is appropriately protected, and that it will evaluate the various exchanges of accounting information and identify and document where more secure communications are needed. Concerning our recommendation that FDIC instruct personnel to lock rooms that contain sensitive software, the Deputy stated that FDIC has conducted additional analysis on the software that had access to payroll information and has removed that software from the desktop. With regard to our recommendation that FDIC require that significant changes to the system, such as parameter changes, go through a formal change management process, the Deputy stated that by December 31, 2007, FDIC will have developed procedures that will include appropriate management of, and documentation standards for, parameter changes. Based on the Deputy’s comments, we have clarified our recommendation that FDIC update the NFE risk assessment to include the risk associated with vulnerabilities identified during security testing and evaluation. 
The Deputy stated that FDIC has since changed its process to require updates to the risk assessments when applications undergo major changes that affect the security of the system. Finally, with regard to the recommendation that FDIC develop procedures to review events occurring in the NFE to determine whether the events are computer security incidents, the Deputy stated that FDIC addressed this issue during the first quarter of 2007 when it established a formal process for monitoring and reviewing such events. In addition, FDIC plans to have documented procedures for elevating potential security violations to the incident handling team and for monitoring unusual events by August 31, 2007. We are sending copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member of the House Committee on Financial Services; members of the FDIC Audit Committee; officials in FDIC’s divisions of information resources management, administration, and finance; and the FDIC inspector general. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or by e-mail at wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

1. Federal Deposit Insurance Corporation (FDIC) was using live data to support application
2. Personal firewall settings for corporate examiner laptop computers that were used for remotely connecting to the network were not adequately secured.

Information Security: Federal Deposit Insurance Corporation Needs to Sustain Progress (GAO-05-487SU)

3.
Procedures were not established to prevent processes running in supervisor state in one logical partition from accessing datasets stored in another partition. 4. Procedures were not in place to identify and effectively control risks caused by sharing critical system components between production and nonproduction LPARs (logical partitions). 5. Structured query language database server configurations for many of FDIC’s financial applications were not adequately secured. 6. Procedures have not been consistently followed for authorizing, documenting, and reviewing Information Security: Federal Deposit Insurance Corporation Needs to Improve Its Program (GAO-06-619SU) 7. FDIC did not always change vendor-supplied account/password combinations. 8. FDIC did not adequately control inactive user accounts. FDIC policy requires accounts that have not been used within 60 days be deleted. 9. FDIC transmitted mainframe user and administrator passwords in plaintext across the 10. FDIC did not adequately enforce password management restrictions. 11. FDIC access authorizations did not consistently support the access rights granted to New Financial Environment (NFE) users. 12. FDIC did not adequately control access to datasets containing sensitive data critical to the integrity of loss calculations used by the Division of Insurance. 13. FDIC did not effectively limit network access to sensitive personally identifiable and business proprietary information. X 14. FDIC did not securely configure Internet-accessible remote access to its information resources. 15. FDIC permitted the use of unencrypted network protocols on its UNIX systems. 16. FDIC did not securely configure an Oracle production database. 17. FDIC did not properly secure the Apache Tomcat server that hosts a production database used by the employee time and attendance system. 18. FDIC did not securely configure its workstations. 19. FDIC laptop computers had unnecessary wireless technologies enabled. 20. 
FDIC’s Blackberry Enterprise Server and handheld devices were deployed and configured with several security weaknesses. Audit and monitoring of security-related events 21. FDIC did not effectively generate NFE audit reports or review them. 22. FDIC’s ability to monitor changes to critical mainframe datasets was inadequate. 23. FDIC did not sufficiently audit system activities on its Oracle databases. 24. FDIC did not adequately control physical access to the Virginia Square computer processing facility. 25. FDIC did not properly segregate incompatible system-related functions, duties, and capacities for an individual associated with the NFE. 26. FDIC granted NFE accounts payable users inappropriate access to perform incompatible functions. In addition to the individual named above, William F. Wadsworth, Assistant Director; Verginie A. Amirkhanian; Daniel D. Castro; Patrick R. Dugan; Edward Glagola Jr.; Mickie E. Gray; David B. Hayes; Kaelin P. Kuhn; Duc M. Ngo; Tammi L. Nguyen; Eugene E. Stevens IV; Henry I. Sutanto; and Amos Tevelow made key contributions to this report.
The Federal Deposit Insurance Corporation (FDIC) has a demanding responsibility enforcing banking laws, regulating financial institutions, and protecting depositors. As part of its audit of the calendar year 2006 financial statements, GAO assessed (1) the progress FDIC has made in correcting or mitigating information security weaknesses previously reported and (2) the effectiveness of FDIC's system integrity controls to protect the confidentiality and availability of its financial information and information systems. To do this, GAO examined pertinent security policies, procedures, and reports. In addition, GAO conducted tests and observations of controls in operation. FDIC has made substantial progress in correcting previously reported weaknesses in its information security controls. Specifically, it has corrected or mitigated 21 of the 26 weaknesses that GAO had reported as unresolved at the completion of the calendar year 2005 audit. Actions FDIC has taken include developing and implementing procedures to prohibit the transmission of mainframe user and administrator passwords in readable text across the network, implementing procedures to change vendor-supplied account/password combinations, and improving mainframe security monitoring controls. Although FDIC has made important progress improving its information system controls, old and new weaknesses could limit the corporation's ability to effectively protect the integrity, confidentiality, and availability of its financial and sensitive information and systems. In addition to the five previously reported weaknesses that are in the process of being mitigated, GAO identified new weaknesses in controls related to (1) e-mail security, (2) physical security, and (3) configuration management. Although these weaknesses do not pose significant risk of misstatement of the corporation's financial statements, they do increase preventable risk to the corporation's financial and sensitive systems and information.
In addition, FDIC has not fully integrated its new financial system--the New Financial Environment (NFE)--into its information security program. For example, it did not fully implement key control activities for the NFE. Until FDIC fully integrates the NFE into its information security program, its ability to maintain adequate system controls over its financial and sensitive information will be limited.
In the early 1990s, DOD officials recognized that the proliferation of chemical, biological, and nuclear materials that could be used to develop WMD was a growing threat. A series of terrorist attacks highlighted by the 1995 Aum Shinrikyo sarin gas attack in Tokyo’s subway system heightened concerns about U.S. vulnerability to a terrorist attack involving WMD. Senior DOD leaders, supported by a Defense Science Board study, concluded that DOD was not properly organized to focus on nonproliferation and counterproliferation. On October 1, 1998, DTRA was established, with a budget of approximately $1.7 billion and almost 2,000 military and civilian personnel, to address all aspects of the WMD threat. The agency reports to the Under Secretary of Defense for Acquisition, Technology, and Logistics, with the Under Secretary of Defense for Policy providing input into several of DTRA’s programs. Additionally, DTRA responds to the Chairman of the Joint Chiefs of Staff on matters pertaining to the agency’s support of military commanders. Table 1 provides data on DTRA’s budget and personnel since the agency’s inception. DTRA’s budget has increased by over $650 million (about 40 percent) since its establishment, of which over $450 million was due to increases in the funding of the Chemical and Biological Defense Program (CBDP). Total personnel at DTRA also have increased. DTRA is currently headquartered at Fort Belvoir, Virginia; maintains test facilities in the United States; maintains a Defense Nuclear Weapons School in New Mexico; and maintains permanent staff at other locations, including Germany, Japan, and the Russian Federation, as seen in figure 1. DTRA also maintains liaison officers at several locations, including the combatant commanders’ headquarters, the National Guard Bureau, and the Pentagon. DTRA was established in 1998 through the consolidation of three agencies and two programs, as shown in figure 2.
The Defense Special Weapons Agency tested, analyzed, and provided assistance in developing new technologies for maintaining and modernizing the nation’s nuclear weapons. The agency also worked to counter the effects of the use of chemical and biological weapons against U.S. military bases and forces. The Defense Technology Security Administration managed the DOD license review process for the export of munitions and critical technologies that have both civilian and military applications. As part of this effort, the Defense Technology Security Administration oversaw U.S. satellites launched abroad. The On-Site Inspection Agency, established as a result of the Intermediate-Range Nuclear Forces treaty, carried out on-site inspections to verify that treaty implementation was done in accordance with all treaty requirements. Throughout the 1990s, the agency’s responsibilities were expanded as new treaties were ratified, and, in 2000, the agency was asked to support the United Nations mission to monitor and eliminate WMD in Iraq. The two additional programs included in DTRA’s formation dealt extensively with the threats posed by WMD and related materials. The Cooperative Threat Reduction (CTR) program implemented a congressionally mandated program to assist the nations of the former Soviet Union in securing and eliminating their WMD stockpiles. We have undertaken several reviews of the DTRA-managed CTR program. A list of our reports concerning the CTR program appears at the end of this report. In addition, CBDP was established in 1994 to consolidate, coordinate, and integrate the chemical and biological defense requirements of all the services into a single DOD program. DTRA was given the responsibility to administer the distribution of program funds, but the agency did not directly manage the program. To integrate these components, DTRA began a strategic planning process in January 1999 and published its first strategic plan in March 2000.
DTRA used the principles of the Government Performance and Results Act of 1993 (GPRA) to guide its planning process. The act calls for agencies to develop long-term strategic plans, annual performance plans, and annual assessment reports. Also in 2000, DTRA realigned itself around four core functions: (1) threat control, (2) threat reduction, (3) combat support (support to military forces), and (4) technology development. Among these core functions, DTRA officials have stressed combat support as the agency’s first priority. Three major changes have occurred in the agency’s responsibilities, as illustrated in figure 2. First, in August 2001, responsibility for the export license review process shifted from DTRA to the reestablished Defense Technology Security Administration. According to senior officials, the export license review process did not integrate well with other DTRA functions and was more appropriately placed under the Under Secretary of Defense for Policy. Second, in March 2003, DTRA was assigned the mission to support the elimination of WMD materials found in Iraq. Third, in April 2003, DTRA was given the responsibility for managing the CBDP’s science and technology program rather than just overseeing the funds disbursement. DTRA carries out its mission to address the threat posed by WMD through four core functions: (1) threat control, (2) threat reduction, (3) combat support, and (4) technology development. First, the agency controls the threat of WMD through inspections of Russian facilities to ensure compliance with treaties limiting WMD, as well as supporting inspections of U.S. facilities by foreign inspectors. Second, DTRA works to reduce the WMD threat by securing and eliminating WMD materials, such as destroying aircraft and missiles, through the CTR program in the former Soviet Union. Third, DTRA supports military commanders by providing technical and analytical support regarding WMD threats on the battlefield and U.S. installations.
Finally, DTRA develops technologies to assist in its threat control and reduction efforts and in the support of military operations, such as developing weapons and sensor technologies to destroy or detect WMD and related materials. Figure 3 provides examples of DTRA activities in each of these areas. DTRA implements U.S. responsibilities established under four arms control treaties dealing with WMD and other treaties and agreements. DTRA conducts on-site inspections at other nations’ WMD facilities and supports on-site inspections of U.S. facilities by foreign inspectors. These inspections are carried out in accordance with agreements between the U.S. and other governments. The agency provides inspectors, transportation, and linguists in support of inspection efforts, and also provides visa and passport support for visiting inspection teams. Table 2 shows nine treaties and agreements and DTRA’s role in each. DTRA works to reduce the threat of WMD primarily through its activities with the CTR program, which assists the states of the former Soviet Union to (1) destroy WMD in the former Soviet Union, (2) safely store and transport weapons in connection with their destruction, and (3) reduce the risk of WMD proliferation. Our previous reviews of the CTR program have found that it has faced two critical challenges: the Russian government has not always paid its agreed-upon share of program costs, and Russian ministries have often denied U.S. officials access to key nuclear and biological sites (see the list of prior GAO reports at the end of this report). In addition to the CTR program, DTRA was recently tasked to secure and destroy any WMD or related materials that might be found in Iraq. The CTR program has removed the nuclear weapons that Kazakhstan, Ukraine, and Belarus inherited from the former Soviet Union, and the United States continues to work with Russia and other former Soviet states in WMD elimination programs.
According to agency documents, the CTR program had, as of October 31, 2003, overseen the destruction of 520 of 1,473 intercontinental ballistic missiles, 451 of 831 missile silos, 122 of 205 strategic bombers, and 27 of 48 strategic missile submarines that the United States and former Soviet Union agreed to destroy. WMD destruction programs continue with CTR overseeing projects to eliminate missile fuel and launcher equipment. DTRA personnel have also supervised the securing of chemical weapons and are overseeing the construction of a chemical weapons destruction facility at Shchuch’ye, Russia. DTRA also assists with the storing and transporting of WMD materials as part of the CTR program. For example, DTRA is overseeing the construction of a facility that will be used to securely store nuclear materials from weapons at Mayak, Russia. This project, however, has suffered from both a lack of committed Russian funding and access to the site. As a result, the project, once scheduled to begin accepting nuclear materials for storage in 1998, will not begin to do so until 2004. Additionally, DTRA works through the CTR program to enhance the security and safety of biological pathogens located at research centers in the former Soviet Union, such as at Novosibirsk and Obolensk. However, lack of Russian cooperation has affected DTRA’s ability to access other suspected biological facilities, and, after 4 years of effort, DOD has made little progress in addressing security concerns at the 49 biological sites where Russia and the United States have collaborative programs. DTRA works to prevent the spread of WMD through continuing contacts with former Soviet Union military personnel and providing expertise and equipment to the countries of the former Soviet Union to enhance border security. According to agency documents, in fiscal year 2002, the CTR program sponsored 423 contacts with former Soviet Union military personnel in support of various efforts to halt the spread of WMD. 
In March 2003, DTRA was also assigned the responsibility of destroying any WMD materials found in Iraq. Agency personnel accompanied combat forces into Iraq during Operation Iraqi Freedom. For example, DTRA teams were involved in searching the Tuwaitha Nuclear Research Center to recover, inventory, and safeguard several tons of non-weapons-grade uranium and other radiological materials. DTRA personnel remain in Iraq and continue to support efforts to search for WMD and WMD-related materials. If WMD are found, DTRA personnel would have the responsibility for securing and eliminating them. DTRA provides a wide variety of support to military commanders in their efforts to address WMD threats. DTRA provides liaison officers to assist military commanders in their planning and conduct of military operations. For example, DTRA personnel assisted military commanders during the recent conflicts in Afghanistan and Iraq by providing information on the appropriate weapons to use on suspected WMD storage sites, how to counter the effects of WMD that might be used on coalition forces, and how to secure and dispose of any WMD or WMD-related materials that might be found. DTRA also developed a handbook used by troops in Iraq for how to recognize and handle WMD and WMD-related materials. In addition, these efforts are supported by DTRA’s operations center, which responds to WMD-related requests for expertise, computer modeling of potential events, and support for training exercises. DTRA teams evaluate the security of personnel and facilities worldwide and assess the survivability of specific infrastructure crucial to maintaining command and control of U.S. forces. According to agency documents, DTRA evaluates 80 to 100 DOD installations per year through Joint Staff Integrated Vulnerability Assessments, which are broad in scope and focus on the overall safety and security of personnel. 
For example, agency teams assess physical security plans, review architectural and structural drawings, and perform analyses of potential blast effects to recommend procedural, structural, or other enhancements to reduce vulnerabilities. These assessments were instituted in the aftermath of (1) the Khobar Towers bombing in 1996 and (2) the publication of a subsequent DOD report in 1997 that determined there were no published standards for securing personnel and facilities. In addition, DTRA conducts Balanced Survivability Assessments to evaluate specific U.S. and allied infrastructure crucial in maintaining command and control of all U.S. forces. These assessments evaluate the ability of power, heating, computer, and communications systems to continue functioning in the event of a WMD attack, accident or natural disaster, technological failure, or sabotage. According to agency officials, DTRA teams conduct an average of 8 Balanced Survivability Assessments per year, but that number rose temporarily to 30 to meet additional requirements. DTRA provides additional support to military commanders through the Defense Nuclear Weapons School and Consequence Management Advisory Teams (CMAT). DTRA operates the Defense Nuclear Weapons School in Albuquerque, New Mexico, to train military and civilian personnel in various aspects of WMD. The school originally focused on training military personnel in the aspects of U.S. nuclear weapons and their effects. The school now includes other areas of the WMD threat, such as addressing the civil and military responses to radiological, chemical, and biological attacks or accidents and preventing the spread of WMD. Additionally, DTRA maintains and deploys teams to deal with the effects of WMD use. The agency has CMATs whose purpose is to mitigate the effects of WMD use or accidents. 
CMATs also work with military and civilian authorities by conducting training exercises that simulate the effects of WMD use or accidents in the United States and overseas. To assist in WMD threat control activities, DTRA has developed technologies that detect WMD. For example, the agency has been developing sensors to help countries of the former Soviet Union prevent smuggling of WMD or WMD-related materials across borders. DTRA has also developed computer-tracking systems to help member countries comply with the reporting obligations stated in treaties and other agreements. The agency also works to develop ways to protect military equipment and personnel from WMD effects and manages and operates various technology testing facilities, such as facilities that simulate the effects of electromagnetic energy or radiation on military equipment in the event a nuclear weapon is detonated. Additionally, DTRA has developed software to model nuclear, chemical, and biological attacks or accidents. DTRA does not have its own laboratories. Rather, the agency uses existing institutions, such as the service laboratories (Departments of the Army, Navy, and Air Force), and national laboratories as well as academic institutions. For example, in response to the military requirement for a specialized weapon to bomb caves and tunnels in Afghanistan, DTRA organized a team that employed products and expertise from the Navy, Air Force, Energy, and industry, which allowed DTRA to develop, test, and deploy a weapon that could be used to attack cave and tunnel targets. DTRA has also worked to develop specialized incendiary devices that would destroy WMD material held in a storage facility. To support DTRA’s efforts to address the WMD threat, the agency’s Advanced Systems Concepts Office (ASCO) works to identify, anticipate, and address technology gaps in order to improve agency capabilities.
For example, ASCO personnel with scientific expertise work to analyze the potential threat to military forces of pathogens such as bubonic plague, E. coli, and Ebola. DTRA also has overseen a project to test the ability of military facilities to protect against and recover from the consequences of chemical and biological attacks. From 2001 to 2003, DTRA and other military personnel undertook a series of exercises, technology demonstrations, and assessments at the U.S. Air Force base at Osan, Korea, to determine different ways to defend military forces and facilities against chemical and biological attacks. As the DOD agency responsible for addressing all aspects of WMD threats, DTRA possesses specialized capabilities and services that can assist civilian entities, including Energy and DHS. DTRA has a formal relationship with Energy’s National Nuclear Security Administration (NNSA) that coordinates and supports legislatively mandated joint DOD- Energy responsibilities for the U.S. nuclear weapons stockpile. DTRA also works with NNSA to secure nuclear materials in Russia. DTRA works with DHS offices on programs related to WMD issues, such as the International Counterproliferation Program and crisis response exercises. DTRA’s interface with DHS is through DOD’s newly established Office of the Assistant Secretary of Defense for Homeland Defense. DTRA’s relationship with DHS may be subject to change as the broader DOD-DHS relationship evolves. In addition to its relations with NNSA and DHS, DTRA also works with and supports other federal agencies, state and local governments, and governments with which the United States has bilateral agreements. DTRA works closely with Energy’s NNSA in matters pertaining to the U.S. nuclear weapons stockpile. This relationship has its roots in the 1946 Atomic Energy Act, which establishes joint DOD and NNSA responsibility for the U.S. nuclear weapons program, including ensuring the safety, security, and control of the U.S. 
nuclear weapons stockpile. These activities are conducted through the Nuclear Weapons Council (NWC), the senior-level body dedicated to these activities. DTRA plays an active role in all activities of the NWC, from participating as an observer on the NWC to membership on its subordinate bodies. In addition, both DTRA and NNSA are responsible for providing the working staff for the NWC. DTRA also works with NNSA on various nuclear weapons issues associated with the U.S. nuclear weapons stockpile stewardship program, such as nuclear survivability, nuclear surety, and nuclear weapons effects. According to both DTRA and NNSA officials, coordination between DTRA and NNSA on activities related to these issues takes place at various levels, such as serving on committees and working groups, cooperating on research, and participating on various ad hoc working groups. For example, DTRA and NNSA are currently engaged in a joint study to understand nuclear weapons effects and develop simulation techniques to address survivability of U.S. weapons systems in nuclear environments. DTRA also works with Energy to implement various agreements, research projects, and training and exercises. According to DOD documents, DTRA works with Energy on a variety of agreements related to nuclear weapons, including the Plutonium Production Reactor Agreement, the Plutonium Disposition Agreement, and the Threshold Test Ban Treaty. In addition, DTRA works with Energy laboratories on joint research projects, working groups, and field tests. For example, DTRA is currently working with the laboratories on the development of DOD’s unconventional nuclear warfare defense program, which is developing tools for detecting an unconventionally delivered nuclear or radiological weapon.
DTRA and Energy work on programs to secure nuclear warheads in Russia, but, as we reported in March 2003, these efforts face several coordination issues, such as deciding which agency will secure sites identified in both of their plans and coordinating the type of equipment used and guard force training. DTRA worked and continues to work with several government entities that are now part of DHS. For example, DTRA works with the U.S. Customs Service on the congressionally mandated International Counterproliferation Program, which is designed to prevent the illicit movement of WMD material, technology, and expertise. As the executive agent, DTRA implements this program in cooperation with the U.S. Customs Service and the Federal Bureau of Investigation. DTRA works with these two agencies to develop courses and training exercises that provide training and equipment to customs, border guards, and law enforcement personnel in 25 countries of the former Soviet Union, the Baltic region, and Eastern Europe. DTRA also works with DHS on joint exercises and interagency working groups. For example, DTRA, DHS, and Energy recently sponsored and participated in a joint atmospheric dispersion study in Oklahoma City. According to documentation, the study conducted a series of experiments to evaluate current outdoor atmospheric dispersion models and to advance the knowledge of the dispersion of contaminants in urban environments and building interiors. In addition, DTRA participates with DHS entities in interagency working groups that address issues of homeland security and preparedness. According to DTRA officials, the agency is working to share information and experiences with DHS for homeland security applications. For example, DTRA has shared with DHS information regarding its experience on demonstrations conducted as part of the unconventional nuclear warfare defense program. In addition, DTRA has also shared with DHS the WMD crisis decision guides that it developed for DOD. 
These guides provide response plans for various WMD scenarios. According to DTRA officials, DHS used the response plans for WMD scenarios that are outlined in these crisis decision guides to develop its own WMD response plans. The Office of the Assistant Secretary of Defense for Homeland Defense, within the Office of the Secretary of Defense, was recently established as the focal point for DOD’s interaction with DHS and the interagency community for homeland security issues. This newly established office is responsible for ensuring internal coordination of DOD policy direction and for coordinating activities with DHS. Therefore, the coordination of all new activities, programs, and assistance related to the threat of WMD that involve DTRA and DHS is the responsibility of this office. DTRA’s relationship with DHS is subject to the broader DOD-DHS relationship and therefore may change. The new relationship between DOD and DHS itself is still evolving because the roles and responsibilities of the two departments are still under development. DTRA has provided various capabilities and services, such as vulnerability assessments and first-responder training programs, to civilian government entities. DTRA’s capabilities for conducting vulnerability assessments are used to perform vulnerability assessments of civilian facilities and personnel. After the events of September 11, 2001, DTRA was called upon to complete vulnerability assessments of several federal buildings, such as the U.S. Capitol Building and U.S. Supreme Court, as well as vulnerability assessments of commercial U.S. ports. DTRA shares its capabilities and expertise by providing training programs to civilian entities. For example, the agency provides training to the National Guard for performing vulnerability assessments of infrastructure. DTRA also provides WMD and first-responder awareness training to state and local government entities.
In addition, DTRA provides informational support—ranging from modeling to subject matter expertise—to civilian government entities and bilateral partners through the services of its operations center. For example, the operations center modeled the potential spread of contamination resulting from a chemical spill of a derailed train by using the agency’s software for chemical weapon attack models. Finally, DTRA’s expertise is also shared with governments with which the United States has bilateral agreements. For example, according to senior DTRA officials, the WMD handbooks developed by DTRA were provided to allied forces supporting U.S. efforts in Iraq, and DTRA has conducted vulnerability assessments for allies. Finally, DTRA is also involved in interagency programs that address issues related to WMD threats. For example, DTRA supports the integration of the DOD Technical Support Working Group, which conducts a national interagency response and development program for combating terrorism. Participants in this program include DOD, Energy, State, the Federal Bureau of Investigation, and the Federal Aviation Administration. DTRA uses a strategic planning process, guided by the principles of GPRA, to prioritize its resources and assess its progress. It has developed strategic plans identifying long-term goals and short-term objectives by which it measures progress in meeting its goals. These objectives are affected by funding that comes from several appropriations, some of which must be spent on specific activities, such as the funding for the CTR program. Both the Joint Chiefs of Staff and the Office of the Secretary of Defense assess DTRA every 2 years. In 2002, DTRA completed its first internal self-assessment, which it intends to do annually. We found that the performance report resulting from the self-assessment summarized the agency’s accomplishments and activities but did not assess its progress against established annual performance goals.
DTRA has incorporated GPRA principles in its planning process. Under GPRA, agencies should prepare 5-year strategic plans that set the general direction for their efforts. These plans should include comprehensive mission statements, general and outcome-related goals, descriptions of how those goals will be achieved, identification of external factors that could affect progress, and a description of how performance will be evaluated. Agencies should then prepare annual performance plans that connect the long-term goals in the strategic plans with the day-to-day activities of program managers and staff. These plans should include measurable goals and objectives to be achieved by a program activity, descriptions of the resources needed to meet these goals, and a description of the methods used to verify and validate measured values. Finally, GPRA requires that the agency report annually on the extent to which it is meeting its goals and the actions needed to achieve or modify those goals that were not met. DTRA’s current strategic plan, issued in 2003, contains most of the elements in a strategic plan developed using GPRA standards. This plan lays out the agency’s five goals, which serve as the basis of its individual units’ annual performance plans: (1) deter the use and reduce the impact of WMD, (2) reduce the present threat, (3) prepare for future threats, (4) conduct the right programs in the best manner, and (5) develop people and enable them to succeed. These long-term goals are further broken down into four or five objectives, each with 6 to 17 measurable tasks under each objective. These tasks have projected completion dates and identify the DTRA unit responsible for the specific task.
For example, under the goal “deter the use and reduce the impact of WMD” is the objective “support the nuclear force.” A measurable task under this objective is to work with Energy to develop support plans for potential resumption of underground nuclear weapons effects testing. The technology development unit in DTRA is expected to complete this task by the 4th quarter of fiscal year 2004. The strategic plan does not discuss external factors that could affect goal achievement, but it does discuss how performance will be measured, both externally by other DOD components and internally through an annual performance report. Each unit within DTRA develops its own annual performance plan that identifies the activities to be completed each year with available funding. These plans do not use the same format, but they all include goals, performance measures for gauging achievement of those goals, and a link to the strategic plan to show how they support the long-term goals of the agency. DTRA's leadership discusses each unit's plan to validate the prioritization of resources and establish the unit's priorities. DTRA's annual performance plan consists of these units' plans and detailed budget annexes. DOD guidance now requires DTRA to submit a consolidated annual performance plan to the DOD comptroller to facilitate DOD's GPRA reporting. DTRA is in the process of making the unit plans more consistent for fiscal year 2004. Most of DTRA's funding is appropriated only for specific programs over which it has various levels of control. First, it administers the funding for CBDP. Second, it receives money that Congress provides solely for the CTR program, which DTRA manages under congressional direction. Third, it receives funding that it can spend according to its own priorities, while meeting certain mission requirements, such as treaty implementation work. 
Fourth, it receives reimbursements from other federal entities for some activities, such as vulnerability assessments conducted for non-DOD agencies. Figure 4 shows the funding profile for DTRA in fiscal year 2004. [Figure 4 legend: Chemical Biological Defense Program (CBDP); funds to DTRA from the Department of Defense; Cooperative Threat Reduction program (fenced-off funding).] As shown in figure 4, DTRA's administration of CBDP includes funds that it uses, distributes, and manages. DTRA uses a portion of the CBDP funds for large-scale technology demonstration projects, such as a project that focused on restoring operations at bases attacked by chemical or biological agents. The agency distributes a large portion of the CBDP funds to others for various purposes, such as procuring chemical suits for the military forces. In April 2003, DTRA was given responsibility for managing the CBDP's Science and Technology projects, which are conducted by various laboratories and research institutes throughout the country. DTRA undergoes two DOD reviews—the Biennial Defense Review commissioned by the Office of the Secretary of Defense and the Combat Support Agency Review conducted for the Chairman of the Joint Chiefs of Staff. These reviews focus on how well DTRA meets its customers' requirements as a combat support agency. Overall, these two reviews have concluded that DTRA supports the requirements of the operating military forces and provides useful products and services. The most recent biennial review was issued in December 2002. DTRA was assessed on its combat support, technology development, and threat reduction and control efforts. DTRA's efforts at threat reduction and control received high satisfaction ratings from the customers surveyed. The agency received acceptable satisfaction ratings in combat support but had below-average ratings in the area of technology development. 
In 2001, the Combat Support Agency Review Team conducted an assessment of DTRA's responsiveness and readiness to support operating forces in the event of war or threat to national security. The Chairman of the Joint Chiefs of Staff is required by law to conduct assessments of all combat support agencies every 2 years. The review team went to the commands supported by DTRA and conducted extensive interviewing and fieldwork regarding the support provided by DTRA. In the 2001 assessment, DTRA was commended for significant improvements in customer orientation and combat support focus. DTRA was found to be ready to support the requirements of the operating forces. A major finding in the assessment concerned DTRA's ongoing work on decontamination standards for airbases and strategic air and sealift assets. The study acknowledged that DTRA was supporting the development of these standards but concluded that, as DOD's center of WMD expertise, DTRA needed to provide commanders with the best information currently available rather than wait until all studies had been completed. A Combat Support Agency Review Team official stated that DTRA has addressed the findings of the 2001 assessment and that the 2003 assessment, delayed by operations in Iraq, should be released in early 2004. As part of the GPRA process, DTRA produced its first annual performance assessment in 2002. GPRA requires that agencies report on the extent to which they are meeting their annual performance goals and the actions needed to achieve or modify the goals that have not been met. DTRA's performance report did not compare the agency's achievements to its goals, discuss the areas where DTRA fell short of its goals, or discuss DTRA's plans to address goals that it did not achieve. For example, in the threat control area, the agency discussed the number of missions conducted and the equipment provided under the International Counterproliferation Program without stating the program's goals. 
In the threat reduction area, the report discussed the number of weapons systems eliminated in the former Soviet Union and other achievements, such as implementing security measures over chemical stockpiles at two sites, again without discussing the goals of the program. In the area of combat support, the report discussed the number of vulnerability and survivability assessments, training exercises of all types, and training courses provided but did not discuss how many of each were planned. Finally, in the technology development area, the report discussed several technologies developed or under development but did not discuss the agency's plans for the year. See figure 5 for a comparison of what is expected in an annual performance report and what DTRA's report contained. Although this information is not in DTRA's performance report, we found that DTRA leadership meets quarterly to assess progress in meeting each unit's goals and discuss activities that are not on track. Further, DTRA leadership discusses what needs to be done to get on track and whether goals are unrealistic or not within its control. For example, according to agency officials, they have in the past transferred funding from CTR programs that were having problems into successful CTR programs, because congressionally provided funds must be spent within a certain time frame and would otherwise be lost. When DTRA was established in 1998, it modeled its strategic planning process on GPRA to prioritize resources and assess progress toward its organizational goals. Although DTRA officials do measure progress against these goals in quarterly reviews, the agency's performance report does not capture the findings from these reviews. The performance report does not compare accomplishments and activities with established goals and objectives, nor does it explain what actions are needed to achieve or modify goals that are not met. 
Providing this information would allow decision makers outside of DTRA to have better information regarding DTRA's performance. We recommend that the Director of DTRA improve the agency's annual performance report by comparing the agency's actual performance against planned goals and, where appropriate, explaining why the goals were not met and describing the agency's plan for addressing unmet goals in the future. DTRA provided written comments on a draft of this report, which are reproduced in appendix I. In these comments, DTRA concurred with our recommendation to improve its annual performance report by including a comparison of the agency's actual performance against planned goals and, where appropriate, explaining why goals were not met and describing the agency's plan for addressing these unmet goals in the future. DTRA stated that it is refining its performance report methodology to better address the linkage of reported performance to planned goals and future efforts. DTRA also separately provided technical comments that we discussed with relevant officials and incorporated in the text of the report where appropriate. To report on DTRA's mission and the efforts it undertakes to fulfill that mission, we reviewed agency documentation. Specifically, we reviewed historical documents, including documentation of interviews of the DOD senior officials responsible for the creation of DTRA, and other agency mission documentation. We relied on our prior work that reviewed specific DTRA projects. In addition, we interviewed DTRA officials, including the agency's Director, senior leadership from each of DTRA's units responsible for the agency's mission, other DTRA staff, and DTRA contractor personnel. Finally, we attended a 3-day DTRA liaison officer training class to learn how DTRA trains its liaison officers about the variety of capabilities and services it can offer to military forces in the field. We did not assess the effectiveness of DTRA's programs. 
To discuss DTRA’s relationship with other government entities, we reviewed the agency’s documentation of programs and activities that it undertakes with other government entities. We reviewed documents provided by DTRA and NNSA staff regarding NWC responsibilities. In addition, we interviewed DTRA, DOD, Energy, and NNSA officials about DTRA’s coordination with Energy and NNSA. We relied on documentation and discussions with DOD officials regarding the nature of DTRA’s relationship with DHS. We also relied upon our previous audits reviewing DHS and DOD to ascertain the nature of the relationship. To determine how DTRA prioritizes its resources to meet its mission objectives, we reviewed DTRA’s 2000, 2001, and 2003 strategic plans. We reviewed supporting documentation, including budget documents, program and project plans, and internal and external assessments of DTRA. Specifically, we compared DTRA’s strategic plan, each unit’s annual performance plans for fiscal years 2002 and 2003, and documentation on the units’ ongoing assessments of their activities with what we have reported should be found in GPRA-based documents. We met with DTRA officials to discuss the agency’s planning and review process and with officials from the Office of the Secretary of Defense to discuss their assessments of DTRA. We also relied on related prior GAO reports. We performed our review from April 2003 to December 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested congressional committees, the Secretary of Defense, and the Director of the Defense Threat Reduction Agency. We will also make copies available to others upon request. In addition, this report will be available at no cost on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8979 if you or your staff have any questions about this report. Key contributors to this report were F. 
James Shafer, Hynek Kalkus, Monica Brym, Tim Wilson, Etana Finkler, Lynn Cothern, Martin de Alteriis, and Ernie Jackson. Cooperative Threat Reduction Program Annual Report. GAO-03-1008R. Washington, D.C.: July 18, 2003. Cooperative Threat Reduction Program Annual Report. GAO-03-627R. Washington, D.C.: April 8, 2003. Weapons of Mass Destruction: Additional Russian Cooperation Needed to Facilitate U.S. Efforts to Improve Security at Russian Sites. GAO-03-482. Washington, D.C.: March 24, 2003. Weapons of Mass Destruction: Observations on U.S. Threat Reduction and Nonproliferation Programs in Russia. GAO-03-526T. Washington, D.C.: March 5, 2003. Cooperative Threat Reduction Program Annual Report. GAO-03-341R. Washington, D.C.: December 2, 2002. Nuclear Nonproliferation: U.S. Efforts to Help Other Countries Combat Nuclear Smuggling Need Strengthened Coordination and Planning. GAO-02-426. Washington, D.C.: May 16, 2002. Cooperative Threat Reduction: DOD Has Adequate Oversight of Assistance, but Procedural Limitations Remain. GAO-01-694. Washington, D.C.: June 19, 2001. Biological Weapons: Effort to Reduce Former Soviet Threat Offers Benefits, Poses New Risks. GAO/NSIAD-00-138. Washington, D.C.: April 28, 2000. Weapons of Mass Destruction: Some U.S. Assistance to Redirect Russian Scientists Taxed by Russia. GAO/NSIAD-00-154R. Washington, D.C.: April 28, 2000. Cooperative Threat Reduction: DOD's 1997-98 Reports on Accounting for Assistance Were Late and Incomplete. GAO/NSIAD-00-40. Washington, D.C.: March 15, 2000. Weapons of Mass Destruction: U.S. Efforts to Reduce the Threats from the Former Soviet Union. GAO/T-NSIAD/RCED-00-119. Washington, D.C.: March 6, 2000. Weapons of Mass Destruction: Effort to Reduce Russian Arsenals May Cost More, Achieve Less Than Planned. GAO/NSIAD-99-76. Washington, D.C.: April 13, 1999. Cooperative Threat Reduction: Review of DOD's June 1997 Report on Assistance Provided. GAO/NSIAD-97-218. Washington, D.C.: September 5, 1997. 
Cooperative Threat Reduction: Status of Defense Conversion Efforts in the Former Soviet Union. GAO/NSIAD-97-101. Washington, D.C.: April 11, 1997. Weapons of Mass Destruction: DOD Reporting on Cooperative Threat Assistance Has Improved. GAO/NSIAD-97-84. Washington, D.C.: February 27, 1997. Weapons of Mass Destruction: Status of the Cooperative Threat Reduction Program. GAO/NSIAD-96-222. Washington, D.C.: September 27, 1996. Nuclear Nonproliferation: U.S. Efforts to Help Newly Independent States Improve Their Nuclear Material Controls. GAO/T-NSIAD/RCED-96-118. Washington, D.C.: March 13, 1996. Nuclear Nonproliferation: Status of U.S. Efforts to Improve Nuclear Material Controls in Newly Independent States. GAO/NSIAD/RCED-96-89. Washington, D.C.: March 8, 1996. Weapons of Mass Destruction: DOD Reporting on Cooperative Threat Reduction Assistance Can Be Improved. GAO/NSIAD-95-191. Washington, D.C.: September 29, 1995. Weapons of Mass Destruction: Reducing the Threat from the Former Soviet Union-An Update. GAO/NSIAD-95-165. Washington, D.C.: June 17, 1995. Weapons of Mass Destruction: Reducing the Threat from the Former Soviet Union. GAO/NSIAD-95-7. Washington, D.C.: October 6, 1994. Soviet Nuclear Weapons: U.S. Efforts to Help Former Soviet Republics Secure and Destroy Weapons. GAO/T-NSIAD-93-5. Washington, D.C.: March 9, 1993. Soviet Nuclear Weapons: Priorities and Costs Associated with U.S. Dismantlement Assistance. GAO/NSIAD-93-154. Washington, D.C.: March 8, 1993. Russian Nuclear Weapons: U.S. Implementation of the Soviet Nuclear Threat Reduction Act of 1991. GAO/T-NSIAD-92-47. Washington, D.C.: July 27, 1992. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. 
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
The Defense Threat Reduction Agency (DTRA), within the Department of Defense (DOD), plays a key role in addressing the threats posed by weapons of mass destruction (WMD). Since the September 11, 2001, attacks, the visibility of DTRA's role has increased as federal agencies and military commanders have looked to the agency for additional support and advice. GAO was asked to report on DTRA's (1) mission and the efforts it undertakes to fulfill this mission; (2) relationship with other government entities, specifically the Department of Energy and the Department of Homeland Security (DHS); and (3) process that it uses to prioritize resources and assess progress toward organizational goals. Since its establishment in 1998, DTRA has worked to address the threat of WMD. DTRA addresses WMD threats through four core functions: threat control, threat reduction, combat support, and technology development. The agency supports the implementation of arms control treaties by conducting inspections in other countries and by supporting inspections of U.S. facilities, reduces the threat of WMD by eliminating and securing weapons and materials in the former Soviet Union, supports military commanders by providing technical and analytical support regarding WMD, and develops technologies that support efforts to address the WMD threat. DTRA also uses its specialized capabilities and services in various ways to support other government efforts to address WMD threats. DTRA has a formal relationship with Energy to maintain the U.S. nuclear weapons stockpile. DTRA's relationship with DHS is subject to the broader DOD-DHS relationship and may change as the relationship between DOD and DHS evolves. The agency uses a strategic planning process modeled on the Government Performance and Results Act of 1993 (GPRA) to prioritize its resources and assess progress toward its organizational goals. 
DTRA's planning process identifies long-term goals, establishes short-term objectives by which to measure progress in meeting goals, and collects data to assess progress. DTRA's planning process is influenced by funding, most of which is appropriated for specific programs. GAO found that the performance report resulting from DTRA's internal self-assessment summarized the agency's accomplishments and activities but did not compare them with established goals and objectives or explain the actions needed to achieve or modify unmet goals, as called for under GPRA.
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to discuss our work on federal advisory committees as the Subcommittee explores possible changes to the Federal Advisory Committee Act (FACA) and the advisory committee process. Last November we presented to the Subcommittee an overview of advisory committees since 1993. We have issued two reports on FACA since then on issues that you, Mr. Chairman, and Senator John Glenn asked us to examine. The most recent of these reports, which is being released today, gathered the views of federal advisory committee members and federal agencies on specific FACA matters. The other report, which was issued last month, assessed the General Services Administration’s (GSA) efforts in carrying out its oversight responsibilities under FACA. My statement today will focus on these two reports, as you requested. As you are well aware, federal agencies often receive advice from advisory committees, and this advice covers a range of topics and issues, including national policy and scientific matters. In fiscal year 1997, federal agencies could turn to 963 advisory committees for advice. Most of these committees were discretionary; that is, they were created by agencies acting under their own authority or were authorized—but not mandated—by Congress. The rest were mandated by Congress or the President. Congress has long recognized the importance of federal agencies receiving advice from knowledgeable individuals outside of the federal bureaucracy. Nevertheless, Congress enacted FACA in 1972 out of concern that federal advisory committees were proliferating without adequate review, oversight, or accountability. FACA provisions are intended to ensure that (1) valid needs exist for establishing and continuing advisory committees, (2) the committees are properly managed and their proceedings are as open to the public as is feasible, and (3) Congress is regularly informed of the committees’ activities. 
FACA established within GSA a Committee Management Secretariat responsible for all matters relating to advisory committees. GSA has developed guidelines to assist agencies in implementing FACA; has provided training to agency officials; and was instrumental in creating, and has collaborated with, the Interagency Committee on Federal Advisory Committee Management. Although FACA was enacted to temper the growth in advisory committees, the number of advisory committees grew steadily from fiscal year 1988 until fiscal year 1993, when the number totaled 1,305. In February 1993, the President issued Executive Order 12838, which directed agencies to reduce the number of discretionary advisory committees by at least one-third by the end of fiscal year 1993. Under authority provided by the executive order, the Office of Management and Budget (OMB) established ceilings for each agency on its maximum allowable number of discretionary committees. Subsequently, the number of advisory committees declined from 1,305 in fiscal year 1993 to 963 in fiscal year 1997, the most recent fiscal year for which complete data are available. Although the number of advisory committees has decreased, the average number of members per committee and the average cost per committee have increased. Between fiscal years 1988 and 1997, the average number of members per advisory committee increased from about 21 to 38, and the average cost per advisory committee increased from $90,816 to $184,868. In constant 1988 dollars, the average cost per advisory committee increased from $90,816 to $140,870 over the same period. A total of 36,586 individuals served as members of the 963 committees in fiscal year 1997. According to data published by GSA, the cost to operate the 963 committees last fiscal year was about $178 million. To gather the views of advisory committee members on committee operations for our report being released today, we surveyed a statistically representative sample of advisory committee members. 
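As a plausibility check, the per-committee averages above can be reproduced from the totals the testimony reports; the sketch below uses only figures stated in the text, except that the 1988-to-1997 price deflator is implied by the nominal and constant-dollar costs rather than stated.

```python
# Cross-check of the fiscal year 1997 advisory committee averages.
# All inputs are figures stated in the testimony.
total_cost_fy97 = 178_000_000   # reported total operating cost of all committees
committees_fy97 = 963           # number of advisory committees
members_fy97 = 36_586           # total individuals serving as members

avg_cost = total_cost_fy97 / committees_fy97   # roughly $184,800 per committee
avg_members = members_fy97 / committees_fy97   # roughly 38 members per committee

# The reported nominal ($184,868) and constant-1988-dollar ($140,870)
# average costs imply the cumulative price deflator, which is not stated.
implied_deflator = 184_868 / 140_870           # roughly 1.31
```

Both averages agree with the reported figures to within rounding of the $178 million total.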
The questionnaire responses we received from 607 members are generalizable to the approximately 28,500 committee members for whom we had names and addresses. We also sent a questionnaire to 19 federal agencies to obtain their views on FACA requirements, and all 19 completed the questionnaire. These 19 agencies account for about 90 percent of the federal advisory committees. For our report on GSA's oversight, we reviewed committee charters and justification letters, annual reports for advisory committees, and other pertinent documents; applicable laws and regulations; and GSA's guidance to federal agencies. We also interviewed Committee Management Secretariat officials at GSA and committee management officers at nine agencies. The information from these two reports led us to three general observations. 1. Advisory committees appear to be adhering to the requirements of FACA and Executive Order 12838. These requirements do not appear to be overly burdensome to agencies. 2. Concerns surfaced about certain advisory committee requirements that the Subcommittee may wish to explore in its consideration of FACA. 3. GSA has fallen short of fulfilling its FACA oversight responsibilities. In response to our June 1998 report, GSA said it will take immediate action to improve its oversight. I will turn now to each of these observations in more detail. In examining the responses of advisory committee members to our questionnaire, we determined the overall response to each question and, in addition, separately reported the responses of peer review panel members and general advisory committee members where appropriate. The answers the committee members gave to our survey showed that, generally, they believed that their advisory committees were providing balanced and independent advice and recommendations. 
The committee members also reported that they believed their committees had a clear and worthwhile purpose and that the committees’ advice and recommendations were consistent with that purpose and considered by the agencies. These responses are shown graphically in the following two figures, which group together by topic a number of the specific questions that we asked committee members. FACA sets out requirements for agencies and advisory committees to follow, and we asked the 19 agencies about their perceptions of how useful or burdensome those requirements were. With regard to the requirements in general, figure 3 shows the range of agencies’ responses. The largest number of agencies considered the requirements to be useful. We also asked agencies whether FACA had prohibited them from receiving or soliciting input on issues or concerns from public groups (other than from advisory committees). Most of the agencies—16 of the 19—answered no. There has been some question about whether the possibility of litigation over compliance with FACA requirements has inhibited agencies from forming new advisory committees. The most frequent response—received from 14 of the 19 agencies—was that this possibility did not inhibit the formation of new committees. As I noted earlier, Executive Order 12838 established ceilings for each agency on its maximum allowable number of discretionary advisory committees. A majority of the agencies (12) said that the ceilings did not deter them from seeking to establish new advisory committees. Seven agencies, however, said the ceilings did deter them. An agency could request approval from OMB to establish a committee that would place it over its ceiling. Two of the seven agencies had done so during fiscal years 1995-1997, and OMB approved their requests. 
Although committee members and agencies responding to our questionnaires generally provided a more positive than negative image of FACA, their responses also pointed to concerns and issues that the Subcommittee may wish to explore in its consideration of FACA. We list these concerns in no particular order of priority. About 13 percent of the general advisory committee members said that agency officials had asked their advisory committees on occasion to give advice or make recommendations on the basis of inadequate data or analysis. A majority of the 19 agencies reported that two FACA requirements—preparing an annual report on closed advisory committee meetings and filing advisory committee reports with the Library of Congress—required little labor on their part but offered little value, at least in the agencies’ estimation. Seven agencies offered suggestions for changing the FACA requirements, including two that suggested that rechartering be required every 5 years instead of the current 2-year cycle. Under FACA, peer review panels are treated as advisory committees, and six agencies indicated that they used peer review panels. Five of these agencies said that panels should be exempt from some, most, or all FACA requirements. Agencies identified 26 congressionally mandated committees that they believed should be terminated. GSA regulations allow agencies to determine whether members of the public may speak at advisory committee meetings. (Members of the public are allowed to submit their remarks in writing.) All 19 agencies allowed members of the public to speak before at least some advisory committees. However, agencies placed restrictions on the public’s ability to speak at committee meetings (e.g., only if time permitted), and the restrictions varied from agency to agency. Advisory committees may also have subcommittees. 
Meetings of subcommittees may be exempt from FACA requirements, and agencies reported that about 27 percent of the meetings subcommittees held during fiscal year 1997 were not covered by FACA. For these meetings, the subcommittees may voluntarily follow FACA requirements. However, the extent to which the requirements are followed appears to vary. For example, of the eight agencies that responded, only two said Federal Register notices were given for all or most subcommittee meetings. Five said a designated federal officer attended all or most subcommittee meetings. Although 16 agencies said FACA had not prohibited them from soliciting or receiving input from the public, 3 agencies said it had prohibited them. One agency said that it had to limit its prior practice of forming working groups or task forces to address specific local projects or programs. Another agency said that FACA had made it more cumbersome to seek citizen input because of the staff time required to complete FACA paperwork. And the third agency said that solicitation of a consensus opinion from a task force or working group could lead to that task force or working group being considered subject to FACA. Finally, there appears to be some concern among agencies about the possibility of being sued for noncompliance with FACA if they obtain input from parties who are outside of the agency and its advisory committees. Although 10 agencies said the possibility of such litigation has inhibited them to little or no extent from obtaining outside input independent of FACA, 8 agencies said that it has inhibited them to some, a moderate, or a very great extent. The Director of GSA’s Committee Management Secretariat said that the responses from committee members and agencies had suggested areas that should be examined further, several of which GSA already had been examining and others that GSA plans to examine. 
Although the GSA Committee Management Secretariat does not have authority to stop the formation or continuation of an advisory committee, FACA and GSA regulations assign it certain responsibilities for overseeing the federal advisory committee program. These responsibilities include ensuring that advisory committees are established with complete charters and justification letters; conducting a comprehensive review annually to independently assess whether each advisory committee should be continued, merged, or terminated; submitting information to the President in time to meet the statutory due date for the President’s annual report to Congress on advisory committees; and ensuring that agencies provide Congress with follow-up reports on recommendations made by presidential advisory committees. We concluded in our June report that the Secretariat had not carried out each of these four responsibilities. For example, even though all charters and justification letters had been reviewed by the Secretariat, 36 percent of the 203 charters and 38 percent of the 107 letters from October 1996 through July 1997 that we reviewed were missing one or more items required by FACA or GSA regulations. When reviewing the advisory committees’ annual reports for fiscal year 1996, the Secretariat did not independently assess whether committees should be continued, merged, or terminated. For 8 of the last 10 annual presidential reports on advisory committees, GSA submitted its report to the President after the President’s report was due to Congress. The Secretariat did not ensure that agencies prepared for Congress the 13 follow-up reports required on recommendations made by presidential advisory committees in fiscal years 1995 and 1996; in fact, none had been prepared. Based on our findings, we recommended that the GSA Administrator direct the Committee Management Secretariat to fully carry out the responsibilities assigned to it by FACA in a timely and accurate manner. 
In response to that recommendation, the GSA Administrator said the Associate Administrator for Governmentwide Policy will ensure that the Committee Management Secretariat takes immediate and appropriate action to implement our recommendation. In summary, there appear to be areas in which FACA’s requirements warrant a fresh look. In addition, there is room for GSA’s Committee Management Secretariat to improve its fulfillment of its FACA oversight responsibilities. GSA says that it is acting on both fronts. Still, the Subcommittee may wish to explore the concerns surfaced in our reports as it considers ways to improve FACA. Mr. Chairman, this concludes my statement. I will be pleased to answer any questions you or other Members of the Subcommittee may have.
GAO discussed the: (1) views of federal advisory committee members and federal agencies on specific Federal Advisory Committee Act (FACA) matters; and (2) General Services Administration's (GSA) efforts in carrying out its oversight responsibilities under FACA. GAO noted that: (1) advisory committees appear to be adhering to the requirements of FACA and Executive Order 12838, which led to the establishment of ceilings for each agency on the number of discretionary advisory committees; (2) these requirements do not appear to be overly burdensome to agencies; (3) although the responses of committee members and agencies portrayed a more positive than negative image of FACA, their responses did raise concerns and issues that the House Committee on Government Reform and Oversight, Subcommittee on Government Management, Information, and Technology may wish to explore in its consideration of FACA; (4) there appears to be some concern among agencies about the possibility of being sued for noncompliance with FACA if they obtain input from parties who are outside of the agency and its advisory committees; (5) GSA's Committee Management Secretariat has fallen short of fulfilling its FACA oversight responsibilities; (6) further, GSA did not ensure that the advisory committees were established with complete charters and justification letters; (7) 36 percent of the 203 advisory committee charters and 38 percent of the 107 justification letters from October 1996 through July 1997 that GAO reviewed were missing one or more items required by FACA or GSA regulations; and (8) GSA said that it will take immediate action to improve its oversight.
For the purpose of the SBLF program, the Small Business Jobs Act of 2010 defines qualified small business lending—as defined in an institution’s quarterly regulatory filings (Call Reports)—as one of the following: owner-occupied nonfarm, nonresidential real estate loans; commercial and industrial loans; loans to finance agricultural production and other loans to farmers; and loans secured by farmland. In addition, qualifying small business loans cannot be for more than $10 million, and the business may not have more than $50 million in revenue. The act specifically prohibits Treasury from accepting applications from institutions that are on the Federal Deposit Insurance Corporation’s (FDIC) problem bank list or have been removed from that list during the previous 90 days. The initial baseline small business lending amount for the SBLF program was the average amount of qualified small business lending that was outstanding for the four full quarters ending on June 30, 2010, and the dividend or interest rates paid by an institution are adjusted by comparing future lending against this baseline. Also, the institution is required to list any loans resulting from mergers and acquisitions so that its qualified small business lending baseline is adjusted accordingly. Fewer institutions applied to SBLF than initially anticipated, in part because many banks did not anticipate that demand for small business loans would increase. The institutions that applied to and were funded by SBLF were primarily institutions with total assets of less than $500 million. In addition, in our 2011 report, we reported that the lack of clarity by Treasury in explaining the program’s requirements created confusion among applicants, and Treasury faced multiple delays in implementing the SBLF program and disbursing SBLF funds by the statutory deadline of September 27, 2011. 
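The baseline definition above is a simple rule: filter loans to the four qualifying categories and the $10 million loan and $50 million revenue limits, then average the qualifying amounts outstanding over the four quarters. A minimal sketch follows; the category keys, data layout, and function names are illustrative assumptions, not Treasury's actual reporting schema.

```python
# Hypothetical sketch of the SBLF qualified-lending baseline. The four
# categories and the $10M/$50M limits come from the act; everything else
# (field names, data structures) is illustrative.

QUALIFYING_CATEGORIES = {
    "owner_occupied_cre",        # owner-occupied nonfarm, nonresidential real estate
    "commercial_industrial",     # commercial and industrial loans
    "agricultural_production",   # agricultural production and other loans to farmers
    "farmland_secured",          # loans secured by farmland
}

def qualifies(loan):
    """A loan counts toward qualified small business lending if it falls in a
    qualifying category, is $10 million or less, and the borrower has no more
    than $50 million in revenue."""
    return (loan["category"] in QUALIFYING_CATEGORIES
            and loan["amount"] <= 10_000_000
            and loan["borrower_revenue"] <= 50_000_000)

def baseline(quarterly_loans):
    """Average qualified lending outstanding over the four full quarters
    ending June 30, 2010 (quarterly_loans is a list of four quarters)."""
    totals = [sum(l["amount"] for l in quarter if qualifies(l))
              for quarter in quarterly_loans]
    return sum(totals) / len(totals)
```

Future quarterly lending is then compared against this baseline to set the dividend or interest rate.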
The amount of funding a bank received under the SBLF program depended on its asset size as of the end of the fourth quarter of calendar year 2009. Specifically, if the qualifying bank had total assets of $1 billion or less, it was eligible for SBLF funding that equaled up to 5 percent of its risk-weighted assets. If the qualifying bank had assets of more than $1 billion but less than $10 billion, it was eligible for funding that equaled up to 3 percent of its risk-weighted assets. The SBLF program provided an option for eligible institutions to refinance preferred stock or subordinated debt issued to the Treasury through the Troubled Asset Relief Program’s (TARP) Capital Purchase Program (CPP). Participating SBLF banks must pay dividends or interest of 5 percent per year initially to Treasury, with reduced rates available if they increase their small business lending. Specifically, the dividend rate payable will decrease as banks increase small business lending over their baselines. While the dividend rate will be no more than 5 percent for the first 2 years, a bank can reduce the rate to 1 percent by generating a 10 percent increase in its lending to small businesses compared with its baseline. After 2 years, the dividend rate on the capital will increase to 7 percent if participating banks have not increased their small business lending. After 4.5 years, the dividend rate on the capital will increase to 9 percent for all banks regardless of a bank’s small business lending. For S-corporations and mutual institutions, the initial interest rate was at most 7.7 percent. The rate would fall as low as 1.5 percent if these institutions increase their small business lending by 10 percent or more from the previous quarter. For CDLFs, the initial dividend rate will be 2 percent for the first 8 years. After 8 years, the rate will increase to 9 percent if the CDLF has not repaid the SBLF funding. 
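The rate schedule described above for bank participants can be sketched as a small rule function. This is a hedged illustration: the endpoint rates (5, 1, 7, and 9 percent) and the thresholds (10 percent growth, 2 years, 4.5 years) are from the program description, but the intermediate steps between 0 and 10 percent growth are an assumption; Treasury's actual rate table is more granular.

```python
# Hypothetical sketch of the SBLF dividend-rate schedule for bank
# participants. Endpoint rates and thresholds follow the program rules
# described above; intermediate rates are an illustrative assumption.

def sblf_dividend_rate(years_in_program, lending_growth_pct):
    """Annual dividend rate (percent) given years in the program and growth
    in qualified small business lending over the baseline (percent)."""
    if years_in_program >= 4.5:
        return 9.0   # after 4.5 years: 9 percent regardless of lending
    if years_in_program >= 2 and lending_growth_pct <= 0:
        return 7.0   # after 2 years with no increase over the baseline
    if lending_growth_pct >= 10:
        return 1.0   # 10 percent or more growth earns the minimum rate
    if lending_growth_pct > 0:
        # Assumed linear step-down from 5 toward 1 percent (illustrative).
        return 5.0 - 0.4 * lending_growth_pct
    return 5.0       # initial rate with no lending growth
```

Under this schedule, a bank that grows qualified lending 12 percent over its baseline in the first two years pays 1 percent, while one with no growth pays 5 percent initially and 7 percent after year two.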
This structure is designed to encourage CDLFs to repay the capital investment by the end of the 8-year period. Treasury will allow an SBLF participant to exit the program at any time, with the approval of its regulator, by repaying the funding provided along with dividends owed for that period. Under the act, Treasury has a number of reporting requirements to Congress related to SBLF: (1) monthly reports describing all of the transactions made under the program during the reporting period; (2) a semiannual report (for the periods ending each March and September) providing all projected costs and liabilities and all operating expenses; and (3) a quarterly report known as the Use of Funds Report. SSBCI was established to support existing and new state programs that support private financing to small businesses and small manufacturers that, according to Treasury, are not obtaining the loans or investments they need to expand and to create jobs. The act allowed Treasury to provide SSBCI funding for two state program categories: capital access programs (CAP) and other credit support programs (OCSP). For both CAP and OCSPs, lenders are required to have at least 20 percent of their own capital at risk in each loan. Also, origination and annual utilization fees are determined by each state to defray the program’s cost. Loan terms, such as interest and collateral, are typically negotiated between the lender and the borrower, although in some cases loan terms are subject to state approval and, in many cases, the state and lender will discuss and negotiate loan terms and guarantee options prior to reaching agreement to approve the loan and issue a guarantee. A CAP is a loan portfolio insurance program wherein the borrower and lender, such as a small business owner and a bank, contribute to a reserve fund held by the lender. 
Under a CAP, when a participating lender originates a loan, the lender and borrower combine to contribute an amount equal to a percentage of the loan to a loan reserve fund, which is held by the lender. Under SSBCI, the contribution must be from 2 percent to 7 percent of the amount borrowed. Typically, the contribution ranges from 3 percent to 4 percent. The state then matches the combined contribution and sends that amount to the lender, which deposits the funds into the lender-held reserve fund. Under SSBCI, approved CAPs are eligible to receive federal contributions to the reserve funds held by each participating financial institution in an amount equal to the total amount of the contributions paid by the borrower and the lender on a loan-by-loan basis. In addition, the following OCSPs are examples of programs eligible to receive funding under the act: Collateral support programs: A Collateral Support Program is designed to enable financing that might otherwise be unavailable due to a collateral shortfall. It provides pledged cash collateral to lenders to enhance the collateral coverage of individual loans. The state and lender negotiate the amount of cash collateral to be pledged by the state. Loan participation programs: States may structure a loan participation program in two ways: (1) through purchase transactions, also known as purchase participation, in which the state purchases a portion of a loan originated by a lender, or (2) by participating in a loan as a co- lender, where a lender originates a senior loan and the state originates a second loan to the same borrower that is usually subordinate to the lender’s senior loan should a default occur. State loan participation programs encourage lending to small businesses because the lender is able to reduce its potential loss by sharing its exposure to loan losses with the state. 
Loan guarantee programs: These programs enable small businesses to obtain a term loan or line of credit by providing the lender with the necessary security in the form of a partial guarantee. In most cases, a state sets aside funds in a dedicated reserve or account to collateralize the guarantee of a specified percentage of each approved loan. The guarantee percentage is determined by the states and lenders but, under SSBCI, may not exceed 80 percent of loan losses. Venture capital programs: These programs provide investment capital to create and grow start-ups and early-stage businesses, often in one of two forms: (1) a state-run venture capital fund (which may include other private investors) that invests directly in businesses, or (2) a fund of funds, which is a fund that invests in other venture capital funds that in turn invest in individual businesses. Direct loan programs: Although Treasury does not consider these programs to be a separate SSBCI program type, it acknowledges that some states may identify programs that they plan to support with SSBCI funds as direct loan programs. The programs that some states label as direct loan programs are viewed by Treasury as co-lending programs categorized as loan participation programs, which have lending structures that are allowable under the statute. OCSPs approved to receive SSBCI funds are required to target small businesses with an average size of 500 or fewer employees and to target support towards loans with an average principal amount of $5 million or less. In addition, these programs cannot lend to borrowers with more than 750 employees or make any loans in excess of $20 million. After their applications were approved, the states entered into Allocation Agreements with Treasury before they received their funds. SSBCI Allocation Agreements are the primary tool signed by Treasury and each participating state and outline how recipients are to comply with program requirements. 
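The CAP reserve-fund arithmetic described above (the borrower and lender together contribute 2 to 7 percent of the loan amount, and the state matches that combined contribution on a loan-by-loan basis) reduces to a short calculation. The sketch below is illustrative; the function name and example figures are assumptions.

```python
# Hypothetical sketch of a CAP reserve-fund contribution under SSBCI.
# Borrower and lender together contribute 2-7 percent of the loan to a
# lender-held reserve fund; the state matches that combined amount.

def cap_reserve_contribution(loan_amount, contribution_pct):
    """Total amount added to the lender-held reserve fund for one loan."""
    if not 2.0 <= contribution_pct <= 7.0:
        raise ValueError("SSBCI CAP contributions must be 2-7 percent of the loan")
    borrower_lender = loan_amount * contribution_pct / 100
    state_match = borrower_lender            # matched on a loan-by-loan basis
    return borrower_lender + state_match     # total reserve-fund deposit
```

For example, a $100,000 loan with a typical 3 percent contribution adds $3,000 from the borrower and lender plus a $3,000 state match, or $6,000 in total, to the reserve fund.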
The act requires that each state receive its SSBCI funds in three disbursements or tranches of approximately one-third of its approved allocation. Prior to receipt of the second and third disbursements, a state must certify that it has expended, transferred, or obligated 80 percent or more of the previous disbursement. Treasury may terminate any portion of a state’s allocation that Treasury has not yet transferred to the state within 2 years of the date on which its SSBCI Allocation Agreement was signed. Treasury may also reduce, suspend or terminate a state’s allocation at any time during the term of the Allocation Agreement upon an event of default under the agreement. Under the act, states are required to submit quarterly and annual reports on their use of SSBCI funds. All SSBCI Allocation Agreements will expire on March 31, 2017. In response to our previous recommendation on SBLF compliance procedures, Treasury has developed procedures for monitoring SBLF participant compliance with legal and reporting requirements. Treasury has also issued compliance standards for SSBCI and procedures to review states’ annual reports. The standards provide the participating states with best practices for reviewing borrower and lender compliance with SSBCI’s legal and policy requirements. We recommended in December 2011 that Treasury should finalize procedures for monitoring SBLF participants, including procedures to better ensure that Treasury is receiving accurate information on participants’ small business lending. In response to the recommendation, Treasury officials told us they had written compliance procedures in March 2012 and finalized compliance procedures on September 28, 2012, for monitoring participant conformance with program terms, including documentation requirements, certification requirements, and other requirements under the Securities Purchase Agreement. 
In addition, according to Treasury officials, SBLF compliance procedures include a review of the Quarterly Supplemental Reports (quarterly reports) for accuracy to monitor that the dividend or interest rates paid by the institutions are correct. As mandated by the act, Treasury requires each SBLF participant to submit two annual certifications: (1) Any businesses receiving a loan from an SBLF participant using SBLF funds must certify to the institution that the principals of the business have not been convicted of a sex offense against a minor. Under the Securities Purchase Agreement, annually until redemption, the SBLF participant is required to provide the certifications to Treasury that businesses receiving loans from the bank have certified that their principals have not been convicted of a sex offense against a minor. (2) Each SBLF participant must certify that it is in compliance with the requirements of the Customer Identification Program, which is intended to enable the bank to form a reasonable belief that it knows the true identity of each customer. In addition to these certifications, Treasury requires, through the Securities Purchase Agreement, that SBLF participants meet certain additional conditions and certifications, such as the bank’s Chief Executive Officer and Chief Financial Officer attesting to the accuracy of the bank’s Call Report and certifying to Treasury that information provided on each supplemental quarterly report is complete and accurate. Treasury developed a compliance monitoring tool for verifying the proper certification submission by SBLF participants. The tool is a set of spreadsheets Treasury uses to track the receipt of documents from SBLF participants, as required by the Securities Purchase Agreement, including annual financial statements, independent auditor certifications, and executive officer certifications. An important SBLF compliance focus is the review and monitoring of the quarterly reports. 
Each SBLF participant is required to correctly calculate its quarter-end adjusted small business lending baseline and the qualified small business lending for that quarter. The quarterly reports are the primary source on which Treasury bases its Use of Funds Report of qualified small business lending and the dividend or interest rate paid by the SBLF participants. The quarterly reports are forms in which the SBLF participants calculate their qualified small business lending for the quarter and the resulting dividend or interest rate. The dividend or interest payment depends on the growth or the decline of qualified small business lending. Thus, if the baseline or the qualified small business lending is incorrectly calculated, Treasury will not receive accurate dividend or interest payment amounts. According to Treasury documentation, Treasury will review the following elements in the quarterly reports: a certification of accuracy by the institution’s executives (including Chief Executive Officer, Chief Financial Officer, and all directors or trustees who attested to the Call Report); independent auditor certification; real-time validation of the calculations for the quarterly reports; analysis of the quarterly reports; and explanation letters and auditor attestations if the quarterly report is a resubmission. According to Treasury officials, the review performed by SBLF compliance staff is primarily to identify discrepancies between data on the quarterly reports and the Call Reports. According to Treasury staff, they use a system that allows staff to monitor discrepancies or errors and follow up with participants. Treasury staff review participants’ quarterly reports to identify any potential errors or missing information. Staff compare the quarterly report submissions to the Call Reports to check for discrepancies for the same period. 
According to Treasury officials, staff also compare quarterly reports to prior Call Reports to check for errors in reported changes in loan balances and net charge-offs and apply statistical tests, such as a comparison of government guaranteed lending amounts in the quarterly reports to lending figures publicly reported by the Small Business Administration. Treasury staff said they use a verification check for arithmetic errors in calculating the adjusted baseline exclusions and qualified small business lending. Treasury follows up with institutions to address identified issues and errors and requests resubmission of corrected quarterly reports, as appropriate. Treasury has also responded to the findings and recommendations of Treasury’s Office of Inspector General (OIG). In August 2012, Treasury’s OIG reported on a small judgmental sample of 10 initial supplemental reports submitted by SBLF participants. To establish initial dividend rates, SBLF participants completed the initial supplemental reports using small business lending data from their quarterly Call Reports and loan records and submitted them to Treasury. The OIG reviewed the calculations for the small business lending baseline and the initial dividend rate payment and found errors in 8 of the 10 reviewed reports. OIG’s recommendations included the following: follow up with the 8 banks where errors were identified and determine whether corrected initial supplemental reports and quarterly reports should be submitted and make the necessary adjustments to dividend rates for the banks, as appropriate; notify all SBLF participants about the types of errors identified by this audit to help prevent similar errors from occurring in the future; and ensure that the October 2012 Use of Funds Report contains corrections for errors identified by this audit. 
Treasury agreed with the OIG’s recommendations and commented that it would review the identified errors with each institution and direct these institutions to resolve any errors in the third quarter of 2012, including resubmitting corrected initial and quarterly supplemental reports, as appropriate. Further, Treasury conducted training webinars in July and August 2012 to address common errors identified in their reviews of quarterly report submissions. According to Treasury officials, they completed the review of the eight banks where quarterly report errors were identified and banks resubmitted quarterly reports as appropriate. Two banks submitted revised reports identifying a combined total of $258.00 in overpayments to Treasury. Treasury has developed SSBCI Policy Guidelines and compliance standards for participating states to follow in implementing their state small business programs using SSBCI funds. According to Treasury officials, primary oversight of the use of SSBCI funds is the responsibility of each participating state. The participating states we interviewed viewed their responsibility as monitoring SSBCI lender and borrower compliance with program requirements. Under the act, specific lender and borrower assurances and certifications must be delivered before a transaction is enrolled in the participating state’s approved program. For example, borrowers must provide assurance that proceeds will be used for an eligible business purpose and that the borrower is not an executive officer, director, or principal shareholder (or a member or the immediate family or a related interest of such individual) of the lender. Similarly, lenders must submit certifications to the participating state providing assurance that, for example, the loan is not a refinancing of a loan previously made to that borrower by the lender or an affiliate of the lender. 
In addition to these certifications, the act requires that borrowers and the lenders certify that their principals have not been convicted of a sex offense against a minor as such terms are defined in section 111 of the Sex Offender Registration and Notification Act. Eight states we interviewed told us that they reviewed borrower and lender certifications for meeting the legal requirements and assurances before enrolling the loans. In May 2012 Treasury issued the SSBCI National Standards for Compliance and Oversight, which was intended to provide the states with guidance for reviewing, monitoring, and managing compliance. Treasury considers the standards as best practices that the states should adopt or incorporate, as appropriate, into existing procedures. For example, according to the standards, if a participating state delegates to an administrative entity the responsibility to obtain the certifications from individual lenders, the participating state must exercise oversight to ensure compliance. One means of ensuring oversight would be for the participating state to conduct an annual audit of each lender’s transaction files to verify that the use of proceeds certifications are on file and signed by an authorized representative of the lender. As another example of a best practice, the standards recommend that, when overseeing entities that administer the state small business programs, states should perform site visits, require periodic status update reports, or conduct regular conference calls with the administering entity. The participating states we interviewed found the SSBCI National Standards for Compliance and Oversight to be helpful as they were developing their compliance procedures. Three of the nine states already had similar compliance procedures in place for their small business lending and amended their procedures to include SSBCI compliance standards. 
Six states told us that they established or are establishing compliance standards using the SSBCI National Standards for Compliance and Oversight as guidance. According to state officials of the nine states we interviewed, as part of their procedures, staff reviewed the borrower and lender documentation for compliance. Under the act, SSBCI participants are subject to two reporting requirements: annual reports and quarterly reports. As part of its responsibilities for overseeing the use of SSBCI funds, Treasury is planning to conduct a review of the Annual Report data submitted to it by the states. Under the act, SSBCI participants are to submit to Treasury an Annual Report no later than March 31 of each year. The data included transaction-level data for each loan or investment made using SSBCI funds for that year; the number of borrowers that received new loans originated under the approved state program; the total amount of such new loans; breakdowns by industry type, loan size, annual sales, and number of employees of the borrowers that received such new loans; the zip code of each borrower that received such a new loan; and other data that the Secretary may require to carry out the purposes of the program. As part of its review of the 2012 Annual Report data, Treasury plans to review a sample of loans and investments for the appropriate documentation of borrower and lender assurances and certifications and for data accuracy. To conduct this review, SSBCI staff designed an evaluation form to review the certifications and the Annual Report data. SSBCI participants are required to submit their 2012 annual data to Treasury by March 31, 2013. The loans or investments will be reviewed for the following assurances and certifications: Each lender or investor that has received credit support for a particular transaction has at least 20 percent of its own capital at risk unless Treasury has waived this requirement. 
Signed borrower and lender use-of-proceeds certifications have been provided, and the borrower/lender signature block matches the borrower on the loan documents. Signed borrower and lender sex offender certifications have been provided. In the data accuracy review, Treasury plans to verify a sample of SSBCI Annual Report data submitted by the states with the actual loan or investment documentation. The types of data that Treasury intends to verify include the following: date of disbursement for the loan or investment; borrower’s annual revenue and the year of business incorporation; enrolled loan amount and any public subsidy associated with the enrolled loan or venture capital investment; SSBCI federal contribution to CAP loan; and amount the state had contributed to a loan participation, loan guarantee, or loan collateral program. Treasury also intends to verify that the amount of subsequent private financing matches the documentation provided and that the documentation supports the relationship between the SSBCI loan program and the private financing. The states are required to submit to Treasury a quarterly report on the use of SSBCI funds during the previous quarter. Under the act, states are required to report the total amount of federal funding used and to certify that the information provided is accurate and that the state is implementing its approved programs in accordance with the act and the regulations or other guidance issued by Treasury. As part of the Allocation Agreements, Treasury also requires states to submit reports on the total amount of allocated funds used for administrative costs, the amount of program income generated, and the amount of charge-offs against the federal contributions to the reserve funds. Treasury conducts a more limited review of the SSBCI quarterly reports compared to the Annual Report. Specifically, Treasury staff conduct checks on the administrative costs to ensure that the costs do not exceed the statutory caps. 
In addition, staff verify that the amount of funds used does not exceed the amount allocated to the state and that the state official signing the SSBCI quarterly reports is authorized to do so. According to Treasury officials, they would not approve a new disbursement of funds if they had substantial evidence that a state’s compliance with SSBCI program requirements was inadequate. When a participating state requests a disbursement of funds, according to Treasury staff, they will conduct a pre-disbursement review. In addition to confirming that the participating state has expended, obligated, or transferred 80 percent of its previous disbursement, Treasury staff review the results of Treasury’s SSBCI compliance monitoring. According to Treasury documentation, this review will include a review of a sample of transactions in which SSBCI funds were used; a review of financial audits, if submitted; the review of the quarterly reports and if available, the annual reports for accuracy and completeness; and the review of any of the states’ compliance activities or records that would indicate whether a participating state had failed to comply with any program requirements. As of June 30, 2012, SBLF participants had increased their business lending over the baseline from 2010. For SSBCI, Treasury had transferred to the states nearly one-third of the program’s $1.5 billion in total funding as of June 30, 2012. States had used about $154 million (about 10 percent) of these funds through a variety of programs. States had received and used funds at differing levels, but some states were concerned that Treasury may take actions to suspend disbursements after participants have been in the program for more than 2 years. Treasury has the authority to terminate disbursements to SSBCI participants who have not met the requirements to receive their full allocation within 2 years of having been accepted into the program. 
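The disbursement gating described above combines two checks: a state must have expended, obligated, or transferred at least 80 percent of its previous disbursement, and its cumulative funds used may not exceed its allocation. A minimal sketch, with names and structure that are illustrative assumptions rather than Treasury's actual review procedure:

```python
# Hypothetical sketch of the SSBCI pre-disbursement checks described
# above. Treasury's actual review also covers compliance monitoring,
# sampled transactions, and report accuracy; this sketch covers only
# the two arithmetic conditions.

def ready_for_next_disbursement(prev_disbursement, used_or_obligated,
                                total_used, total_allocation):
    """True if the state meets the 80-percent rule and has not exceeded
    its allocation."""
    if total_used > total_allocation:
        return False                              # cannot exceed allocated amount
    return used_or_obligated >= 0.8 * prev_disbursement
```

For example, a state that has obligated $400,000 of a $500,000 first tranche meets the 80-percent threshold exactly, while one that has obligated $300,000 does not.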
Treasury has not yet developed a policy that reflects how it will use this authority even though this 2-year period will end for most states sometime in 2013. Treasury officials stated that they do not plan to use this authority at this time and that Treasury will provide all participants with sufficient lead time so that they can modify or adjust their programs, as necessary. According to Treasury, SBLF participants have increased their qualified small business lending by $6.7 billion over their $36.0 billion baseline, as of June 30, 2012. This number includes a $1.5 billion increase over the prior quarter. Further, Treasury reported that 89 percent of participants had increased their qualified small business lending over baseline levels and about 76 percent of participants had increased their qualified small business lending by 10 percent or more. As previously discussed, SBLF uses a dividend or interest rate incentive structure to encourage participating institutions to increase qualified small business lending. SBLF participants paid an average dividend or interest rate of 2.1 percent on their SBLF funds as of June 30, 2012. Over half of SBLF participants paid a dividend or interest rate of 1 percent on their SBLF funds— because their qualified small business lending growth was 10 percent or higher—and 15 percent of institutions paid 5 percent or more (see fig. 1). SBLF participants also showed increases in small business loans under $1 million, as well as total business lending. While the Small Business Jobs Act set the threshold for qualified small business lending at $10 million, depository institutions are required to submit Call Reports with detailed financial information including small business lending, which the reports define as loans under $1 million. 
Such data are useful for comparing certain small business lending of SBLF participants with that of institutions that did not participate in SBLF. Total business lending—which includes all business loans, including loans over $10 million and those to businesses with over $50 million in revenue—can also help illustrate differences in lending activity between these two groups. Treasury uses total business lending in its reporting to compare SBLF participants to non-SBLF institutions and noted that qualified small business lending makes up a large part of total business lending for SBLF participants. For example, qualified small business lending totaled 95 percent of total business lending for the median SBLF participant as of December 31, 2011. SBLF participants increased both small business loans under $1 million as well as total business lending. In particular, the median SBLF participant had a 31 percent increase in total business lending for the quarter ending June 30, 2012, over the baseline level. The median SBLF participant had a 14 percent increase for small business loans under $1 million over the same period. When categorizing SBLF participants by the changes in their lending, the SBLF participants fell into the higher growth categories for total business lending, but were more evenly distributed for small business loans under $1 million except for participants whose lending increased over 40 percent (see fig. 2). About half of SBLF participants used their program funds to repay and exit TARP’s CPP. These CPP refinance participants had noticeably lower lending growth than SBLF participants that did not participate in CPP (see fig. 3). In particular, CPP refinance participants increased small business loans under $1 million by 5 percent compared with 33 percent for non-CPP participants. For total business lending, CPP refinance participants saw increases of 17 percent compared with 45 percent for non-CPP participants. 
Treasury officials said that one possible reason for this difference is that CPP refinance participants were only eligible for a limited amount of incremental SBLF funds, beyond the amount of CPP funds refinanced. As a result, unlike other SBLF participants, these institutions did not receive as much “new” capital to increase small business lending. Nevertheless, all SBLF participants are subject to the same incentive structure based on the dividend or interest rate. Furthermore, Treasury officials also noted that in many instances the CPP refinance participants may have already experienced an increase in lending from the CPP capital they originally received. As of June 30, 2012, Treasury had transferred $468 million in SSBCI funding to the states, representing about one-third of the $1.5 billion that was set aside for the program. States had used $150 million of these funds—about 10 percent of the program total—disbursing them to lending institutions through a variety of programs. Loan participation programs accounted for 47 percent of the funds used, as of June 30, 2012, followed by venture capital programs (28 percent), collateral support programs (17 percent), and loan guarantee programs (6 percent), as shown in figure 4. The remaining program categories—capital access programs, direct lending, and other—combined for the remaining 2 percent of funds used. Participating states have received and used SSBCI funds at differing levels, partially because of when applications were approved and funds were allocated (see fig. 5). Of the 53 states, territories, or municipalities that received SSBCI funding, 47 had used a proportion of their funds as of June 30, 2012. Montana had the highest proportion used of the amount that Treasury had allocated, as of June 30, 2012. 
States we interviewed said that disbursing funds was much faster for state programs that were in existence before SSBCI because the infrastructure was already in place and lenders were already familiar with the programs. Moreover, some states implementing new programs told us that it could take time to use the funds because they had to conduct extensive outreach to lenders to make them aware of the programs and encourage them to commit to small business lending. Under the act, the Secretary may revoke any portion of a participating state's allocated amount that has not been transferred to the state by the end of the 2-year period beginning on the date the state received approval, but Treasury has not developed a written policy on how it will use this authority. For most of the participating states, this 2-year period will end sometime during 2013, but it remains unclear whether all of them will be able to use their funds in time to obtain the third and final disbursement within this time frame. This time frame is quickly approaching for five states (California, Hawaii, Missouri, North Carolina, and Vermont) that signed their Allocation Agreements with Treasury before May 2011. Under their allocation agreements, the 2-year time frame for 39 states will end by September 30, 2013. As of November 16, 2012, according to Treasury, 10 states (Idaho, Indiana, Kansas, Michigan, Missouri, Montana, North Carolina, South Carolina, South Dakota, and Washington) had requested and received their second disbursement; eight states (Arkansas, Delaware, Florida, Louisiana, Massachusetts, New Hampshire, New Jersey, and West Virginia) had requested their second disbursement but had not yet received it; and one state, Montana, had requested and received a partial third disbursement. The remaining 38 SSBCI participants were still working to use their first disbursement, as of November 16, 2012.
Some states told us that the 2-year time frame is short for disbursing SSBCI funds, especially for states with new state small business programs. One state official told us that because their programs are relatively new and lending institutions are unfamiliar with them, the 2-year time frame is too tight for lenders to make informed decisions about participating in the program. Similarly, officials from two states told us that the 2-year time frame for disbursing the SSBCI funds is short because their state small business programs were newly created. According to Treasury officials, Treasury is aware of the 2-year time frame and the potential concerns of the states. After reviewing the law, Treasury officials told us that the Secretary has discretion over whether to revoke the undisbursed allocation if it has not been transferred to a participating state as of the 2-year anniversary. Treasury officials told us that they have not drafted a policy or procedures on what actions they may take if the states miss the 2-year time frame for their final disbursement of funds. However, they told us that the states were encouraged to describe in their applications how they would disburse the funds within the 2-year time frame and that they advised the states of the importance of meeting the 2-year time frame. Moreover, they said that they do not consider the 2-year time frame to be a requirement that funds not yet transferred must be deemed unavailable at that time. At an October 2012 conference attended by many SSBCI participants, according to Treasury staff, the Deputy Assistant Secretary for Small Business, Community Development, and Affordable Housing Policy told the participants that Treasury did not currently plan to exercise this authority in the near future. However, these statements are not documented in a formal written policy explaining Treasury's position.
Treasury staff told us that when Treasury develops a policy on its discretionary authority, it will provide all participants with sufficient lead time so that they can modify or adjust their programs, as necessary. Treasury officials told us that the purpose of the Deputy Assistant Secretary's conference announcement was to address the concern and clarify that Treasury would not be taking action at this time if an SSBCI participant had not met the 2-year requirement and to affirm that Treasury retains its discretionary authority going forward. In prior work, we have recommended that when states are required to spend federal funds to meet a statutory deadline or specific program requirements, agencies should provide guidance to the states on what they should expect if they are unable to meet the deadline. The act provides Treasury with discretionary authority to encourage the states to use the funds in a timely manner, but without a formal written policy, it is unclear how Treasury would apply this authority consistently. Having clear guidelines on how Treasury plans to use its discretionary authority to terminate funds could help ensure consistent application of the authority. In addition, such guidelines could help states understand the need to use the funds in a timely manner while meeting program requirements and could provide clarity to states about the associated consequences of not meeting the 2-year time frame. Treasury has established performance measures to manage its programs but could enhance its public reporting of program performance information. In its Use of Funds Report, Treasury compared business lending by SBLF participants to that of non-SBLF institutions, but the report does not disclose Treasury's rationale for choosing its comparison group over other possibly more representative alternatives. Treasury officials told us that they are continuing to consider different approaches for evaluating SBLF.
In addition, Treasury has designed SSBCI timeliness and outcome performance measures but has not made this information publicly available. Treasury officials are considering different options for presenting this information and said they plan eventually to make some of it public. However, Treasury has not made any decisions on the specific SSBCI performance information that it might publicly release. Treasury has also taken actions to enhance its communications with SBLF and SSBCI program participants, such as dedicating staff to assist with participants' inquiries. Our review found that SBLF participants had noticeably higher changes in lending rates when compared to similar non-SBLF institutions, but that Treasury's methods for analyzing SBLF participants' lending may somewhat overstate differences between SBLF participants' lending and that of other eligible banks. In our December 2011 report on SBLF, we recommended that Treasury finalize plans for assessing the performance of the SBLF program, including measures that can isolate the impact of SBLF from other factors that affect small business lending. Treasury officials explained to us that they explored different comparison methods that more closely mirror SBLF participants, but this information is not disclosed in its Use of Funds Report to Congress. In its Use of Funds Report, Treasury compared total business lending by SBLF participants to that of a comparison group of non-SBLF institutions and found that SBLF participants had noticeably higher increases in total business lending. In its analysis, Treasury adjusted the comparison group for a number of factors, including an institution's asset size and geography, thereby excluding institutions that fell outside the asset size range of SBLF participants and that were headquartered in states that did not have an institution participating in SBLF. These adjustments were a helpful step in understanding the possible effects of SBLF funding.
However, Treasury did not adjust its comparison group to better ensure that its distribution among various asset sizes and states mirrored that of SBLF participants. Moreover, Treasury did not adjust its comparison group to account for differences in financial health despite requiring SBLF applicants to demonstrate a certain degree of financial health before approving them for funding. For example, the act specifically restricted Treasury from accepting applications from institutions that were on or recently removed from the FDIC problem bank list. Because the comparison group did not exclude such institutions that were unable to qualify for SBLF funding, these institutions may have downwardly skewed the group's small business lending growth rate, thus causing Treasury's results to overstate the implied effect of the program. As a result, Treasury's analysis seemingly links SBLF funding to the increase in small business lending when that increase, to some extent, may have been associated with the factors mentioned above or other factors such as improved local economic growth. In our own analysis, we constructed a peer group that adjusted for financial health, using the Texas Ratio as a proxy. The Texas Ratio is defined as nonperforming assets plus loans 90 or more days past due, divided by tangible equity and reserves; it helps determine a bank's likelihood of failure by comparing its troubled loans to its capital. Because SBLF funding increases the equity portion of the ratio, we used Texas Ratios as of March 31, 2011, the last quarter preceding the initial disbursements of SBLF funding. Even with this adjustment, SBLF participants showed higher lending growth. That is, growth rates of SBLF participants remained noticeably higher than those of our peer group. This growth could indicate a beneficial effect of SBLF funding on lending, or it could be due to other factors, including differences between SBLF participants and our peer group for which we were not able to adjust.
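The Texas Ratio definition given above reduces to simple arithmetic. A minimal sketch, with hypothetical bank figures:

```python
def texas_ratio(nonperforming_assets: float,
                loans_90_days_past_due: float,
                tangible_equity: float,
                loan_loss_reserves: float) -> float:
    """Troubled loans relative to capital, per the report's definition:
    (nonperforming assets + loans 90+ days past due) /
    (tangible equity + loan loss reserves).
    Higher values indicate a greater likelihood of failure."""
    troubled = nonperforming_assets + loans_90_days_past_due
    capital = tangible_equity + loan_loss_reserves
    return troubled / capital

# Hypothetical bank, all figures in $ millions:
# (30 + 10) / (80 + 20) = 0.40
ratio = texas_ratio(30.0, 10.0, 80.0, 20.0)
```

A ratio approaching 1.0 (troubled loans equal to capital) is conventionally read as a warning sign, which is why measuring it before the initial SBLF disbursements avoids having the program's own capital injection flatter the denominator.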
When categorizing institutions by the level of change in their business lending, SBLF participants were more heavily concentrated in the higher growth categories compared with the peer and comparison groups (see fig. 6). Moreover, the median SBLF participant had a 31 percent increase in total business lending, compared with a 2 percent increase for the comparison group and a 6 percent increase for the peer group. Further, SBLF participants had a higher median growth rate of total business lending than both our peer group and Treasury's comparison group in all six geographical regions (see fig. 7). Moreover, the peer group had higher rates of growth than the comparison group in five of the six regions. SBLF participants also had a higher median growth rate of total business lending across all five asset size categories (see fig. 8). Again, the peer group's growth rate was slightly closer to that of SBLF participants than the comparison group was for all five asset groups, yet it remained well below it. Moreover, SBLF participants in the larger asset categories had lower growth rates in total business lending. However, the peer and comparison groups had no noticeable trend across different asset size groups. In addition, the peer and comparison groups were closest to SBLF participants among institutions with assets over $1 billion. Treasury officials said that in determining the comparison group to use in their analysis, they analyzed distributional differences in asset size and geography between the groups, as well as some indicators of financial health. They judged that the differences in the variables they analyzed were modest and believed that adjusting for these differences—that is, making the comparison group more representative of SBLF participants—would only provide a limited benefit while making the analysis less transparent and more difficult for others to replicate.
They were also concerned that using what they considered to be a more judgmental approach, such as selecting a peer group, would require certain arbitrary decisions, which might raise concerns about the validity of their selection criteria. As a result, Treasury determined that the differences found in their analyses did not warrant an approach that would adjust for these factors. In addition, although Treasury officials told us they considered but decided against using a comparison group that would have been adjusted to more closely mirror SBLF participants, they did not explain this decision in the methodology section of the Use of Funds Report. In prior work on another Treasury program, we said that Treasury should enhance its communications relating to financial assistance so that they are transparent to the Congress and the public. Without disclosing its rationale for choosing its comparison group over other possibly more representative alternatives, Treasury may not be providing policymakers with a full understanding of its approach and may not be transparent regarding the potential for its analysis to overstate the effects of SBLF. Treasury's comparison group analysis in its Use of Funds Report also does not isolate the impact of SBLF relative to other factors affecting small business lending to the extent that other approaches would. While a comparison group is an important step and provides useful context, a more rigorous analysis of peer banks to help assess what might have happened without SBLF, as our 2011 report on SBLF recommends, may help Treasury better understand the effects of the program. Our prior work on program evaluation suggests that a carefully constructed control group should be as similar to program participants as possible to help identify the impact of a program, and a number of statistical methods can help account for differences.
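One statistical method of the kind mentioned above is nearest-neighbor matching: pairing each participant with the most similar non-participant on observable characteristics. The sketch below is illustrative only; the matching variables, the log transform, and the equal weighting are assumptions for the example, not the approach used by Treasury or GAO.

```python
import math

def nearest_peer(participant: dict, candidates: list) -> dict:
    """Return the candidate bank closest in (log) asset size and Texas Ratio."""
    def distance(a: dict, b: dict) -> float:
        size_gap = abs(math.log(a["assets"]) - math.log(b["assets"]))
        health_gap = abs(a["texas_ratio"] - b["texas_ratio"])
        return size_gap + health_gap  # equal weights, purely illustrative
    return min(candidates, key=lambda c: distance(participant, c))

# Hypothetical banks: assets in $ millions, Texas Ratio as of a common date.
participant = {"assets": 500.0, "texas_ratio": 0.15}
candidates = [
    {"assets": 5000.0, "texas_ratio": 0.10},  # healthy but much larger
    {"assets": 450.0, "texas_ratio": 0.20},   # similar size and health
    {"assets": 480.0, "texas_ratio": 0.90},   # similar size, but troubled
]
peer = nearest_peer(participant, candidates)  # selects the 450.0 bank
```

Repeating the matching under different weights or variable sets is one way to show how sensitive the resulting peer group, and any growth comparison built on it, is to those choices.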
Concerns about making arbitrary judgments in the selection of peers could be addressed by conducting a sensitivity analysis—a best practice also identified by the Office of Management and Budget—which involves varying assumptions to determine how sensitive results are to changes in those assumptions. See GAO, Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions, GAO-10-30 (Washington, D.C.: Nov. 23, 2009) and Designing Evaluations: 2012 Revision, GAO-12-208G (Washington, D.C.: Jan. 2012). Separately, Treasury developed a survey of SBLF participants that asked about topics including: approving applications for small business loans or credit lines; the demand for small business loans; the participant's practices regarding approvals of loans and lines of credit for small business; use of SBLF funding or the type of actions the institution has taken because of SBLF funding; and outreach activities to minority, women, and veteran communities. Treasury also leveraged the Federal Reserve's Senior Loan Officer Opinion Survey on Bank Lending Practices as it developed questions for the survey and is exploring how it may analyze results from both surveys to assess SBLF. Responses were due from the SBLF participants by October 4, 2012. Treasury plans to issue the results in a report at a later date. In our December 2011 SSBCI report, we recommended that Treasury develop and finalize SSBCI-specific performance measures for evaluating the effectiveness of the program and, when developing these measures, consider key attributes of successful performance measures. In response to the recommendation, Treasury developed measures for both the timeliness of program administration and program performance.
In establishing measures on timeliness, Treasury considered its own role in administering the program, which includes evaluating the eligibility of the participating states and approving state programs; overseeing compliance with the provisions of the act, the SSBCI policy guidelines, and the terms and conditions of the Allocation Agreement; and providing ongoing technical assistance for each state's and municipality's program implementation. According to Treasury, the timeliness measures will assess the quality of the direction provided by Treasury to the states, including the efficiency of Treasury's administration of program resources and program oversight. The goals for these measures are that 90 percent of requests for modifications to Allocation Agreements are approved or rejected within 90 days of receiving a final submission, 90 percent of requests for subsequent disbursements under existing Allocation Agreements are approved or rejected within 90 days of receipt of a formal submission, and 90 percent of quarterly reports are received within 5 days of the deadline. According to Treasury staff, for the first two goals, the measurement period starts once Treasury has received all documentation required by the established procedures for each underlying activity from the state requesting a modification or disbursement. Treasury staff advised us that these measures are tracked continuously and that Treasury reports the 12-month data to the Office of Management and Budget annually as part of SSBCI's annual budget submission, which should be publicly available. In addition, Treasury has developed measures for evaluating performance for SSBCI: the amount of SSBCI funds used over time, as reported in SSBCI quarterly reports; the volume and dollar amount of loans or investments supported by SSBCI funds, as reported in SSBCI annual reports; the amount, in dollars, of private-sector leverage, as reported in SSBCI annual reports; and the estimated number of jobs created or retained, as reported in SSBCI annual reports.
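A timeliness goal such as "90 percent of requests approved or rejected within 90 days" can be checked directly against observed processing times. A minimal sketch, using hypothetical request data:

```python
def share_within_deadline(processing_days: list, deadline: int = 90) -> float:
    """Percent of requests resolved within the deadline (in days)."""
    met = sum(1 for d in processing_days if d <= deadline)
    return 100.0 * met / len(processing_days)

# Hypothetical processing times (days) for ten disbursement requests;
# 8 of 10 fall within the 90-day window, so the measure is 80 percent,
# short of the 90 percent goal.
days = [30, 45, 60, 75, 80, 85, 88, 90, 95, 120]
share = share_within_deadline(days)
```

As the report notes, the clock for the first two goals starts only once Treasury has received all required documentation, so the processing times fed into such a measure would exclude time spent waiting on incomplete submissions.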
Although Treasury has established measures for SSBCI performance, Treasury is considering how it will use these program performance indicators for evaluating the overall progress of SSBCI. Treasury staff recognized that performance indicators can help policymakers understand the results of the policy, but they emphasized that they do not have a full year of SSBCI data to use in evaluating the program. Many states did not receive their first SSBCI allocation until late 2011, and thus Treasury had limited data to evaluate SSBCI. For example, Treasury told us that only 23 states reported using SSBCI funds to support small business loans or investments as of December 31, 2011. Treasury officials told us that after they have received the 2012 annual report data in early 2013, which would constitute a full year of SSBCI funds for almost all participants, they will be able to decide how they will review and analyze the performance measures going forward. In addition, Treasury explained that SSBCI's performance cannot be evaluated using a single number or performance indicator because SSBCI consists of 140 different programs, and most states have multiple small business programs. For instance, Treasury has not set a target for the estimated number of jobs created or retained because so many factors can determine the use of funds—for example, the degree of interest by financial institutions and private investors, the performance of the state agency and any contractors that operate the approved program, and the effectiveness of the program features designed by the state. How this program activity affects the level of employment in a state introduces many variables that can be difficult to predict. According to Treasury officials, specific numeric indicators, such as the number of loans resulting from state business programs, may or may not be indicative of the performance of SSBCI.
In analyzing performance outcomes for SSBCI, Treasury staff advised us that outcomes are highly dependent on factors outside of the program’s control, such as the demand for credit in a given locality and the quality of the small business borrowers’ requests for such funds. Also, the states have different economies that may affect the results of the SSBCI funds. For example, Michigan’s SSBCI funds are more concentrated in manufacturing, while other states may be more focused on providing assistance to small technological firms. In contrast to SBLF, the act does not require Treasury or the states and municipalities to report to Congress or the public on the status of SSBCI. Rather, the act requires that SSBCI participants include certain data, such as the number and the dollar amounts of the loans resulting from SSBCI funds, in annual reports to Treasury. Treasury’s performance measures will rely on the data from these annual reports. Treasury officials told us that they are considering making public some of the SSBCI performance data, but have not decided what specific SSBCI information will be released publicly or how it will be presented because they want to make sure the information reflects the outcomes in an appropriate context. As noted earlier, SSBCI covers a large number of programs across the country and other factors, such as local demand for credit, could lead to different performance outcomes across the participating states. Officials told us they plan to decide after they receive and review the 2012 annual reports. The GPRA Modernization Act (GPRAMA) requires agency performance information to be publicly available. In reporting on the governmentwide implementation of GPRAMA in 2011, we noted that agencies need to consider the differing needs of various stakeholders, including Congress, to ensure that performance information will be both useful and used. 
We reported that federal officials must understand how the performance information they gather can be used to provide insight into the factors that impede or contribute to program successes; to assess the effect of the program; or to help explain the relationships between program inputs, activities, outputs, and outcomes. Information on SSBCI’s performance measures regarding the amount of small business loans or investments and the amount of private leveraging resulting from SSBCI funds would provide Congress and SSBCI participants with useful information on the progress of SSBCI and its effectiveness in increasing small business lending. For example, two states told us that they would like more information on the performance measures of the other states’ programs in order to better implement their own programs. Making the 2012 performance outcome data publicly available may assist the participating states in identifying successful small business state programs and the level of private leveraging that the states have achieved at this point in the SSBCI program. SSBCI applications were required to demonstrate a reasonable expectation that the programs would achieve a 10:1 ratio of new small business lending to SSBCI funds within specified timeframes. Information on the progress of SSBCI programs may help participating states to make necessary adjustments to their programs to more efficiently and effectively use their entire allocation of SSBCI funds. Treasury has taken steps to address our December 2011 recommendation that it apply lessons learned from the SBLF application review process in order to improve how it communicates with program participants and other stakeholders, such as the bank regulators and Congress. In response to the recommendation, Treasury officials told us that they have enhanced their communication strategy with SBLF participants and stakeholders and that they are better positioned to respond to questions about SBLF. 
Shortly after the application review and approval period ended, Treasury assigned points of contact for each of the SBLF participants. Each point of contact was responsible for responding to inquiries from a designated group of participants and generally helping to ensure that the participants understood the compliance and reporting requirements. As the volume of inquiries has declined, Treasury has shifted to a more centralized approach for handling inquiries. For example, all inquiries from SBLF participants are submitted to a centralized e-mail system, and they are then assigned to the staff responsible for (1) compliance, (2) investment management, and (3) operations. Compliance staff address questions about the Securities Purchase Agreements, the quarterly reports, and the reporting of qualified small business lending and the investment rates paid by SBLF participants. Investment management staff respond to inquiries relating to acquisitions and mergers, and operations staff handle questions about redemption of SBLF shares and dividend payments. In addition, Treasury has assigned a staff member to handle external communications with Congress, the media, and the general public, including the reporting of qualified small business lending and the investment rates paid by SBLF participants. According to Treasury officials, they also communicate with industry and trade associations. Other communication methods established by Treasury included webinars for instructing SBLF participants on completing the quarterly reports. Treasury staff told us that the purpose of the webinars was to reduce the number of errors in the quarterly reports. In addition, on September 28, 2012, Treasury finalized written procedures to provide guidelines for answering inquiries to provide for consistency, continuity, and validity in communications with SBLF participants and their representatives.
The guidelines describe the process by which a contact manager or staff member will communicate with SBLF participants. The process steps include the tracking and handling of incoming inquiries, outgoing mass communications, periodic reviews by business lines for potential Frequently Asked Questions, and the control manager's reviews of control effectiveness. The procedures outline the communication roles and responsibilities of SBLF employees, the contact manager, and management. SSBCI has also developed communication mechanisms to assist states in developing and implementing their state small business programs. Treasury has assigned three relationship managers whose role is to work with an assigned group of states in successfully allocating the funds to lenders and subsequently to borrowers. Moreover, Treasury has assigned a consultant for three states that requested additional technical expertise in implementing their small business programs. Additionally, according to Treasury officials, Treasury has engaged a consultant to assist in educating lenders nationwide about the approved state programs and two consultants with expertise in state-run venture capital to support SSBCI staff in providing technical assistance to state program managers. In addition to the relationship managers and consultants, Treasury has held two conferences for communicating with SSBCI participants. Under the act, Treasury is generally required to disseminate best practices to the states, and Treasury staff view the conferences as one method of doing so. The SSBCI National Standards for Compliance and Oversight are another example of disseminating best practices. According to Treasury staff, conferences provide state officials with the opportunity to discuss their programs with peers that are running similar programs and can potentially make modifications to their applications.
During the March 2012 conference, states received information on the different types of small business programs, lenders, and Treasury assistance. The conference agenda showed that several panels were held. Generally, the panels consisted of state officials, who discussed their small business programs, such as the Loan Participation Program and the Venture Capital Program. In addition, four banks participated in the panels. Training sessions were held during the conference on the National Compliance Standards, on requests for modifications to the Allocation Agreements, and on subsequent disbursement requests of SSBCI funds. Officials from two states we interviewed told us that they found the March 2012 conference helpful. For example, one official stated that she found the conference assisted her in answering questions on compliance and on SSBCI small business programs. Treasury held a similar conference in early October 2012. SBLF and SSBCI officials have made progress in developing procedures to monitor participants’ compliance. In response to our previous recommendation on SBLF monitoring, Treasury has developed procedures for monitoring SBLF participant compliance with legal and reporting requirements. Treasury also issued the standards for compliance to provide states with best practices for reviewing participants’ compliance with SSBCI’s legal and policy requirements and developed procedures for sampling transaction-level data to evaluate the accuracy of the states’ annual reports. Most SSBCI participants have only received the first of three disbursements of their full allocation approved by Treasury, and some participants were concerned that they may have difficulty using the funds in time to meet the requirements to get their third and final allocation within 2 years. SSBCI participants lack a clear understanding of what actions Treasury plans to take if they do not meet the 2-year time frame. 
Although a Treasury official has publicly indicated that Treasury does not currently plan to exercise the authority to terminate funds that have not been allocated within 2 years from the states’ approval date, it retains the authority to do so in the future. Treasury has yet to develop a formal written policy or guidance explaining its position. Clear and specific guidelines on how Treasury plans to use this authority to terminate funds will help ensure Treasury is consistent in how it applies this authority and may further encourage participants to develop programs and approaches to use the funds in a timely manner. Moreover, such a policy could also facilitate the ongoing communication between Treasury and the participants on how best to allocate and use the funds. Treasury has taken some steps to evaluate the performance of SBLF and the extent to which SBLF participants are increasing their small business lending, but further refinements could provide a better assessment of the effectiveness of SBLF. As we found in our December 2011 SBLF report, Treasury has yet to finalize plans for assessing the performance of the program, including measures that can isolate the impact of SBLF from other factors that affect small business lending. As we found in Treasury’s analysis as well as our own, SBLF participants appear to be increasing their small business lending since entering the program. However, as we recommended in our 2011 report, many factors can contribute to such increases, and Treasury should assess these trends taking other factors into account. While Treasury compared SBLF participants to non-SBLF institutions and reported this analysis in its Use of Funds Report, it did not provide important information on why it selected the comparison group that it used rather than using a peer group more closely matched to the SBLF participants. 
Our own analysis using a peer group showed that SBLF participants had increased their lending compared to peers, but also showed that the difference in small business lending growth was somewhat smaller than what Treasury's analysis suggests. The lack of explanation for Treasury's approach in the Use of Funds Report could create confusion about the rigor of the comparison, and a fuller description of the methodological decisions would enhance the transparency of the information reported. In addition, as we recommended in the 2011 report, Treasury should include in its plans for assessing the program a more robust evaluation that controls for factors that affect small business lending, such as improved local economic growth. Without such an evaluation, policymakers, including Congress, may not have the information they need to assess whether the SBLF approach of using capital injections is a desirable policy option for increasing small business lending. In response to a recommendation we made last year, Treasury has created performance indicators to help monitor and measure the effectiveness of SSBCI. However, Treasury has not yet determined how and when it will make this information public. Treasury officials acknowledged the importance of this information for policymakers and have said they hope to develop a method for sharing this information publicly after they have had time to review the second annual reports that will be completed by the states next year. While we recognize that it is still early in the program and results vary greatly across the program participants for a variety of reasons, performance information is an important tool for policymakers, particularly as Congress reviews and considers programs to assist small businesses going forward. 
In addition, making this information public in a timely manner may help program participants, who could observe how their peers are performing and use this information to help them improve their own programs. We recommend that the Secretary of the Treasury take the following three actions: To help ensure that Treasury is transparent and accountable in its decision making, Treasury should develop a written policy explaining how it will use the Secretary’s discretionary authority to terminate the availability of allocated funds to SSBCI participating states if funds have not been transferred to the participant by the end of the 2-year period beginning on the date that the Secretary approved the state for participation. To enhance the transparency of its reporting on SBLF, Treasury should expand its methodology discussion in its Use of Funds Report to include the rationale for its methodology and alternative methodologies it considered. To provide Congress and the participating states with information on the progress of SSBCI, Treasury should make information publicly available on its performance indicators measuring SSBCI’s performance. We provided a draft of this report to Treasury for review and comment. The Deputy Assistant Secretary for Small Business, Community Development, and Affordable Housing Policy provided written comments, which are reprinted in appendix II. Treasury also provided technical comments on the draft report, which we incorporated as appropriate. In the written comments, Treasury agreed with the three recommendations and stated that it has begun to take steps to implement each of them. Specifically, Treasury said it has begun to develop a written policy for exercising its discretion to terminate any portion of a state’s allocation not yet transferred to the state after two years. 
Treasury said it also will include the rationale for Treasury’s methodology along with alternative methodologies that were considered in the methodology section of the next Use of Funds Report and that work is underway on publishing performance indicators that measure SSBCI outcomes. Treasury noted that the report reflected the progress SBLF and SSBCI had made in setting up compliance procedures and taking steps to improve communication with program participants. Treasury also stated that both programs are working as intended and that it expects both programs to continue to promote lending to small businesses. We are sending copies of this report to the appropriate congressional committees and Treasury. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Daniel Garcia-Diaz at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to examine: (1) the status of the U.S. Department of the Treasury’s (Treasury) efforts to monitor participants’ compliance with program requirements under the Small Business Lending Fund (SBLF) and the State Small Business Credit Initiative (SSBCI); (2) the status of SBLF and SSBCI participants’ small business lending; and (3) the extent to which Treasury evaluates and communicates SBLF and SSBCI program outcomes. To examine the status of Treasury’s efforts to monitor participants’ compliance with program requirements under SBLF and SSBCI, we analyzed Treasury’s documentation. For SBLF, we reviewed and analyzed SBLF’s Participant Compliance Monitoring Procedures, which were issued on September 28, 2012. 
We interviewed Treasury officials on their compliance program and the process by which staff review the Quarterly Supplemental Reports for their accuracy. For SSBCI, we reviewed SSBCI National Standards for Compliance and Oversight and SSBCI Policy Guidelines. We reviewed the Allocation Agreements between Treasury and nine participating states that we interviewed to analyze the conditions and the requirements placed on the states. We interviewed Treasury officials on implementing the SSBCI compliance standards and officials from the states of Colorado, Florida, Georgia, Illinois, Massachusetts, Michigan, New Jersey, Oregon, and Texas. We judgmentally selected these nine states based on the following criteria: (1) inclusion among the 25 states awarded the most SSBCI funds; (2) geographical diversity; (3) states with at least two small business programs; (4) a mix of states that began using funds as of March 31, 2012, and states that had not yet used funds for any loans or investments as of that date; and (5) avoidance of states that had been reviewed previously by GAO or Treasury's Office of the Inspector General. Because a large number of states had not spent their first allocation as of December 31, 2011, we used both the 2011 Annual Report and the Quarterly Report for March 31, 2012, to identify states' progress in allocating their funds. In terms of geographical diversity, we selected at least two states from each of four regions: Midwest, Northeast, South, and West. To determine the status of SBLF, we reviewed the SBLF Use of Funds Reports to determine the most current level of qualified small business lending and the distribution of dividend or interest rates paid by program participants. 
Because Treasury requires only SBLF participants to submit data on qualified small business lending—generally, lending below $10 million—we also analyzed total business lending as well as small business loans under $1 million, which are available through the Call Reports. We accessed the Call Report data using SNL Financial—a private financial database that contains publicly filed regulatory and financial reports—and analyzed lending by SBLF participants for the quarter ending June 30, 2012. The Small Business Jobs Act of 2010 (the act) establishes the baseline for measuring the change in small business lending as the average of the amounts that were reported for each of the four calendar quarters ended June 30, 2010. Call Reports did not begin requiring quarterly reporting of small business loans under $1 million until the second quarter of this four-quarter baseline period. Accordingly, we calculated the baseline for small business loans under $1 million using the average of each of the three calendar quarters ended June 30, 2010. The act also defines one of the categories of qualified small business lending as owner-occupied nonfarm, nonresidential real estate loans. For quarterly reports of small business lending, Call Reports use a broader category of all nonfarm, nonresidential real estate without a distinction for owner occupancy. As a result, the small business loans under $1 million include the broader category. The total business lending numbers use the full baseline and the narrower categorization of owner-occupied nonfarm, nonresidential real estate and should therefore not be compared to the numbers for small business loans under $1 million. We assessed the reliability of these data, for example, by analyzing missing data and performing various logic tests and determined that the data were sufficiently reliable for the purpose of reporting on SBLF lending. 
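The baseline arithmetic described above—a four-quarter average for total business lending, and a three-quarter average for loans under $1 million because one baseline quarter of Call Report data is unavailable—can be sketched as follows. This is an illustrative sketch, not the report's actual computation; the function names and dollar figures are hypothetical.

```python
def baseline(quarterly_lending):
    """Average quarterly lending over the baseline period."""
    return sum(quarterly_lending) / len(quarterly_lending)

def growth_over_baseline(current, quarterly_lending):
    """Percent change in lending relative to the baseline average."""
    base = baseline(quarterly_lending)
    return 100.0 * (current - base) / base

# Total business lending: full four-quarter baseline (the four calendar
# quarters ended June 30, 2010); hypothetical figures in $ millions.
four_quarters = [100.0, 102.0, 98.0, 104.0]

# Loans under $1 million: only three baseline quarters are available,
# because quarterly Call Report data on these loans began in the second
# quarter of the baseline period.
three_quarters = [40.0, 41.0, 42.0]

print(growth_over_baseline(126.0, four_quarters))
print(growth_over_baseline(45.1, three_quarters))
```

The same growth formula is used for both series; only the number of quarters feeding the baseline average differs.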
To review SSBCI participants’ small business lending, we collected and reviewed data from the Quarterly Report as of June 30, 2012—the most recent quarter available. We conducted data reliability checks on the SSBCI quarterly data for the dollar amounts transferred to the states and the dollar amounts used by each participating state to identify any potential discrepancies in the data. We interviewed Treasury officials on how they assessed these data. In addition, we verified with three states the data that they had sent to Treasury on the SSBCI Quarterly Report as of June 30, 2012. We also interviewed state and Treasury officials about the status of the use of SSBCI funds and Treasury’s authority to suspend disbursements to SSBCI participants. Based on these steps, we determined that the data collected by Treasury for SSBCI were sufficiently reliable for the purpose of reporting total amounts of funds allocated and used by the states. To examine the extent to which Treasury evaluates and communicates SBLF and SSBCI program outcomes, we reviewed Treasury documentation for both programs. For determining the extent to which Treasury evaluates the performance of SBLF, we reviewed the Use of Funds Report to evaluate the methodology Treasury used to assess the performance of SBLF participants against a comparison group of institutions that did not participate in SBLF. We interviewed Treasury officials to understand the process for developing the comparison group as well as the alternatives they considered. We used the methodology in the report to replicate Treasury’s group for our analysis. 
To help understand the usefulness of the comparison group, we also chose a peer group of non-SBLF institutions that we adjusted for geographical and size distribution as well as financial health, using the Texas Ratio as a proxy. To select the peer group, we started with our replication of Treasury's comparison group of 6,175 institutions and categorized the institutions into six asset-size groups. We then sorted the institutions by state, asset group, and Texas Ratio and generally assigned two peer institutions to each SBLF participant with the closest Texas Ratios, within the same state and asset group. In some cases, we had to make judgments in choosing the peers—for example, when two SBLF participants were similar to one another and when too few potential peers existed. We determined that any potential judgment factors were mitigated by the fact that the peer group mirrored the SBLF participants more closely than the comparison group did across geographical and size distribution as well as financial health (see table 1). Consistent with the Use of Funds Report, we analyzed the growth in total business lending because qualified small business lending data were not available for non-SBLF institutions and because qualified small business lending totaled 95 percent of total business lending for the median SBLF participant as of December 31, 2011. Here we calculated the baseline using the average of the four quarters ending June 30, 2010. The data limitation mentioned earlier that required us to use only three quarters in calculating the baseline applied only to the availability of small business lending data, and the three-quarter baseline was used only in those earlier sections. We compared our peer group with Treasury's comparison group and compared both to SBLF participants. We also compared Treasury's analysis against our previous work on program evaluation as well as best practices identified by the Office of Management and Budget. 
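The peer-selection procedure described above can be sketched in outline: restrict candidates to the same state and asset-size group, then take the two with the closest Texas Ratios. This is an illustrative reconstruction under those stated criteria; the class, function names, and sample figures are our own, not drawn from the underlying data, and the sketch omits the manual judgments the text describes.

```python
from dataclasses import dataclass

@dataclass
class Bank:
    name: str
    state: str
    asset_group: int    # one of six asset-size categories
    texas_ratio: float  # proxy for financial health

def pick_peers(participant, candidates, n_peers=2):
    """Choose the n candidates in the same state and asset-size group
    whose Texas Ratios are closest to the participant's."""
    pool = [c for c in candidates
            if c.state == participant.state
            and c.asset_group == participant.asset_group]
    pool.sort(key=lambda c: abs(c.texas_ratio - participant.texas_ratio))
    return pool[:n_peers]

sblf = Bank("SBLF Bank", "TX", 2, 0.18)
candidates = [
    Bank("A", "TX", 2, 0.15),
    Bank("B", "TX", 2, 0.30),
    Bank("C", "TX", 3, 0.18),  # different asset group: excluded
    Bank("D", "OK", 2, 0.18),  # different state: excluded
]
print([b.name for b in pick_peers(sblf, candidates)])
```

In this toy case the two Texas institutions in asset group 2 are chosen, ordered by how close their Texas Ratios sit to the participant's.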
In assessing the SBLF communication process, we reviewed and analyzed SBLF's Contact Management Procedures and interviewed Treasury officials on how they communicated with SBLF participants. For determining the extent to which Treasury evaluates SSBCI performance outcomes, we collected and reviewed the performance measures that Treasury developed for evaluating SSBCI. We interviewed Treasury officials on how they were planning to use the performance outcome measures in evaluating SSBCI. We also interviewed officials from the same nine states we described earlier—Colorado, Florida, Georgia, Illinois, Massachusetts, Michigan, New Jersey, Oregon, and Texas—to collect information on their evaluation and the performance information they reviewed relating to SSBCI. To analyze the communication of SSBCI performance outcomes, we reviewed the relevant provisions of the Small Business Jobs Act of 2010 and the outreach information, such as conference materials, that Treasury had drafted for the states. We conducted this performance audit from March 2012 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Kay Kuhlman (Assistant Director), Pamela Davidson, Nancy Eibeck, Chris Forys, Michael Hoffman, Jonathan Kucskar, Marc Molino, Jennifer Schwartz, and Jena Sinkfield made key contributions to this report.
The Small Business Jobs Act of 2010 aimed to stimulate job growth by, among other things, establishing the SBLF and SSBCI programs within Treasury. SBLF uses capital investments to encourage community banks with assets of less than $10 billion to increase their small business lending. SSBCI provides funding to strengthen state and municipal programs that support lending to small businesses. Under the act, GAO is required to conduct an audit of both programs annually. GAO's first reports were on the programs' implementation and made recommendations. This second report examines (1) the status of Treasury's efforts to monitor participants' compliance with program requirements under SBLF and SSBCI, (2) the status of SBLF's and SSBCI's small business lending, and (3) Treasury's evaluation of SBLF and SSBCI and communication of outcomes to Congress and interested parties. GAO reviewed Treasury documents on SBLF and SSBCI procedures; analyzed the most recent available performance information for both programs and data on financial institutions; and interviewed officials from Treasury and nine states participating in SSBCI. The U.S. Department of the Treasury (Treasury) has made progress in developing guidance and procedures to monitor participants' compliance with requirements for the Small Business Lending Fund (SBLF) and the State Small Business Credit Initiative (SSBCI) programs. In response to GAO's previous recommendation on SBLF monitoring, Treasury has developed procedures for monitoring SBLF participant compliance with legal and reporting requirements. Treasury also issued standards to provide states with best practices for reviewing participants' compliance with SSBCI's legal and policy requirements and developed procedures for sampling transaction-level data to evaluate the accuracy of the states' SSBCI annual reports. As of June 30, 2012, SBLF participants had increased their business lending over the 2010 baseline. 
The median SBLF participant had a 31 percent increase in total business lending and a 14 percent increase for small business loans under $1 million, according to GAO's analysis. For SSBCI, states had used about 10 percent of the funds as of June 30, 2012. The act provides Treasury with authority to terminate funds that have not been allocated to states within 2 years of Treasury's approval of the state's participation in SSBCI. However, Treasury has not yet developed a formal written policy explaining what actions it will take if SSBCI participants have not met the requirements to receive their full allocation of funds within the 2-year time frame. Treasury officials said that they currently have no plans to use this authority but retain the ability to do so in the future. Nevertheless, formal guidelines on how Treasury will use this authority could help ensure consistent use of the authority if used in the future and provide clarity to states about the consequences of not using the funds in a timely manner. Treasury has taken steps to evaluate SBLF's and SSBCI's performance but could enhance public reporting of program outcome information. In a quarterly report to Congress, Treasury compares business lending in SBLF participants to a large comparison group that it adjusted for certain aspects of bank size and geography. GAO's analysis using a peer group that was adjusted for financial health as well as geography and size showed that in nearly every case, the difference in total business lending growth was somewhat smaller than in Treasury's analysis. Treasury considered using a more refined peer group that adjusted for these factors but judged that the differences were not significant. However, Treasury did not disclose these options in the report or explain why the larger comparison group was chosen, which compromised the transparency of Treasury's methodology. 
Furthermore, Treasury's approach did not isolate the impact of SBLF from other factors that could affect lending, as GAO recommended in its first SBLF report. Treasury officials said they are continuing to explore evaluation approaches, including collecting additional data from a survey of SBLF institutions. In response to GAO's 2011 recommendation on SSBCI performance measures, Treasury has designed performance measures, such as the amount of private leverage states have achieved with SSBCI funds. However, Treasury has not yet developed a way to make this performance information public. Treasury shares information with the states through conferences and technical assistance, but performance information could help Congress and the states to better understand the effectiveness of SSBCI's various programs. Treasury should develop a policy on how it will use its authority to terminate SSBCI funds. Treasury should also expand its methodology discussion in SBLF reports and make the results of SSBCI performance measures public.
Most commuter rail agencies use rights-of-way that are owned by Amtrak or freight railroads for at least some portion of their operations. Specifically, 9 commuter rail agencies operate over Amtrak-owned rights-of-way. Twelve commuter rail agencies operate over rights-of-way owned by freight railroads. In addition, most commuter rail agencies rely on Amtrak and freight railroads for some level and type of service, including the operation of commuter trains; maintenance of equipment (i.e., locomotives and train cars); maintenance of way (i.e., track and related infrastructure); and train dispatching. Specifically, 13 commuter rail agencies rely on Amtrak for some type of service. Fourteen commuter rail agencies rely on freight railroads for some type of service. (See figs. 1 and 2 for an overview of these relationships.) Liability and indemnity provisions in agreements between commuter rail agencies and freight railroads differ, but commuter rail agencies generally assume most of the financial risk for commuter operations. For example, most liability and indemnity provisions assign liability to an entity regardless of fault—that is, a commuter rail agency could be responsible for paying for certain claims associated with an accident caused by a freight railroad. The reverse is also true—freight railroads are sometimes responsible for certain claims associated with accidents caused by commuter rail agencies. These types of agreements are referred to as no-fault agreements. In addition, about one-third of these no-fault agreements exclude certain types of conduct, such as gross negligence, recklessness, or willful misconduct, from the agreements. Some of the remaining no-fault agreements specifically allow for such conduct, that is, the commuter rail agency is still responsible for certain claims caused by, for example, the gross negligence or recklessness of a freight railroad. 
The liability and indemnity provisions also require that commuter rail agencies carry certain levels of insurance to guarantee their ability to pay for the entire allocation of damages. Although liability and indemnity provisions in agreements between commuter rail agencies and freight railroads differ, commuter rail agencies generally assume most of the financial risk for commuter operations. With two exceptions, liability and indemnity agreements between commuter rail agencies and freight railroads are primarily no-fault arrangements—that is, responsibility for specific liability in any incident is assigned to a particular entity, regardless of fault. For example, in a no-fault agreement, a commuter rail agency might indemnify a freight railroad by assuming liability for commuter equipment damage and passenger injury in a derailment, regardless of whether the freight railroad's maintenance of the tracks could be blamed for a given incident. Similarly, a freight railroad could indemnify a commuter rail agency by assuming liability for freight rail equipment and track maintenance, even if the commuter rail agency was solely responsible for causing an accident. In contrast, a fault-based agreement assigns responsibility for an incident to the party that caused the incident. Of the 33 commuter rail agency and freight railroad agreements we reviewed, 21 were no-fault, 10 contained a combination of no-fault and fault provisions, and 2 were premised on a fault-based allocation of risk. Although most of the agreements between commuter rail agencies and freight railroads are no-fault arrangements, the liability and indemnity provisions vary regarding the type of conduct allowed. For example, 9 of the 31 agreements with all or some no-fault provisions explicitly exclude certain types of conduct from the no-fault arrangement. 
Excluded conduct is any type of conduct specifically identified in the agreement as conduct beyond simple negligence and can be defined in a number of ways, including willful and wanton misconduct, gross negligence, or conduct that might result in punitive damages. For example, 1 agreement specifically excludes conduct that is taken with conscious disregard for or indifference to the property or safety or welfare of others. Another 10 of the 31 agreements with all or some no-fault provisions, in contrast, explicitly include conduct that exceeds simple negligence as covered under the no-fault provisions. This includes an agreement in which the liability and indemnity provisions exclude certain conduct for liability up to $5 million and include it for liability above $5 million. For example, 1 agreement explicitly states that the indemnification agreement includes coverage for punitive damages, or damages that are caused by the reckless or willful acts of a party, while another explicitly states that the parties agree to indemnify each other even if a train engineer in an incident is using alcohol or drugs. Finally, the remaining 12 agreements are silent on excluded conduct and discuss indemnification of negligence without explicit regard to its degree. Often, in these cases, the degree of negligence will depend on state law, and a determination concerning the enforceability of the provision may require litigation. Freight railroads often set a requirement for a certain level of indemnification in the agreements and corresponding insurance requirements to ensure that the commuter rail agency will have the resources to pay for claims. The required level of insurance in existing commuter rail agency and freight railroad agreements ranges from $75 million to $500 million. 
Agreements vary on the exact requirements for insurance, such as what level of liability can be absorbed by the commuter rail agency—referred to as a self-insured retention—before the railroad must use commercial insurance. For example, some agreements that we analyzed set the level at which the commuter rail agency must purchase insurance for risk at above $5 million, while other agreements set the level at $1 million. Twelve of the 33 agreements between commuter rail agencies and freight railroads are silent on the exact level of insurance required. Appendix II contains a table summarizing the apportionment of liability in commuter and freight rail agreements. Similar to the agreements with freight railroads, commuter rail agencies' agreements with Amtrak also are generally no fault. Specifically, 14 of the 17 agreements we reviewed between commuter rail agencies and Amtrak allocate liability on a no-fault basis, while 2 contain a combination of fault-based and no-fault provisions. The remaining agreement is fault-based. Regarding excluded conduct, 8 of the 17 agreements explicitly exclude certain conduct; the remaining agreements are silent concerning whether any conduct is excluded. Amtrak also sometimes requires certain levels of indemnification and corresponding levels of insurance to ensure that the commuter rail agency will have the resources to pay for claims. Appendix III contains a table summarizing the apportionment of liability in commuter rail agency and Amtrak agreements. Amtrak's agreements with Class I freight railroads are also generally no-fault arrangements. In addition, these agreements are generally silent on excluded conduct. Furthermore, all of Amtrak's agreements with freight railroads are silent on the amount of insurance Amtrak must carry to use freight-owned rights-of-way. ARAA requires Amtrak to maintain a minimum coverage for claims through insurance or self-insurance of at least $200 million per accident. 
However, Amtrak officials stated that Amtrak carries more insurance than is required by this statute. Freight railroad, commuter rail agency, and Amtrak officials told us that no-fault agreements are the easiest way to settle liability claims because they avoid the need for additional litigation to try to ascertain blame. Officials in Florida, for example, said a fault-based agreement would be much more expensive than a no-fault agreement because of the costs of investigating accidents. These officials said that a no-fault agreement was the best way to compensate litigants quickly. Furthermore, officials at a freight railroad said that an accident can have multiple causes and an investigation may not settle which party was at fault; therefore, a fault-based approach can result in disputes between commuter rail agencies and freight railroads over which party is responsible for paying for claims. These officials also said that contrary to some views, passenger and freight railroads have strong incentives to operate safely, even if they may not be liable for some accidents that they cause. Finally, Amtrak and freight railroad officials noted that no-fault agreements are fairly standard across the industry, and that these agreements are similar to agreements freight railroads use for access to each other's infrastructure. The liability and indemnity provisions in commuter rail agency agreements with freight railroads have cost implications because premiums vary with the levels of insurance required. Eleven commuter rail agencies reported paying from $700,000 to $5 million in insurance premiums, representing less than 1 percent and up to about 15 percent of commuter rail agencies' operating budgets. Newer and smaller (as defined by ridership) commuter rail agencies typically spend more of their operating budgets on insurance premiums, in part because they do not have an established claims record, which factors into the premiums that a commuter rail agency must pay to cover its potential risk. 
Officials at proposed commuter rail agencies told us that they anticipated spending a substantial portion of their operating budgets on insurance. For example, officials at a proposed commuter rail agency anticipated spending more than 20 percent of their operating budget on insurance premiums. However, these premiums could decrease once the commuter rail agency has an established claims record, particularly if the commuter rail agency has no accidents over several years of service. Because commuter rail agencies are publicly subsidized, the premium costs for commuter rail agencies also represent a cost to taxpayers. Furthermore, the potential for high premium costs may impede or stop the development of new or expanded commuter rail services, according to commuter rail agency officials. According to commuter rail agency officials, certain liability and indemnity provisions expose commuter rail agencies to significant risks and, therefore, to potential costs. Although no-fault liability agreements are the norm, most assign more liability to commuter rail agencies than to freight railroads. Specifically, of the 31 agreements with all or some no-fault provisions we analyzed, 13 assign all liability for passengers to the commuter rail agencies and 7 assign all liability for passengers, as well as all liability for freight equipment, employees, and third parties, to the commuter rail agencies. In the remaining 11 agreements, freight railroads could be responsible for assuming some liability for passenger claims resulting from a collision. When accidents do occur, commuter rail agencies use both their self-insured retention and commercial insurance to pay for claims. Similar to the deductible on individual insurance policies, the self-insured retention is the amount specified in the liability insurance policy that the commuter rail agency must pay before the insurance company pays for claims. 
For example, a commuter rail agency with a $2 million self-insured retention must pay for all claims that are $2 million or less, while claims above $2 million would be covered by the insurance company. In most cases, the self-insured retention is per incident, that is, a commuter rail agency would pay each time a claim fell within the self-insured retention, which can be costly if there are many such claims in a given period. However, in most cases, these types of claims are fairly predictable for commuter rail agencies that have an established loss record, allowing the agencies to better plan and budget for costs they are likely to incur. Although most commuter rail agencies have commercial insurance policies to cover claims from a potentially catastrophic incident, most commuter rail agencies stated that they had never exceeded their self-insured retention and, thus, had never filed a claim with an insurer. ARAA introduced tort reform measures that limit the overall damages for passenger claims to $200 million, including punitive damages to the extent permitted by state law, against all defendants arising from a single accident or incident. ARAA also authorizes providers of passenger rail transportation to enter into contracts allocating financial responsibility for claims. Congress introduced these measures in 1997 in response to concerns from freight railroads, commuter rail agencies, and Amtrak about the difficulties the parties were having in negotiating the use of freight railroads' rights-of-way by Amtrak and the commuter rail agencies. These concerns were heightened after a 1988 district court decision put in doubt the enforceability of contractually negotiated indemnity provisions. The decision arose from a collision between an Amtrak train and a Conrail train in Chase, Maryland, that resulted in 16 deaths and over 350 injuries. 
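The per-incident split between a self-insured retention and commercial coverage described above works like a deductible: for each incident, the agency pays up to the retention and the insurer pays the remainder. The following sketch illustrates only those mechanics; the function name and claim amounts are hypothetical, and it ignores policy limits such as the ARAA cap.

```python
def split_claim(claim, retention):
    """Per-incident split of a claim: the commuter rail agency pays up to
    its self-insured retention; the commercial insurer pays the rest."""
    agency_share = min(claim, retention)
    insurer_share = max(claim - retention, 0)
    return agency_share, insurer_share

# With a $2 million retention, applied separately to each incident;
# hypothetical claim amounts in dollars.
for claim in (1_500_000, 2_000_000, 7_500_000):
    agency, insurer = split_claim(claim, 2_000_000)
    print(f"claim {claim:>9,}: agency pays {agency:,}, insurer pays {insurer:,}")
```

Because the retention applies per incident, many small claims below the retention can be costly in aggregate even though no single one reaches the insurer, which matches the budgeting point the text makes.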
A Conrail engineer admitted, among other things, that the Conrail crew had recently used marijuana, was speeding, and was operating a train in which an audible warning device had been intentionally disabled. The engineer pleaded guilty to manslaughter and was given the maximum penalty. The plaintiffs in lawsuits brought against Conrail and Amtrak alleged that Conrail or Amtrak, or both, were grossly negligent and asserted entitlement to compensatory as well as punitive damages. Amtrak brought an action before the trial court seeking a declaration of the rights and obligations of the parties concerning the indemnification agreement, which required that Amtrak defend and indemnify Conrail for any claims and damages arising out of the Chase accident. The trial court held that Amtrak was not required to indemnify Conrail where there were allegations and a showing of gross negligence, wanton misconduct, intentional misconduct, or conduct so serious that it warranted the imposition of punitive damages. The court found that public policy would not allow the enforcement of indemnification provisions that appear to cover such extreme misconduct, because serious and significant disincentives to railroad safety would ensue. National Railroad Passenger Corp. v. Consolidated Rail Corp., 698 F. Supp. 951 (D.D.C. 1988), vacated on other grounds, 892 F.2d 1066 (D.C. Cir. 1990).

We have previously concluded that the $200 million cap on passenger claims arising from a single rail accident applies to all commuter rail operators as well as to Amtrak, based on the plain language of the statute.
The act creates a $200 million cap for passenger injuries arising "in connection with any rail passenger transportation operations over or rail passenger transportation use of right-of-way or facilities owned, leased, or maintained by any high-speed railroad authority or operator, any commuter authority or operator, any rail carrier, or any State." Additionally, the act defines a claim, in part, as "a claim made against Amtrak, any high-speed railroad authority or operator, any commuter authority or operator, any rail carrier, or any State." 49 U.S.C. § 28103(a)(1). We also concluded that the cap does not apply to third-party claims, that is, claims by parties other than passengers.

Some commuter rail agencies, however, have expressed uncertainty regarding whether the cap applies to them. In addition, some freight railroad officials have stated that although they believe the cap does apply to commuter rail agencies, they will not rely on the cap in determining the level of insurance that a commuter rail agency must carry until the cap's applicability to commuter rail agencies has been tested in a court of law. No courts have decided whether the cap applies to commuter rail agencies.

While the Court of Appeals stated in the O&G Industries opinion that it was the intent of Congress to permit indemnity agreements regarding any claims against Amtrak (the Second Circuit, where the case was decided, consists of all federal courts within Connecticut, New York, and Vermont), STB, when setting the terms of agreements between Amtrak and freight railroads, has held that it is against public policy to indemnify an entity against its own gross negligence or willful misconduct. For example, in a 2006 decision, STB held that an indemnity
provision could not be used to indemnify a freight railroad against its own gross negligence or willful misconduct, since such an interpretation would "contravene well-established precedent that disfavors such indemnification provisions" and would be contrary to provisions in the federal government's rail transportation policy that require STB to "promote a safe and efficient transportation system" and "operate facilities and equipment without detriment to the public health and safety." STB staff told us that they could not speak for the board, but because the O&G Industries opinion involved preemption of a state statute, they were not sure that the opinion would have any effect on future STB decisions. The Rail Passenger Service Act of 1970 provides that Amtrak and the freight railroads may contract for Amtrak's use of the facilities owned by the freight railroads. If the parties cannot agree on a contract, STB may order access and prescribe the terms and conditions of the contract, including compensation.

Commuter rail agency and freight railroad officials identified several factors that influence negotiations of liability and indemnity provisions, while other factors were generally reported as having little effect on negotiations of liability and indemnity provisions.

Freight railroads' business perspective. In negotiations between commuter rail agencies and freight railroads, the freight railroads' business perspective influences their starting position for negotiations of liability and indemnity provisions. Commuter rail agencies do not have statutory access to freight-owned rights-of-way. Rather, as owners of the infrastructure, freight railroads can decide whether to allow commuter rail agencies to use their rights-of-way. Officials from freight railroads told us that they are willing to share their infrastructure with commuter rail agencies when sharing makes business sense and does not impinge on their freight operations.
From the freight railroads' perspective, commuter rail agencies' compensation offers for the use of freight-owned rights-of-way are often inadequate, and when they are not compensated for all of the costs incurred from hosting a commuter rail train, the result is that the freight railroads subsidize the commuter rail service. In addition, freight service is the freight railroads' core business, and their ability to efficiently move freight through their systems must be protected. As a result, freight railroad officials said they are unwilling to assume any additional risk from allowing commuter rail agencies to use their rights-of-way.

Understandably, freight railroads want to minimize their exposure to liability from any potentially large damage awards and associated costs that may result when they allow commuter rail agencies to operate on their rights-of-way. As a result, freight railroads have adopted what is referred to as the "but for" philosophy; that is, but for the presence of the commuter rail service, the freight railroad would not be exposed to certain risks; therefore, the freight railroad should be held harmless. Freight railroad officials stated that they must take this position to protect their businesses and shareholders from potential lawsuits that could financially ruin their companies. To protect themselves from additional liability, freight railroads typically require that commuter railroads purchase liability insurance that covers both parties.

Officials from several commuter rail agencies told us that they recognize and understand the freight railroads' viewpoint. Nearly half of the commuter rail agency officials acknowledged insurance as a cost of doing business, and eight mentioned that they would purchase the amount that they currently carry even if the freight railroad did not require them to do so.
Officials from five commuter rail agencies said they purchase more insurance than is required in their agreements because they recognize that potential claims may exceed the amounts stated in their agreements.

Financial conditions at the time of negotiations. Officials from several commuter rail agencies and freight railroads said that the financial health of the freight railroads at the time of their negotiations affected the liability and indemnity provisions. For example, officials from one commuter rail agency said they were able to secure favorable liability and indemnity provisions by providing revenue to two freight railroads that were struggling financially in the early and mid-1990s. Officials from another commuter rail agency said that the terms of their agreements that originated from freight railroad bankruptcies in 1983 are more favorable to the commuter rail agency than the agreements they have subsequently negotiated with other freight railroads. Over the last 25 years, freight rail traffic has significantly increased and the financial health of the industry has improved. As a result, hosting commuter rail service is not a significant source of revenue for freight railroads. For example, officials from one freight railroad said that revenue from commuter rail agencies does not compensate for the associated capacity loss. Furthermore, officials from another freight railroad said that no amount of revenue from commuter rail agencies could sufficiently compensate them for the risk in assuming liability for passenger claims.

Increased awareness or concern about liability and insurance requirements. Eight commuter rail agency and freight railroad officials also said that the level of awareness or concern about liability issues has grown over time.
For example, one commuter rail agency official said negotiations have become more difficult, in part, because both freight railroads and commuter rail agencies are more knowledgeable about liability issues; that is, freight railroads are now more precise in the terms they require, and commuter rail agencies are more aware of the implications of these agreements. Officials from four of the five freight railroads that host commuter rail operations said that they now would not agree to some terms that they had agreed to in the past. In some cases, these railroads are trying to renegotiate the liability and indemnity provisions in existing agreements. Freight railroads also expressed concern about changes in how courts interpret gross negligence and about the application of punitive damages. In particular, freight railroads expressed concern that what juries once viewed as normal negligence, they may now view as gross negligence; therefore, they want commuter rail agencies to indemnify them against both negligence and gross negligence. For example, one freight railroad views a new project as a "nonstarter" if the commuter rail agency refuses to indemnify the freight railroad for incidents involving gross negligence. Additionally, if a railroad is found guilty of gross negligence, a jury may award punitive damages; therefore, one freight railroad is trying to renegotiate the insurance provisions in a 25-year-old agreement to include coverage for punitive damages.

Additionally, views on sufficient amounts of insurance have changed over time. Specifically, freight railroads are requiring more insurance coverage for new commuter rail projects than what they had required in some past agreements. For example, officials from one freight railroad said that the amount of coverage then required seemed sufficient when the railroad signed an agreement with a commuter rail agency in 1992.
However, these same officials stated that they now seek much higher levels of coverage to use their rights-of-way, citing concerns about potential lawsuits and large settlements awarded by juries. Similarly, officials from other freight railroads told us that, to the extent possible, they seek between $200 million and $500 million in insurance when negotiating new agreements or renewing existing ones. Officials from two proposed commuter rail agencies noted that they anticipate it could be challenging and costly to obtain insurance coverage for the amount of insurance the freight railroads are requiring them to obtain. Officials from freight railroads and commuter rail agencies also questioned how claims from the recent Metrolink accident will affect the amount of insurance required and the accessibility of insurance. For example, officials from one commuter rail agency stated that the Metrolink accident and current economic conditions could cause their insurance premiums to spike, and they are, therefore, exploring options to stabilize their insurance costs.

Federal and state laws. While ARAA has addressed many major liability concerns, some freight railroads and commuter rail agencies are reluctant to rely on some of its provisions. We have previously reported that commuter rail authorities or operators, as well as Amtrak, are covered by the $200 million cap on awards for claims by or on behalf of rail passengers resulting from an individual rail accident. However, although a majority of the freight railroads and commuter rail agencies with whom we spoke told us that the liability cap applies to commuter rail agencies, a majority of freight railroads and a few commuter rail agencies expressed concern because the statute has not been tested in court. One freight railroad has addressed this concern by including a clause in its agreement that would reopen negotiations if the ARAA cap were overturned by a court or amended.
Other freight railroads seek higher levels of insurance coverage to mitigate their concerns about the ARAA cap. Officials from one commuter rail agency told us that the freight railroad wants to increase the level of insurance in their existing agreement from $250 million to $500 million, which has been a sticking point in renegotiating the agreement. Officials from this freight railroad told us they are seeking the $500 million in insurance, in part, because the cap has not been tested in court and because the cap does not cover third-party claims. For example, as we have reported, claims from third parties affected by a hazardous material release that might occur as a result of a commuter-freight collision would not be capped at $200 million. Officials from several freight railroads and commuter rail agencies said that the applicability of the $200 million liability cap to commuter rail agencies will likely be tested in court as a result of the recent Metrolink accident.

Amtrak's statutory rights influence the negotiations of liability and indemnity provisions in agreements between Amtrak and freight railroads as well as between Amtrak and commuter rail agencies. For example, because Amtrak has statutory access rights to freight rail infrastructure, Amtrak and freight railroads must reach an agreement for the shared use of freight-owned infrastructure, or, in the event of an impasse, STB will resolve the outstanding issues. Although the provisions in agreements between Amtrak and freight railroads vary, freight railroad officials said that their negotiation processes were fairly standardized as a result of Amtrak's statutory access rights. In addition, Amtrak officials noted that Amtrak is prohibited from cross-subsidizing commuter rail agencies and freight railroads on the NEC for some costs.
According to Amtrak officials, these statutes influence their negotiations with freight railroads and commuter rail agencies, and Amtrak cannot assume any additional liability for these parties in its agreements for the shared use of infrastructure. Specifically, Amtrak officials stated that Amtrak cannot assume liability for commuter rail agencies when allowing commuter rail agencies to use Amtrak's infrastructure. As a result, Amtrak's negotiations with commuter rail agencies generally result in no-fault liability and indemnity provisions in which the commuter rail agency assumes most of the liability.

Commuter rail agency, freight railroad, and Amtrak officials also identified various types of state laws that influence negotiations of liability and indemnity provisions. The following information briefly describes examples of the different types of state laws that can influence negotiations. See table 1 for examples of these types of laws.

Liability caps for railroads or transit agencies. Some state laws limit commuter rail agencies' liability exposure for accidents.

Sovereign immunity laws or tort caps. These laws limit the types of claims that may be filed against public agencies and limit the amount of liability to which a public agency can be exposed.

Prohibition against public indemnification of private entities. Some state laws prohibit a public commuter rail agency from agreeing to any indemnification provisions.

Prohibition against indemnification for negligence or gross negligence. Some state laws prohibit indemnification of an entity against its own negligence or gross negligence.

State laws addressing punitive damages. Some state laws prohibit insuring against punitive damages. Additionally, in some states, commuter rail agencies are immune from paying punitive damages because they are public entities.
Several factors that might be considered influential, such as commuter rail ownership of shared-use infrastructure, were generally reported as having little effect on liability and indemnity negotiations.

Commuter rail agencies' ownership of infrastructure. Commuter rail agencies that own their infrastructure are not necessarily able to set the terms of their agreements with freight railroads. A majority of commuter rail agencies that own their infrastructure purchased it from freight railroads. In general, as a condition of the sale of infrastructure, freight railroads maintain rights for continued freight use and require specific liability and indemnity provisions. For example, a commuter rail agency is currently seeking to purchase a segment of freight track to expand its service, but negotiations have stalled because the commuter rail agency does not want to agree to certain liability and indemnity provisions. Officials from one freight railroad said that in negotiations for the purchase of rail lines, the liability terms are a trade-off for a lower cost for the infrastructure. For example, in this freight railroad's negotiations with one commuter rail agency, the price for purchasing the right-of-way without attached liability and indemnity provisions would have been $1.3 billion; rather, the parties settled on a price of $150 million, with an agreement for continued freight operations that included the freight railroad's required liability and indemnity provisions.

Extent of use. The extent of a tenant commuter rail agency's use of the host freight railroad's infrastructure, which can be measured by such metrics as the number of trains or ridership, does not generally influence the liability and indemnity provisions. For example, one freight railroad official said that the number of planned commuter trains is not specifically relied upon in determining the amount of insurance required in the agreement. However, two of the insurance brokers with whom we spoke said that such metrics may be used to help calculate insurance premiums. According to one broker, however, a change in one of these metrics may not affect the insurance premium unless the change is significant, for example, an increase from 100,000 to 200,000 daily passengers.

Funding of improvements on freight-owned infrastructure. Commuter rail agencies' funding of infrastructure improvements, such as track upgrades, on freight infrastructure does not generally affect liability and indemnity provisions. Officials from one freight railroad said that such improvements do not compensate for the liability risks associated with allowing passenger railroads to use freight infrastructure. However, funding for infrastructure improvements may have other effects, such as influencing freight railroads' initial willingness to enter into overall negotiations for shared use or securing priority dispatching for commuter trains.

Advanced safety technologies. Employing advanced safety technologies does not necessarily affect negotiations over liability and indemnity provisions. Although improved safety may not influence the liability and indemnity provisions, officials from three freight railroads or commuter rail agencies mentioned that improved safety could reduce insurance premiums. Similarly, two of the insurance brokers with whom we spoke said that a railroad's safety program can influence the calculation of insurance premiums because improved safety reduces the likelihood of accidents and, therefore, decreases the likelihood that the insurance company will suffer a loss. However, according to Amtrak officials, although such technologies may reduce the incidence of smaller claims that fall within the self-insured retention, they may not reduce premiums for liability insurance until the long-term loss history for the rail agency improves.
Commuter rail agency, Amtrak, and freight railroad officials identified several options for facilitating negotiations of liability and indemnity provisions, including amending ARAA, establishing alternatives to commercial insurance, increasing commuter rail agencies' leverage in negotiations with freight railroads, and separating passenger and freight rail infrastructure. While each of the options could facilitate negotiations on liability and indemnity provisions, each option has advantages and disadvantages to consider. The discussion that follows is not intended to endorse any potential option, but instead to describe some potential ways to facilitate negotiations.

Officials from commuter rail agencies, Amtrak, and freight railroads cited amending ARAA as an option for facilitating negotiations on liability and indemnity provisions. In particular, officials from commuter rail agencies and freight railroads stated that the statute should be amended to make it clear that the liability cap applies to commuter rail agencies, and officials from commuter rail agencies, freight railroads, and Amtrak stated that the statute should be amended to include nonpassenger claims. Officials from commuter rail agencies, Amtrak, and freight railroads cited several advantages to amending ARAA. First, clarifying that the statute applies to commuter rail agencies would eliminate the uncertainty about its applicability in the absence of a court decision. In addition, such a clarification, along with the inclusion of nonpassenger claims in the liability cap, could lower costs for commuter rail agencies by limiting the amount of insurance that freight railroads require commuter rail agencies to carry.
For example, officials from one commuter rail agency stated that if ARAA were amended to make it clear that it applied to commuter rail agencies and covered nonpassenger claims, freight railroads would be less likely to seek insurance beyond the $200 million liability cap to cover claims to which the cap does not apply. Similarly, Amtrak officials stated that including nonpassenger claims under the liability cap could reduce Amtrak's need for excess liability insurance. Officials from several freight railroads also noted that such changes could facilitate future negotiations with commuter rail agencies. Officials from freight railroads also stated that a clear federal cap on liability for commuter rail agencies could eliminate the need to adapt to various state laws that can affect liability and indemnity negotiations. For example, according to officials from two freight railroads, a uniform, standardized cap that applies to all commuter rail agencies would preempt some state laws, such as those that cap damages for commuter rail claims at an amount lower than $200 million.

Commuter rail agency and freight railroad officials also cited several disadvantages to amending ARAA. Officials from one freight railroad stated that ARAA already applies to commuter rail agencies and limits the amount of liability insurance commuter rail agencies are required to obtain to $200 million. According to these officials, although there is some lack of clarity about the statute's applicability to commuter rail agencies and the statute does not cover all types of claims, these issues can be addressed by requiring the commuter rail agency to obtain comprehensive insurance coverage or through other provisions in the agreements. For example, adequate insurance coverage can mitigate the issues that may arise from various and conflicting state laws by providing protection for various kinds of liabilities.
Similarly, although ARAA does not cover liability claims resulting from a hazardous materials spill, an agreement can be structured in such a way that these claims are covered. According to these freight railroad officials, the provisions in ARAA provide adequate protections for negotiating railroad liability and indemnity provisions. Furthermore, these officials told us it may be difficult to make some changes to the statute without opening up its entire liability section to reexamination. Finally, some commuter rail agency officials stated that amending ARAA could cause them to have less favorable liability provisions than they currently enjoy. For example, officials from several commuter rail agencies told us that they carry less insurance than the $200 million cap. According to officials from one commuter rail agency, amending ARAA to clarify that the $200 million liability cap applies to all commuter rail agencies could result in higher levels of insurance and increased costs for this commuter rail agency.

Commuter rail agency, Amtrak, and freight railroad officials and representatives from the insurance industry identified the following three alternatives to traditional commercial insurance options that could increase the availability and affordability of liability insurance coverage:

Insurance pool. A group of organizations with similar characteristics, such as a group of commuter rail agencies, pool their assets to obtain a single commercial insurance policy, rather than obtaining individual commercial insurance policies.

Captive insurance. A privately held insurance company that issues policies, collects premiums, and pays claims for its owners, but does not offer insurance to the public. This company may be either a single-parent captive, which is owned by a single entity that insures the risks of its parent company, or a group captive, which is owned by multiple entities and whose owners are also the policyholders.
Usually, the owners of a group captive are fairly homogenous and have similar risks, such as a group of commuter rail agencies, although this is not a requirement of a captive. A captive would allow a commuter rail agency or a group of commuter rail agencies to self-insure for liability or provide liability insurance for its members outside of the traditional commercial insurance market.

Risk retention group. Similar businesses with similar risk exposures create their own liability insurance company to self-insure their risks as a group. Risk retention groups were established through the Product Liability Risk Retention Act of 1981, as amended by the Liability Risk Retention Act of 1986, which partially preempts state insurance laws by allowing risk retention groups to operate in states in which they are not domiciled, without being subject to the insurance oversight of each state outside of their state of domicile. Commuter rail agencies, therefore, could form a risk retention group without having to consider the various state laws that could affect their liability negotiations with freight railroads.

Commuter rail agencies' insurance coverage is usually structured in layers of $25 million beyond the self-insured retention, up to the total amount of insurance coverage (e.g., $200 million). The lower layers are typically more expensive because claims are likely to fall in these layers, rather than the layers covering the upper limits of the insurance coverage.

Although no commuter rail agency or freight railroad currently participates in an insurance pool with other commuter rail agencies or freight railroads, some commuter rail agency officials told us they are interested in exploring this option for facilitating negotiations.
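The layered coverage structure described here, a self-insured retention followed by successive $25 million layers up to the total policy limit, can be sketched as follows. All dollar figures below are illustrative assumptions, not taken from any agreement in the report.

```python
# Hypothetical sketch of layered liability coverage: a self-insured retention
# (SIR) followed by successive $25 million excess layers up to the total policy
# limit, as described in the text. All figures are illustrative assumptions.

def allocate_loss(loss, sir=2_000_000, layer_size=25_000_000, limit=200_000_000):
    """Return a list of (layer_name, amount) showing how a loss is absorbed."""
    allocation = [("self-insured retention", min(loss, sir))]
    remaining = max(loss - sir, 0)
    attachment = sir  # the point at which the next layer starts paying
    layer = 1
    while remaining > 0 and attachment < limit:
        portion = min(remaining, layer_size, limit - attachment)
        # "xs" is shorthand for "excess of" the attachment point.
        allocation.append((f"layer {layer} (${layer_size:,} xs ${attachment:,})", portion))
        remaining -= portion
        attachment += portion
        layer += 1
    if remaining > 0:
        allocation.append(("uninsured above policy limit", remaining))
    return allocation

for name, amount in allocate_loss(60_000_000):
    print(f"{name}: ${amount:,}")
```

The sketch also makes the pricing point concrete: most losses exhaust only the retention and the first layer or two, which is why the lower layers carry higher premiums than the upper ones.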
In addition, officials from two freight railroads said they would consider joining an insurance pool as a way to pool their risk with other railroads, and several freight railroad officials also stated they would accept pooled insurance from commuter rail agencies as a valid option for providing liability coverage.

Commuter rail agency, Amtrak, and freight railroad officials also identified several disadvantages to the various alternatives to traditional commercial insurance options identified. First, some commuter rail agency officials stated that their commuter operations were already very safe; therefore, they would not benefit from an insurance pool with other commuter rail agencies. Similarly, according to an insurance broker, larger commuter rail agencies, or those with a better risk profile, may not join a pool or might leave the pool if they could obtain cheaper insurance coverage on the commercial insurance market. Their decision not to participate in the pool would lead to adverse selection, with only smaller or riskier commuter rail agencies remaining in the pool, which could reduce some of the advantages that a pool would provide. Second, some commuter rail agency officials stated that they have not had problems obtaining insurance because of the soft, or competitive, insurance market. According to an insurance broker, pooling insurance during a soft market is likely more expensive than obtaining individual commercial insurance policies because of the administrative and capital costs of maintaining an insurance captive or risk retention group. In addition, some commuter rail agency officials stated that insurance pools can be difficult to administer and require decisions about who will participate, whether participation will be voluntary or mandatory, and what should be done if claims exceed the pool's reserves.
However, if the insurance market became less competitive, pooling might provide a more affordable option, particularly for new or smaller commuter rail agencies that could have difficulty obtaining coverage. For example, Florida set up an insurance pool because of a severe shortage of catastrophe property reinsurance capacity, stricter policy terms and conditions, and sharp increases in property catastrophe cover rates following Hurricane Andrew. Finally, one commuter rail agency official stated that it could be difficult for participating agencies to reenter the commercial insurance market if, for example, an insurance pool falls apart because it is undercapitalized; that is, there is a risk for commuter rail agencies in ending their current insurance policies to join a pool. Amtrak officials also stated that these pooled insurance options are unlikely to be viable without federal financial backing or verifiable commercial reinsurance. Furthermore, officials from two freight railroads noted they would not likely join an insurance pool with commuter rail agencies because it is not in their business interests to help pay for claims involving passenger rail.

Commuter rail agency, Amtrak, and freight railroad officials also identified several federal insurance options that could facilitate negotiations of liability and indemnity provisions. Specifically, several commuter rail agency and freight railroad officials identified catastrophic incident insurance programs as potential models for providing railroad liability insurance. These insurance programs exist to cover risks that the private sector has been unable or unwilling to provide by itself. Commuter rail agency and freight railroad officials most frequently identified the Price-Anderson Act as a model for providing railroad liability insurance.
Under this model, commuter rail agencies would obtain primary insurance up to a certain amount and could pool their assets to obtain secondary insurance coverage for incidents with claims that exceed the primary insurance amount. The federal government could also be called upon to provide additional funding if an incident's claims exceeded both the primary and secondary insurance coverage. Officials from one freight railroad stated that a Price-Anderson type of insurance program could address current limitations in the railroad insurance market because the act contemplates appropriations if additional funding is needed, among other benefits. Similarly, insurance coverage provided under other federal government programs, such as terrorism insurance, also was cited as a potential model for providing railroad liability insurance coverage. For example, officials from one freight railroad stated that the fund established to compensate victims of the September 11, 2001, terrorism attacks could be used in considering how to compensate victims of a catastrophic railroad incident.

Commuter rail agency and freight railroad officials identified several advantages of a federally backed insurance program for railroads. For example, some of these officials stated that such programs could reduce insurance premiums, could spread out risk among participating railroads, and would ensure that claims could be paid to those affected by a high-cost, or catastrophic, incident. However, as we have previously reported, such programs also could crowd out private insurers and reduce the private market's ability and willingness to provide coverage. In addition, a federal insurance program would expose the federal government to potentially significant claims on future resources, which could ultimately result in costs to taxpayers. GAO, Natural Disasters: Public Policy Options for Changing the Federal Role in Natural Catastrophe Insurance, GAO-08-7 (Washington, D.C.: Nov. 26, 2007).
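The tiered funding sequence described for a Price-Anderson type of program, primary insurance first, then pooled secondary coverage, then a potential federal contribution, can be sketched as follows. The tier amounts are invented for illustration; the report does not specify dollar figures for such a program.

```python
# Hypothetical sketch of Price-Anderson-style tiered funding for a catastrophic
# railroad incident: primary insurance, then a pooled secondary tier, then a
# potential federal contribution. All dollar amounts are invented assumptions.

def fund_claims(total_claims, primary=300_000_000, secondary=500_000_000):
    """Return how an incident's claims would be funded across the three tiers."""
    from_primary = min(total_claims, primary)
    from_secondary = min(max(total_claims - primary, 0), secondary)
    from_federal = max(total_claims - primary - secondary, 0)
    return {
        "primary": from_primary,        # each agency's own primary insurance
        "secondary_pool": from_secondary,  # pooled coverage across agencies
        "federal": from_federal,        # contemplated appropriations, if needed
    }

print(fund_claims(900_000_000))
```

The point of the structure is that the federal tier is reached only after both insurance tiers are exhausted, which is how the act's appropriations mechanism is described above.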
incentives that encourage freight railroads to cooperate with commuter rail agencies, such as tax incentives or service expansions in other areas, could do more to facilitate negotiations, according to these officials. Officials from a few commuter rail agencies also stated that having an independent entity mediate liability and indemnity negotiations between commuter rail agencies and freight railroads could be helpful if the parties reached an impasse. For example, officials from one commuter rail agency stated that a mediating body could facilitate negotiations by requiring the freight railroad to consider the commuter rail agency’s position. A provision in the Passenger Rail Investment and Improvement Act of 2008 recently extended STB’s role to mediate disputes between public authorities, including commuter rail agencies and host carriers. However, the mediation is nonbinding—that is, if the dispute cannot be resolved, there are no additional mechanisms in place to compel a resolution. A few commuter rail agency and freight railroad officials identified separating passenger and freight traffic as an ideal, but cost-prohibitive, option for facilitating negotiations on liability and indemnity provisions. Commuter and freight traffic could be separated temporally, with commuter rail agencies and freight railroads operating at different times of the day, or physically, with commuter rail agencies and freight railroads operating on separate tracks either in the same corridor or in separate corridors. For example, the Utah Transit Authority purchased rights-of-way from Union Pacific and built new tracks in a parallel alignment with Union Pacific tracks. As a result, the commuter and freight operations do not share the same track, with a small exception, limiting the potential for a collision.
Similarly, officials from one freight railroad stated that in negotiations with an existing and a new commuter rail agency, they are working to shift some of the freight operations onto different routes to minimize the interaction between commuter and freight trains. Separating passenger and freight infrastructure also could lower insurance costs for commuter rail agencies because the potential for a catastrophic incident would be significantly reduced. According to officials from a few freight railroads, the potential for a catastrophic incident, although small, drives the indemnification provisions and insurance requirements of passenger and freight rail agreements. Although the Utah Transit Authority was able to purchase rights-of-way from Union Pacific, in most cases, purchasing rights-of-way or constructing new tracks is cost-prohibitive and time-consuming for commuter rail agencies. For example, officials from one commuter rail agency examined whether to build new tracks for initiating its service, but the costs were much higher than the costs of buying the tracks and sharing them with the freight railroad and paying the associated insurance costs. In addition, capacity constraints, whether they are based on future growth projections or geographic limitations, make it difficult to separate passenger and freight traffic—through either temporal or physical separation. For example, officials from one commuter rail agency stated that the geography surrounding their commuter service makes capacity expansions very difficult and costly. The expeditious flow of people and goods through our transportation system is vital to the economic well-being of the nation. The movement of people and goods by rail is an important part of the nation’s transportation system and is likely to play an even greater role in the future as companies and communities look for ways to avoid highway congestion.
An attractive feature of both commuter rail and intercity passenger rail is that they can operate on the same infrastructure as freight railroads. However, mixing passenger and freight traffic entails a certain level of risk. Fortunately, accidents are rare, undoubtedly due in part to the safety focus of passenger and freight rail operators, but they can be deadly, as evidenced by the September 2008 Metrolink accident. As owners of most of the rail infrastructure in the United States, freight railroads determine whether to allow commuter rail operations on their infrastructure and set the terms and conditions, including the liability and indemnity provisions, of this access. To protect their business and shareholders, freight railroads understandably seek to shift the risks associated with allowing passenger traffic on freight-owned infrastructure to the commuter rail agencies. By accepting some of the liability and indemnity provisions demanded by freight railroads, commuter rail agencies expose themselves, and ultimately taxpayers, to significant costs. Rejecting the liability and indemnity provisions sought by freight railroads, however, can cause negotiations to stall or fail, meaning that new commuter rail systems or expansions may not be realized. Different options exist to help facilitate negotiations over liability and indemnity. All of these options have advantages and disadvantages that must be carefully considered so that one form of rail does not succeed at the expense of the other. We provided a draft of this report to DOT, STB, and Amtrak for their review and comment prior to finalizing the report. Amtrak provided technical comments, which we incorporated where appropriate. DOT and STB had no comments on the draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to other interested congressional committees; the Secretary of the Department of Transportation; the President and CEO of Amtrak; the Chief of Staff of the Surface Transportation Board; and other parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-4431 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To gather information pertaining to our objectives, we conducted semistructured interviews with officials from all identified existing commuter rail agencies, proposed commuter rail agencies, Class I freight railroads, Amtrak, and state departments of transportation. We asked about the liability and indemnity provisions between railroads, the financial impact of these provisions, how the courts had interpreted these provisions, the factors that had influenced negotiations, and ways to facilitate negotiations. We conducted site visits to three existing commuter rail agencies, two proposed commuter rail agencies, and two Class I freight railroads. We selected existing commuter rail agencies that had agreements for access to rights-of-way, maintenance-of-way, and maintenance-of-equipment or operations with a Class I freight railroad and also had contracts with Amtrak. We also selected existing commuter rail agencies that had agreements with different freight railroads to determine if agreements varied across Class I freight railroads. (Because Class I freight railroads generally own infrastructure in particular regions of the country, we also found that this criterion gave us geographic diversity for our site visits.)
In addition, we selected existing commuter rail agencies on the basis of their ridership levels to ensure that we visited at least one commuter rail agency in the top third of ridership, middle third of ridership, and bottom third of ridership. We selected proposed commuter rail agencies to visit that are planning to enter into contracts with different Class I freight railroads in the next 5 years. In addition, we selected sites to visit at least one commuter rail agency that proposes to purchase freight tracks and at least one commuter rail agency that proposes to operate on freight tracks. Finally, we chose to visit Class I freight railroads with the highest number of contracts with commuter rail agencies. To identify liability and indemnity provisions in agreements among commuter rail agencies, freight railroads, and Amtrak and the resulting implications of those provisions, we requested and analyzed the liability and indemnity sections of agreements between commuter rail agencies and Class I freight railroads, commuter rail agencies and Amtrak, and Amtrak and freight railroads. We analyzed and organized the provisions in these contracts and excluded commuter rail agencies that did not have agreements with either a Class I freight railroad or Amtrak. In addition, we included agreements from two proposed commuter rail agencies in our analysis because their agreements with freight railroads were final. However, we did not include information from the other proposed commuter rail agencies because they either did not have an agreement with a Class I freight railroad or Amtrak or because the agreements were still preliminary and subject to change. To ensure the reliability of the information we obtained, we corroborated information provided by commuter rail agencies, Amtrak, and Class I freight railroads. For example, we compared agreements received from a commuter rail agency with agreements received from the freight railroad to ensure that the agreements were consistent. We conducted legislative research to identify federal statutes, federal and state court cases, and STB decisions that related to contractual liability and indemnity provisions of passenger and freight railroad agreements. We also asked Amtrak, STB, the commuter rail agencies, and the freight railroads for assistance in identifying these types of cases. In addition, we asked commuter rail agencies and state departments of transportation for state statutes that had an impact on the negotiation of contractual liability and indemnity provisions. We then synthesized and summarized the information that we obtained. To identify factors that affect negotiations of liability and indemnity provisions among passenger and freight railroads, we conducted a content analysis of the information we collected from our semistructured interviews and site visits. This content analysis captured the extent to which representatives from existing and proposed commuter rail agencies, freight railroads, and Amtrak identified particular factors that affected their negotiations and the associated effects. In addition to determining the factors that were most commonly cited, this analysis enabled us to determine whether certain factors reportedly had little effect on the negotiations. We also interviewed four state departments of transportation, referred to us by commuter rail agencies, about state laws that apply to liability and indemnity provisions and the effects of such laws on those provisions. To identify potential options for facilitating negotiations of liability and indemnity provisions among passenger and freight railroads, we conducted a content analysis of the information we collected from our semistructured interviews and site visits.
This content analysis captured potential options mentioned by the entities we interviewed, the associated advantages and disadvantages from the perspectives of the entities interviewed, and the change in the federal role needed to execute the options. We also asked FRA and FTA officials and STB staff about the federal role in railroad negotiations and the potential impact of some of the options identified on the federal role. Furthermore, we interviewed three insurance brokers who represented commuter rail agencies in obtaining liability insurance to provide context for the process of securing insurance, the process of calculating premium costs, and alternative insurance mechanisms that could be applied to the railroad industry. We also reviewed prior GAO reports on insurance markets for catastrophic incidents to identify comparable models for railroad liability insurance. [Table omitted due to formatting loss: liability and indemnity provisions in agreements between commuter rail agencies and Class I freight railroads. For each agreement, the table indicated whether the contract is fault-based or no-fault, the provisions for a freight-commuter collision, and whether specific types of conduct are excluded or explicitly included. Most entries are no fault, with each party covering its own liability and with third-party liability either shared or fault-based; in some agreements the commuter rail agency covers all liability.]
[Table continued: additional agreement entries, including those with the Canadian Pacific Railway (CP), follow the same pattern, with most contracts no fault and each party covering its own liability; in one agreement the freight railroad covers its own liability and 50 percent of other liability, and two agreements are fault-based. Table notes, partially recoverable, indicate among other things that each row of the table represents a commuter rail agency and freight railroad relationship, although there may be more than one contract per relationship; that Metra’s provisions are generally no fault except in collision, with Metra covering all liability in excess of $2 million; and that Metrolink’s agreements with UP are no fault for liability levels up to $25 million and fault-based in excess of $25 million, up to $100 million or $125 million depending on the tracks.]
[Table notes continued, partially recoverable: the New Mexico Department of Transportation is the owner; UP indemnifies TRE for crossings; FDOT and CSX have an agreement, contingent on legislative approval, as part of an agreement to add tracks for the Central Florida Commuter Rail; CATS did not have contracts for liability and indemnity; and the Westside Express Service (WES) in Oregon had a contract with a non-Class I railroad. Second table omitted due to formatting loss: liability and indemnity provisions in agreements between commuter rail agencies and Amtrak, indicating whether each contract is fault-based or no-fault, the provisions for an Amtrak-commuter collision, and whether specific types of conduct are excluded or explicitly included; in most agreements the commuter rail agency covers all liability or each party covers its own. Notes to this table indicate, for example, that Amtrak is responsible for all liability above $75 million to $200 million on the track in Florida on which Tri-Rail and Amtrak operate; that MBTA assumes liability for its equipment, employees, and passengers except when an incident is the result of Amtrak’s sole negligence or omission; and that commuter service is operated for the Rhode Island Public Rail Corporation on Amtrak-owned lines.]
National Railroad Passenger Corp. v. Consolidated Rail Corp., 698 F. Supp. 951 (D.D.C. 1988), vacated on other grounds, 892 F.2d 1066 (D.C. Cir. 1990). Conclusion: A U.S. District Court ruled that the indemnification provisions in an operating agreement between Amtrak and Conrail could not be enforced where there were allegations and a showing of gross negligence, recklessness, willful and wanton misconduct, intentional misconduct, or conduct so serious as to warrant the imposition of punitive damages. Facts: Amtrak owned the segment of the Northeast Corridor that runs between Washington, D.C., and New York. Conrail used the Northeast Corridor pursuant to a freight operating agreement. In January 1987, an Amtrak train collided with three Conrail locomotives that had entered the path of the high-speed Amtrak passenger train. The accident resulted in 16 deaths and more than 350 injuries. Just before crossing over onto the track being used by the Amtrak train, the Conrail engineer and brakeman in control of the Conrail locomotives had failed to heed a series of slow and stop signals at or before a track juncture near Chase, Maryland. The Conrail engineer admitted to the following: that the Conrail crew had recently used marijuana, was speeding, was operating a train in which the cab signal had been rendered inoperative because the light bulb had been removed from it, and was operating a train in which an audible warning device had been intentionally disabled. He also admitted that he had failed to call signals to his brakeman, as required by applicable safety regulations, that he had failed to maintain a proper lookout, and that he had not adhered to the cab signals or the wayside signals. The engineer pleaded guilty to manslaughter and was given the maximum penalty for manslaughter, 5 years’ imprisonment and $1,000 in fines.
The plaintiffs in many of the cases brought against Conrail and Amtrak alleged that Conrail or Amtrak or both committed reckless, wanton, willful, or grossly negligent acts and asserted entitlement to compensatory as well as punitive damages. Amtrak brought this action before the court seeking a declaration of the rights and obligations of the parties with respect to the indemnification provisions of the freight operating agreement. “Amtrak agrees to indemnify and save harmless Conrail and Conrail Employees, irrespective of any negligence or fault of Conrail or Conrail Employees, or howsoever the same shall occur or be caused, from any and all liability for injury to or death of any Amtrak Employee, or for loss of, damage to, or destruction of the property of any such Amtrak Employee.” “Amtrak agrees to indemnify and save harmless Conrail and Conrail Employees, irrespective of any negligence or fault of Conrail or Conrail Employees, or howsoever the same shall occur or be caused, from any and all liability for injuries to or death of any Amtrak Passenger and for loss of, damage to, or destruction of any property of any such passenger.” The issue presented in the case was “whether Amtrak must indemnify Conrail for any damages—compensatory, punitive or exemplary—arising out of the Chase accident that are founded upon reckless, wanton, willful, or grossly negligent acts by Conrail.” The court found that the parties did not clearly manifest a mutual intent, at the time they executed the freight operating agreement or any previous agreement between the parties, for the indemnification provisions to cover accidents caused by gross negligence, recklessness, or wanton and willful misconduct warranting the imposition of punitive damages. In addition, the court found that public policy would not allow enforcement of indemnification provisions that appear to cover such extreme misconduct because serious and significant disincentives to railroad safety would ensue.
Under District of Columbia law, contractual provisions may be invalidated when they are contrary to public policy. Accordingly, the court ruled that Amtrak was not required to indemnify Conrail where there were allegations and a showing of gross negligence, recklessness, willful and wanton misconduct, intentional misconduct, or conduct so serious as to warrant the imposition of punitive damages. Apfelbaum v. National Railroad Passenger Corp., No. 00-178, 2002 WL 32342481, 2002 U.S. Dist. LEXIS 20321 (E.D. Pa. 2002). Conclusion: The Southeastern Pennsylvania Transportation Authority (SEPTA) entered into a contract in which it indemnified Amtrak against any and all liability arising from the use of 30th Street Station in Philadelphia. Under Pennsylvania law, a party may bring a claim against the Commonwealth of Pennsylvania only if the basis for the claim falls within one of the exceptions to immunity enumerated in the Pennsylvania Sovereign Immunity Act. The U.S. District Court for the Eastern District of Pennsylvania found that the claim by the plaintiff did not fall within one of the statutory exceptions to immunity. Accordingly, the court found that the contractually negotiated indemnity agreement was unenforceable, stating that a Commonwealth agency could not waive its sovereign immunity by any procedural device, including a contract, and expose itself to liability prevented by the legislature. Facts: An individual alleged that she slipped and fell in the 30th Street Station in Philadelphia, Pennsylvania. The 30th Street Station is owned by Amtrak, and a portion is leased to SEPTA through a lease agreement. As part of the lease agreement, SEPTA agreed to indemnify Amtrak against any and all liability arising from or in connection with the use or occupation of 30th Street Station. The plaintiff named several defendants, including SEPTA, Amtrak, and the cleaning service companies responsible for maintaining the station.
SEPTA moved for summary judgment claiming sovereign immunity. The other defendants argued that SEPTA waived its sovereign immunity when it agreed to indemnify Amtrak. Under Pennsylvania law, the Commonwealth of Pennsylvania enjoys immunity from suit except when the General Assembly has, by statute, expressly waived the immunity. The court found that because the alleged dangerous condition resulting in injury was not caused by a defect in the property owned by Pennsylvania, the claim did not fall within any of the nine exceptions to immunity enumerated in the Pennsylvania Sovereign Immunity Act. Accordingly, the court found that the indemnity agreement was unenforceable. Maryland Transit Admin. v. National Railroad Passenger Corp., 372 F. Supp. 2d 478 (D. Md. 2005). Conclusion: Pursuant to an operating agreement between the Maryland Transit Administration (MTA) and Amtrak, MTA had agreed to indemnify Amtrak for any liability except that which was caused by the gross negligence of Amtrak. An arbitration panel found Amtrak’s actions to be grossly negligent. The court, however, upheld a second arbitration panel’s ruling that MTA was responsible for providing Amtrak with the insurance coverage specified in the agreement notwithstanding the first arbitration panel’s finding that Amtrak was grossly negligent. Facts: An Amtrak passenger train proceeded through a stop indication and collided with a commuter train, causing significant damage. An arbitration panel found that an Amtrak engineer was guilty of gross negligence in causing the collision and, on the basis of the language of the operating agreement, determined that MTA was relieved of any responsibility to indemnify Amtrak. The agreement essentially provided that MTA agreed to indemnify Amtrak for any liability that would not have arisen but for the existence of the commuter rail service, except for any liability that was caused by the gross negligence of Amtrak.
A second arbitration panel independently found that MTA was responsible for providing insurance coverage to Amtrak notwithstanding the fact that Amtrak was found to be grossly negligent by the first arbitration panel. Amtrak sought confirmation from the United States District Court that MTA was required to provide insurance coverage to Amtrak notwithstanding the Amtrak engineer’s gross negligence, and MTA petitioned to vacate this award. The court confirmed the arbitration panel’s finding regarding MTA’s responsibility to provide insurance to Amtrak. O&G Industries v. National Railroad Passenger Corp., 537 F.3d 153 (2d Cir. 2008), petition for cert. filed (U.S. Jan. 14, 2009) (No. 08-895). Conclusion: The United States Court of Appeals for the Second Circuit held that a Connecticut statute that nullifies indemnity agreements insulating a party from its own negligence was preempted to the extent that it conflicted with 49 U.S.C. § 28103 (the provision of the Amtrak Reform and Accountability Act of 1997 that states that a provider of passenger rail transportation may enter into contracts that allocate financial responsibility for claims). The jury in the lower-court case found that Amtrak’s conduct was not reckless and Amtrak was not required to pay punitive damages, but the court held that even if the jury had found Amtrak’s conduct to be reckless, O&G Industries (O&G) would still be required to indemnify Amtrak. The court stated that it was the intent of Congress to allow Amtrak to enter into indemnity agreements with respect to any claims against Amtrak. The court also held that Amtrak could be indemnified against third-party as well as passenger claims. Facts: O&G, a commercial construction company, contracted with the Connecticut Department of Transportation to perform work related to I-95 as it passed over Amtrak’s tracks in East Haven. Amtrak and O&G entered into a contract that permitted O&G to enter onto Amtrak property to perform the work.
In the contract, O&G agreed to indemnify Amtrak. The indemnity agreement stated essentially that O&G would indemnify Amtrak irrespective of its negligence or fault from any and all losses and liabilities “arising out of in any degree directly or indirectly caused by or resulting from activities of or work performed by . . . .” The agreement also stated that O&G would not indemnify Amtrak where the negligence or fault of Amtrak was the sole causal fault, except for injury or death of employees of Amtrak and its contractors. An Amtrak train struck and killed an O&G employee who was working on the bridge. Amtrak brought a suit against O&G for indemnification. O&G argued that Amtrak’s claim for contractual indemnification was barred by Connecticut General Statute § 52-572k and Connecticut public policy. The statute, based on public policy considerations, bars indemnification agreements in construction contracts that shield a party from its own negligence. Amtrak responded that 49 U.S.C. § 28103, which permits Amtrak to enter into indemnification agreements, preempted the Connecticut statute. The jury found that Amtrak was not reckless and so only had to pay compensatory damages and not punitive damages. The jury also found, however, that Amtrak had breached a material term of the contract by failing to safely operate its train in the area of the work site, relieving O&G from an obligation to indemnify Amtrak. Amtrak moved for a judgment as a matter of law that O&G must indemnify it. Amtrak stated that notwithstanding the jury verdict, the facts of the accident fell within the wording of the indemnification agreement, and that O&G was legally obligated to indemnify Amtrak. The lower court agreed and required O&G to indemnify Amtrak. O&G appealed its case to the United States Court of Appeals for the Second Circuit.
The court held that the Connecticut statute that nullified indemnity agreements that insulated a party from its own negligence was preempted to the extent that it conflicted with federal law, and the facts of the accident fell within the wording of the indemnification agreement, so that O&G was legally obligated to indemnify Amtrak. “As Judge Dorsey correctly noted in granting summary judgment to Amtrak, it was precisely the doubts cast by the Conrail decision over the validity of indemnity agreements by railroad parties that prompted Congress to enact § 28103(b) . . . . The broad, unqualified language in § 28103(b) leaves no doubt as to the specific intent of Congress to sanction indemnity arrangements between Amtrak ‘and other parties’ with respect to any claims against Amtrak. See S.Rep. No. 105-85, at 5 (1997).” Deweese v. National Railroad Passenger Corp., 2009 WL 222986, 2009 U.S. Dist. LEXIS 6451 (E.D. Pa. 2009) (Memorandum of Decision). Conclusion: The court held that the sovereign immunity of the Commonwealth had been preempted by 49 U.S.C. § 28103(b). The court agreed with Amtrak and against SEPTA. Facts: The plaintiff, Deweese, went to the Crum Lynne train station in Ridley Park, Pennsylvania, to board a SEPTA train bound for Philadelphia. When he arrived at the station, he learned that he had to board the train from the tracks on the opposite side of where he had entered the station. He attempted to cross the tracks and was struck by an Amtrak train, resulting in serious injuries. Amtrak owned the Crum Lynne station, and SEPTA leased the station from Amtrak. The railroad tracks at the station were owned by Amtrak as well. As part of the lease agreement between Amtrak and SEPTA, SEPTA agreed to indemnify Amtrak for all liability which would not have occurred but for the existence of the commuter service provided by SEPTA. The agreement for access to the rail tracks contained a similar provision. Mass Transit Administration v. CSX Transportation Inc., 708 A.2d 298 (Md.
1998). Conclusion: The Court of Appeals of Maryland held that a Maryland statute (which provides that an indemnification in a contract pertaining to construction is void and against public policy if the party indemnified is negligent) applies only to construction contracts, not to indemnification provisions in procurement contracts. Accordingly, the Mass Transit Administration (now the Maryland Transit Administration) was required to indemnify CSX Transportation (CSX). Facts: A Maryland Rail Commuter (MARC) train operated by CSX struck a backhoe that was performing maintenance on the track. The operator of the backhoe was a CSX contractor. No one was injured. The backhoe operator sued CSX for the value of the backhoe, and the parties settled. CSX then claimed indemnification from MTA, of which MARC is a part. MTA argued that the indemnification provision in the commuter rail passenger service agreement between CSX and MTA was void based on the Maryland statute that provides that an indemnification in a contract pertaining to the construction, alteration, repair, or maintenance of a building, structure, appurtenance, or appliance for damages arising from the negligence of the party indemnified is against public policy and is void and unenforceable. The Court of Appeals of Maryland held that MTA was required to indemnify CSX. The court stated that the Maryland statute applies only to construction contracts, not to indemnification provisions in procurement contracts, and the contract did not become a construction contract because of the collision between a train and a backhoe. Pacific Insurance Co. v. Liberty Mutual Insurance Co., 956 A.2d 1246 (Del. 2008). Conclusion: The Supreme Court of Delaware held that the insurance policy providing Conrail with coverage would not violate a Delaware statute providing that public policy precludes contractual indemnification for a party’s own negligence. The court held that the insurance purchased
The court held that the insurance purchased to protect Conrail, once issued, could not be held unenforceable against Conrail. Conrail is the only rail entity in this case, but the case is relevant in that it holds that if a state law prohibits indemnification for certain types of conduct, insurance provisions still must be honored if this type of conduct occurs.

Facts: This case involved an insurance coverage dispute that arose from fatal accidents that occurred on a railroad crossing owned by Conrail during a road construction project carried out by the Delaware Department of Transportation. Two wrongful death actions were filed as a result. These actions were settled, but a dispute over coverage under two insurance policies remained. One of the insurance companies argued, among other things, that it was not required to provide the contractual coverage because of a state statute that precluded contractual indemnification for a party’s own negligence. The court held that the insurance purchased to protect Conrail, once issued, could not be held unenforceable against the indemnified party, even where the party was found to be negligent.

Massachusetts Bay Transportation Authority and Massachusetts Bay Commuter Railroad Co. v. CSX Transportation Inc. and Cohenno Inc. (Super. Ct. Civ. Action 2008-1762-BLS1) (Memorandum and Order on Defendant CSX Transportation, Inc.’s Motions to Dismiss)

Conclusion: The Business Litigation Session of the Massachusetts Superior Court held that provisions that indemnify CSX are unenforceable on the basis of Massachusetts common law to the extent that the contractual provisions indemnify CSX against its own gross negligence or reckless or intentional conduct.

Facts: In March 2008, a freight car that had been delivered by CSX to a shipping depot rolled down the siding at the top of a hill, where it crashed into a commuter train with roughly 300 passengers, injuring many.
The 1985 trackage rights agreement between the Massachusetts Bay Transportation Authority (MBTA) and CSX states that MBTA will indemnify CSX “irrespective of any negligence or fault . . . from any and all liability, damage, or expense of any kind” arising out of damages or injuries to any MBTA employee or other contractor of MBTA or out of damage to MBTA property. MBTA, and its contractor MBCR, filed a lawsuit with the Business Litigation Session of the Massachusetts Superior Court seeking a declaration that CSX was liable for the damages arising from the accident. CSX filed a motion to dismiss for failure to state a claim.

The Rail Passenger Service Act of 1970 provides that Amtrak and freight railroads may contract for Amtrak’s use of the freight railroads’ facilities. If the parties cannot agree upon a contract, the Surface Transportation Board (STB) may order access and prescribe the terms and conditions of the contract, including compensation.

Application of the National Railroad Passenger Corp. under 49 U.S.C. 24309(a) — Springfield Terminal Railway Co., Boston and Maine Corp. and Portland Terminal Co., 3 S.T.B. 157 (1998)

Conclusion: Amtrak petitioned STB to set terms and compensation for Amtrak’s use of track owned by the freight railroads in the Guilford Rail System. Amtrak agreed to indemnify the freight railroads for certain standard risks. STB determined that other residual damages arising out of Amtrak’s operations were an incremental cost for which Guilford was entitled to compensation. In addition, STB refused to require Amtrak to reimburse the freight carriers for damages due to the freight carriers’ gross negligence, recklessness, or wanton or willful conduct.

Facts: Amtrak petitioned STB to set the terms and compensation for Amtrak’s use of freight carriers’ lines to provide passenger service between Boston and Portland, Maine.
Amtrak asked STB, in prescribing the terms and conditions, to adopt Amtrak’s standard liability agreement with freight railroads, known as section 7.2. This section essentially allocates liability on a no-fault basis; that is, Amtrak agrees to indemnify the host railroad against liability resulting from any damages that occur to Amtrak employees, equipment, and passengers, regardless of fault, and the host railroad agrees to indemnify Amtrak against any liability resulting from damages to the host railroad’s employees or equipment, regardless of fault. In the proposed agreement at issue in this case, Amtrak agreed to assume full responsibility for the following types of damages: (1) injury or death to Amtrak employees or damage to their property, (2) injuries or death to Amtrak passengers and damage to their property, (3) damage to Amtrak equipment or property, and (4) injuries or death to any person or damage to property (other than property of Guilford and of its employees) proximately caused as a result of a collision of a vehicle or a person with an Amtrak train at a grade crossing. Amtrak proposed that the freight carriers assume liability for the following types of damages that could occur because of Amtrak’s presence on the tracks, in return for a payment of approximately $17,000 per year: injury to trespassers and licensees; general indirect damages, such as environmental damage to houses near the tracks; and injuries or death to Guilford employees or damage to their property or to the property of Guilford. STB found that the liability for these “residual damages” arising out of Amtrak operations was an incremental cost for which the carriers were entitled to compensation.
STB directed Amtrak to either fully indemnify the freight railroad for the residual damage categories, as it had agreed to do for other damage categories; purchase insurance to cover the freight carrier’s assumption of liability for all such costs (i.e., without deductibles or low caps, even if that required the purchase of more than one policy); or combine the first two methods (by, e.g., purchasing insurance with a deductible or low cap, but agreeing to indemnify the freight railroads for damages that were subject to the deductible or cap). In addition, STB would not require Amtrak to reimburse the freight carriers for damages due to the freight carriers’ gross negligence, recklessness, or wanton or willful conduct. STB stated that the statute requires that compensation levels reflect safety considerations, and thus the freight carriers should be encouraged to conduct their operations safely. It also stated that public policy generally disfavors requiring one party to be responsible for another’s gross negligence or willful and wanton misconduct.

Boston and Maine Corp. and Springfield Terminal Railway Co. v. New England Central Railroad, STB Finance Docket No. 34612 (2006)

Conclusion: STB held that the indemnity provision in the operating agreement between Boston and Maine (B&M) and the New England Central Railroad (NECR) could not be used to indemnify NECR, which had been found to be grossly negligent, since such an interpretation would “contravene well-established precedent that disfavors such indemnification provisions” and would be contrary to the rail transportation policy, which requires STB to “promote a safe and efficient transportation system” and “operate facilities and equipment without detriment to the public health and safety.”

Facts: Pursuant to a previous Interstate Commerce Commission (ICC) order, B&M conveyed its “Connecticut River Line” to Amtrak subject to Amtrak’s granting B&M trackage rights on the line.
Amtrak transferred the line to the Central Vermont Railway, which subsequently was purchased by NECR. NECR also took over the trackage agreement. A B&M train operating over the Connecticut River Line derailed. B&M sued NECR for breach of contract and tortious injury due to gross negligence, recklessness, and willful misconduct concerning NECR’s alleged failure to maintain the line. NECR responded that any claims based on the condition of the track were barred by section 7.1 of the trackage rights order issued by ICC. B&M argued that NECR’s interpretation of section 7.1 was contrary to public policy because it would apportion all responsibility for the derailment to B&M even if the derailment was caused solely by grossly negligent, reckless, or willful misconduct by NECR. STB was called upon to determine whether ICC intended section 7.1 to indemnify for gross negligence.

STB held that section 7.1 should not be construed to absolve NECR of gross negligence since such an interpretation would “contravene well-established precedent that disfavors such indemnification provisions” and would be contrary to provisions in the federal government’s rail transportation policy that require STB to “promote a safe and efficient transportation system” and “operate facilities and equipment without detriment to the public health and safety.”

In addition to the contact named above, Nikki Clowers, Assistant Director; Alana Finley; Brandon Haller; Hannah Laufe; Nancy Lueke; and Aron Szapiro made key contributions to this report.
The National Railroad Passenger Corporation (Amtrak) and commuter rail agencies often share rights-of-way with each other and with freight railroads. Negotiating agreements that govern the shared use of infrastructure can be challenging, especially on issues such as liability and indemnity. As requested, this report discusses (1) the liability and indemnity provisions in agreements among passenger and freight railroads, and the resulting implications of these provisions; (2) federal and state court opinions and Surface Transportation Board (STB) decisions related to contractual liability and indemnity provisions of passenger and freight railroad agreements; (3) factors that influence the negotiations of liability and indemnity provisions among passenger and freight railroads; and (4) potential options for facilitating negotiations of liability and indemnity provisions. GAO obtained information from all existing and proposed commuter rail agencies, Amtrak, and major freight railroads through site visits or telephone interviews. GAO analyzed the liability and indemnity provisions in agreements between commuter rail agencies, Amtrak, and freight railroads. GAO also reviewed federal and state laws, STB decisions, and court cases related to liability and indemnity provisions. The Department of Transportation and STB had no comments on the report. Amtrak provided technical comments, which we incorporated where appropriate. GAO is not making recommendations in this report. The liability and indemnity provisions in agreements between commuter rail agencies and freight railroads differ, but commuter rail agencies generally assume most of the financial risk for commuter operations. For example, most provisions assign liability to a particular entity regardless of fault--that is, commuter rail agencies could be responsible for paying for certain claims associated with accidents caused by a freight railroad. 
The provisions also vary on whether they exclude certain types of conduct, such as gross negligence, from the agreements. The provisions also require that commuter rail agencies carry varying levels of insurance. Because commuter rail agencies are publicly subsidized, some liability and indemnity provisions can expose taxpayers as well as commuter rail agencies to significant costs. Federal statutes, STB decisions, and federal court decisions are instructive in interpreting liability and indemnity provisions, but questions remain. In response to industry concerns, Congress enacted the Amtrak Reform and Accountability Act of 1997 (ARAA), which limited overall damages from passenger claims to $200 million and explicitly authorized passenger rail providers to enter into indemnification agreements. However, questions remain about the enforceability and appropriateness of indemnifying an entity for its own gross negligence and willful misconduct. A federal court of appeals, in a recent decision regarding Amtrak, overturned an earlier court opinion that had held that it was against public policy to indemnify for gross negligence and willful misconduct because doing so could undermine rail safety. STB, however, has held, when setting the terms of agreements between Amtrak and freight railroads, that it is against public policy to indemnify an agency against its own gross negligence or willful misconduct. Several factors influence the negotiations of liability and indemnity provisions, including the freight railroads' business perspective, the financial conditions at the time of negotiations, the level of awareness or concern about liability, and federal and state laws. For example, some freight railroad officials told us they are requesting more insurance coverage for new commuter rail projects than what they had required in some past agreements, in part, because ARAA's liability cap has not been tested in court and does not cover third-party claims.
Statutes governing Amtrak also influence the negotiations between Amtrak and other railroads. Options for facilitating negotiations on liability and indemnity provisions include amending ARAA; exploring alternatives to traditional commercial insurance; providing commuter rail agencies with more leverage in negotiations; and separating passenger and freight traffic, either physically or by time of day. For example, officials from commuter rail agencies and freight railroads suggested amending ARAA to expand the scope of the liability cap to include third-party claims. Although each of these options could facilitate negotiations on liability and indemnity provisions, each option has advantages and disadvantages to consider.
DOD expects unmanned aircraft systems to transform the battlespace with innovative tactics, techniques, and procedures and take on the so-called “dull, dirty, and dangerous missions” without putting pilots in harm’s way. The use of unmanned aircraft systems in military operations has increased rapidly since the fall of 2001, with some notable successes. Potential missions considered appropriate for unmanned systems have expanded from the original focus on the intelligence, surveillance, and reconnaissance mission area to limited tactical strike capabilities with projected plans for persistent ground attack, electronic warfare, and suppression of enemy air defenses. The Global Hawk, Predator, and Joint Unmanned Combat Air Systems (J-UCAS) are DOD’s three largest unmanned aircraft programs in terms of cost. (For more details on the three systems and their performance characteristics, see app. I.) Since the terror attacks in September 2001, defense investments in unmanned aircraft systems have increased exponentially. In the 10 years prior to the attacks, DOD invested a total of about $3.6 billion, compared to the nearly $24 billion it plans to invest in the subsequent 10 years. DOD currently has about 250 unmanned aircraft in inventory and plans to increase its inventory to 675 by 2010 and to 1,400 by 2015. (These numbers reflect the larger systems and do not count the numerous small and hand-launched systems used by ground forces.) In the fiscal year 2001 Defense Authorization Act, Congress set a goal that by 2010, one-third of DOD’s deep strike force will be unmanned in order to perform this dangerous mission; this would significantly increase the number of unmanned aircraft in DOD’s inventory. In addition, foreign countries and other federal agencies, including the Department of Homeland Security and the Interior Department, are expressing interest in unmanned aircraft systems.
Table 1 shows the funding in the fiscal year 2006 Defense budget for research, development, procurement, and support of current and planned unmanned aircraft systems. The 2006 Quadrennial Defense Review contained a number of decisions that would further expand investments in unmanned systems and their use in military operations. The report states DOD’s intent to nearly double unmanned aircraft coverage by accelerating the acquisition of the Predator and the Global Hawk. It also restructures the J-UCAS program to develop an unmanned, long-range carrier-based aircraft to increase naval reach and persistence. It further establishes a plan to develop a new land-based, penetrating long-range strike capability by 2018 and sets a goal that about 45 percent of the future long-range strike force be unmanned. Officials told us that elements of the J-UCAS effort will be considered in Air Force analyses and efforts supporting future long-range strike capability. Unmanned aircraft systems are being developed under DOD’s acquisition policy, which emphasizes a knowledge-based, evolutionary approach to acquiring major weapon systems. This approach separates technology development from product development, as suggested by best practices. In implementing the policy, a critical first step to success is formulating a comprehensive business case that justifies the investment decision to begin development. The business case should validate warfighter needs and match product requirements to available resources, including proven technologies, sufficient engineering capabilities, adequate time, and adequate funds. Several basic factors are critical to establishing a sound business case for undertaking a new product development. First, the user’s needs must be accurately defined, alternative approaches to satisfying these needs properly analyzed, and quantities needed for the chosen system must be well understood. 
Second, the developed product must be producible at a cost that matches the users’ expectations and budgetary resources. Finally, the developer must have the resources to design the product with the features that the customer wants and to deliver it when it is needed. If circumstances substantially change, the business case should be revisited and revised as appropriate. If the financial, material, and intellectual resources to develop the product are not available, a program should not move forward. Best practices indicate that the business case is best accomplished using an evolutionary (or incremental) approach that plans to deliver an early but relevant capability first, followed by definable and doable increments that ultimately achieve the full capability. Each increment is expected to have its own decision milestones and baseline—cost, schedule, and performance requirements. An acquisition strategy is the disciplined process employed by the service program office and prime contractor to manage the acquisition, deliver knowledge at key junctures to make further investments, and continue the program. The strategy implements the business case; sets schedules for developing, designing, and producing the weapon system; and establishes exit/entrance criteria to guide acquisition managers and executives through key program milestones to control and oversee the acquisition. While the Global Hawk and Predator both began as successful advanced concept technology demonstration (ACTD) programs, they have since adopted different strategies in system development that have led to different outcomes. The Global Hawk adopted a riskier acquisition strategy that has led to significant cost, schedule, and performance problems. Conversely, the Predator program pursued a more structured and evolutionary strategy more consistent with DOD’s acquisition policy guidance and has thus far experienced fewer negative outcomes.
Following a successful ACTD, DOD approved an acquisition program in 2001 to incrementally develop and acquire systems similar to the demonstrators, now designated the RQ-4A (Global Hawk A). In 2002, the Global Hawk program was substantially restructured to more quickly develop and field a new, larger, and more advanced aircraft, designated the RQ-4B (Global Hawk B). The new acquisition strategy was now highly concurrent, overlapping technology development, design, testing, and production. Our November 2004 report on Global Hawk raised concerns about the revised strategy and its elevated risks of poor cost, schedule, and performance outcomes. We recommended limiting procurement to only those aircraft needed for testing to allow product knowledge to more fully mature and the design and technologies to be tested before committing resources to the full program. DOD officials did not agree because, in their opinion, we overstated some risks and they were effectively mitigating other risks. The Global Hawk program is already experiencing problems that are associated with high concurrency and gaps in product knowledge. Production of the larger Global Hawk B aircraft began in July 2004 with immature technologies and an unstable design. The design had been expected to be very similar to the smaller Global Hawk A, whose performance had been proven in the ACTD, but as the larger aircraft design matured and production geared up, the differences were more extensive, complex, and costly than anticipated. Within a year, there were more than 2,000 authorized engineering drawing changes to the total baseline of 1,400 drawings, and more than half were considered major changes. Also, once manufacturing began, there were recurring quality and performance issues on the work of several key subcontractors. The subcontractor building the tail scrapped seven of the first eight main structural components because of design changes and manufacturing process deficiencies.
The wing manufacturer had to terminate a key subcontractor because of poor performance and quality. Other suppliers delivered parts late and with defects. These specific problems have mostly been resolved, but the potential for even greater problems exists when the major subsystems, still in development, are integrated into the new larger aircraft already being produced. Outcomes so far have not been good, as the program has experienced significant cost increases. Extensive design changes contributed to a $209 million overrun in the development contract and resulted in a more expensive production aircraft than forecast. Requirements growth, increased costs of airframe and sensors, and increased support requirements significantly increased procurement costs. In April 2005, the Air Force reported to Congress a Nunn-McCurdy breach in procurement unit costs—an 18 percent increase over the program’s cost baseline approved in 2002. In December 2005, we reported the Air Force had failed to report $401 million in procurement costs and that the procurement unit cost had actually increased 31 percent. Subsequently, in December 2005, the Air Force renotified Congress that, if these additional costs were included, the procurement unit costs had actually increased by over 25 percent and that program acquisition unit costs (including development and military construction costs in addition to procurement) had also breached the thresholds established in the law. Under the law, DOD must now certify the program to Congress. The Air Force is currently restructuring the Global Hawk program—the fourth restructuring since it began as a major acquisition. Program schedules and performance have also been negatively affected. For example, the start of operational assessment of the Global Hawk A slipped about 1 year, and the planned start of initial operational testing of the Global Hawk B design has slipped 2 years. 
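The Nunn-McCurdy comparison discussed above turns on procurement unit cost: total procurement cost divided by quantity, measured against the approved program baseline. The sketch below illustrates that arithmetic. The baseline and total-cost dollar figures are hypothetical placeholders (only the $401 million omission comes from the text), so the resulting percentages are illustrative, not the actual Global Hawk numbers.

```python
# Sketch of a Nunn-McCurdy-style procurement unit cost comparison.
# Dollar figures are hypothetical placeholders, not actual Global Hawk
# amounts; only the $401M omission is taken from the discussion above.

def unit_cost_growth(baseline_unit_cost, total_cost, quantity):
    """Percent growth of procurement unit cost over the approved baseline."""
    current_unit_cost = total_cost / quantity
    return (current_unit_cost - baseline_unit_cost) / baseline_unit_cost * 100

baseline_unit = 100.0     # $M per aircraft, approved baseline (hypothetical)
reported_total = 6_018.0  # $M total procurement, as first reported (hypothetical)
omitted = 401.0           # $M of procurement costs omitted from the first report
quantity = 51             # planned aircraft buy

as_reported = unit_cost_growth(baseline_unit, reported_total, quantity)
restated = unit_cost_growth(baseline_unit, reported_total + omitted, quantity)
print(f"as reported: {as_reported:.0f}%, restated: {restated:.0f}%")
```

With these placeholder figures, adding the omitted costs raises the computed unit cost growth by several percentage points, which is why the completeness of reported costs, as much as the cost growth itself, determines whether a statutory breach threshold is crossed.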
The Director, Operational Test and Evaluation, reports that operational assessment of the Global Hawk A identified significant deficiencies in processing and providing data to the warfighter, communication failures, and problems with engine performance at high altitudes. In addition, planned delivery dates have continued to slip, the procurement of two aircraft was moved to later years, and some development work content was deferred or deleted; this means that the warfighter will not get anticipated capability at the time originally promised. For example, defensive subsystems required by Air Combat Command have been pushed off the schedule, and it is not known whether they will be added in the future. The frequent deployment of Global Hawk demonstrator aircraft to support combat operations has further affected costs and schedule, according to officials. Support to the warfighter is the program’s top priority. Deployments have resulted in increased costs and time delays for acquisition but, at the same time, provide a valuable, realistic test for the system and its employment concepts to improve its performance and responsiveness to the warfighter. Fleet flying hours now exceed 8,000 hours, more than half in combat operations. The following table shows changes in cost and quantities since the program started in March 2001. The restructured program tripled development costs, reflecting the addition of the new Global Hawk B aircraft with advanced capabilities still in technology development. Total procurement costs increased moderately, resulting from higher costs for the new aircraft tempered by a reduction in the number of aircraft to be acquired for reasons of affordability and changed requirements. Total program acquisition and procurement unit costs have increased 73 percent and 35 percent, respectively, and aircraft quantities decreased by 19 percent.
Thus far, seven Global Hawk As have been delivered to the Air Force—14 percent of the combined fleet—and 34 percent of the planned budget to completion has been invested. The Predator program began in 1994 as an ACTD to demonstrate and deliver what would become the MQ-1 (Predator A). It evolved from an earlier unmanned aircraft, the Gnat, allowing delivery of an initial demonstrator aircraft to DOD 6 months after contract award. The Predator ACTD concluded in 1996 and transitioned to the Air Force in 1997 when the Defense Acquisition Board approved the Predator A for production. A limited strike capability, to launch Hellfire missiles against ground targets, was later added. On the basis of the success of the Predator A, the contractor designed and built two prototypes of a larger aircraft capable of armed reconnaissance and surveillance. This new aircraft would evolve into the second generation MQ-9 (Predator B), a larger and higher-flying aircraft with more strike capability. In February 2004, the Predator B program was approved as a new system development and demonstration program. It is managed separately from Predator A and has its own schedule and management reviews. The Predator program overall has experienced fewer cost, schedule, and performance problems than the Global Hawk program has experienced. As of February 2006, the Predator A program has a stable design with little cost growth and the Air Force recently increased its planned buys. Although early in the acquisition cycle, cost increases in the Predator B program have been moderate and schedule changes few. The fiscal year 2005 report of the Director, Operational Test and Evaluation, cited favorable developmental testing results and recommended refining acquisition and fielding strategies to permit more focused and effective operational testing. To date, about 59 percent of the combined fleet (as presented in last year’s budget) has been delivered for about 56 percent of the current planned budget. 
Deliveries include 129 Predator As and two prototype and six production Predator Bs. The combined fleet has tallied 120,000 flight hours since 1995. Congress has been supportive of both Predators, typically adding to annual funding requests and quantities. Table 3 summarizes changes in the Predator B program estimates to completion since its start of system development. The Global Hawk and Predator began with top leadership support and successful demonstration efforts as ACTDs, but differences in their business practices have been the primary contributors to different cost, schedule, and performance outcomes so far in these programs. Both programs were under pressure to field capabilities quickly to support the warfighter. Original models of both systems have proven to be valuable assets in combat operations, and both transitioned from technology demonstrations into weapon system acquisition programs with sound strategies to complete development and acquire initial systems with enhanced capabilities. However, Global Hawk subsequently changed to a riskier acquisition strategy that plans to develop technologies concurrently with the system design, testing, and production phases of the program. Predator, while not immune to typical developmental problems, has pursued a more disciplined, structured approach intended to evolve new capability in separate programs. Its decisions have been more consistent with DOD’s acquisition policy preferences. Table 5 shows some of the differences between the current programs that have led to greater success in the Predator program so far. The current Global Hawk acquisition strategy is risky. It plans to develop a new, larger, and more capable aircraft by integrating as yet undemonstrated technologies into a new airframe, also undemonstrated, to provide a quantum leap in performance over its ACTD.
The Predator also added plans for a new, larger aircraft, but chose an incremental approach by managing the new investment in a separate program with separate decision points. The Global Hawk program began in 1994 as an ACTD, managed first by the Defense Advanced Research Projects Agency and, since 1998, by the Air Force. Seven demonstrator aircraft were built, logged several thousand flight hours, completed several demonstrations and other tests, and passed a military utility assessment. Demonstrators subsequently provided effective support to military operations in Iraq and Afghanistan. DOD judged the demonstration a success, but tests identified the need to make significant improvements in reliability, sensor performance, and communications before producing operationally effective and suitable systems. In March 2001, DOD approved the Global Hawk for a combined start of system development and limited initial production of six aircraft. The Air Force’s acquisition strategy approached best practices standards in terms of technology and design maturity. Officials planned to first acquire basic systems very similar to the successful demonstrators and then incrementally develop and acquire systems with more advanced sensors as critical technologies were demonstrated, using the same platform. Officials planned to acquire a total of 63 aircraft (Global Hawk As), and 14 ground stations for mission launch, recovery, and control. These aircraft would all be dedicated to single missions, some having imagery intelligence capabilities and others having signals intelligence capabilities. In 2002, the Air Force radically restructured the Global Hawk program to develop and acquire a larger and more advanced aircraft system, the Global Hawk B. 
The decision to acquire the larger aircraft was driven by the desire to have multimission capabilities (both signals intelligence and imagery intelligence sensors on the same aircraft) and to deliver new capabilities associated with advanced signals intelligence and radar technologies still in development. The new acquisition strategy abandoned an incremental approach and moved toward a strategy that called for concurrent development of technologies, systems integration, testing, and production. The Air Force planned to set and approve requirements and mature technologies over time, instead of at the start of development, and to do this at the same time as it designed and produced the new larger and heavier aircraft that had never been built or flight-tested. For affordability reasons and changing requirements, the restructured program also reduced quantities to 51 aircraft—7 Global Hawk As and 44 Global Hawk Bs—and 10 ground stations. Most of the Global Hawk Bs are planned to have multimission capabilities, including the advanced signals intelligence sensor, and some will have single-mission capabilities, including the advanced radar. Low-rate production was tripled from the 6 Global Hawk As approved at program start to 19 aircraft as restructured—7 Global Hawk As and 12 Global Hawk Bs—about 40 percent of the entire fleet. To speed up development and field these new capabilities sooner, DOD also approved the program to streamline and accelerate acquisition processes, bypassing some normal acquisition policy requirements and controls when considered appropriate. For example, the Global Hawk B business case did not include a comprehensive analysis of alternatives that is intended to rigorously compare expected capabilities of a new system with the current capabilities offered by existing weapon systems, such as the signals intelligence capabilities provided by U-2 aircraft.
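The scale of the up-front production commitment described above can be checked with quick arithmetic using the quantities given in this section (all figures are from the text, not assumptions):

```python
# Low-rate initial production as a share of the planned Global Hawk fleet,
# using the restructured quantities given in the text above.

fleet = {"Global Hawk A": 7, "Global Hawk B": 44}     # restructured total: 51
low_rate = {"Global Hawk A": 7, "Global Hawk B": 12}  # approved low-rate buy: 19

total_fleet = sum(fleet.values())
total_low_rate = sum(low_rate.values())
share = total_low_rate / total_fleet * 100

print(f"{total_low_rate} of {total_fleet} aircraft ({share:.0f}%) "
      "committed in low-rate production")
```

The computed share comes out just under 40 percent, consistent with the "about 40 percent of the entire fleet" figure above, and it makes concrete how much of the program was committed to production before key technologies had been demonstrated.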
Although the program could have reduced cost and schedule risks by managing a series of discrete increments to develop and acquire the different configurations, the Air Force chose to manage it as one program, with one baseline and one set of decision milestones. This revised strategy attempts to deliver capability to the warfighter that significantly surpasses that of the former Global Hawk A program. And the Air Force has committed up-front to produce the larger Global Hawk B aircraft in order to deliver new capabilities to the warfighter sooner, but the signals intelligence sensor and advanced radar technologies critical to meeting requirements are still immature and are not expected to be delivered and integrated until very late in the program. The Predator transitioned from its ACTD program in 1997, when the Defense Acquisition Board approved the Predator A for production, skipping the system development and design phases. The transition was not without difficulty because the focus during the demonstration effort had been to quickly ascertain operational capabilities, but without emphasis on design and development aspects that make a system more reliable and supportable—typically key aspects of a development program. The Air Force had to organize a team to respond to these issues until reliability and supportability issues could be resolved. Senior leadership, however, kept the strategy simple and focused on buying additional Predators very similar to the ACTD models. In February 2004, the Predator B program was approved as a new system development and demonstration program. The Predator B program was approved without two fundamental elements of a good business case: formal requirements documentation and an analysis of alternatives. According to the Air Force, these were not prepared because of the exigencies of the Global War on Terror. 
Officials initially planned to adopt an acquisition strategy similar to the Global Hawk’s, but senior leadership intervened and the acquisition strategy adopted was incremental and more consistent with DOD acquisition policy preferences. Under the revised strategy, the Air Force manages the Predator A and B acquisitions as separate programs. The new Predator B program balanced requirements and resources for a first increment and included its own sets of milestone decision points. Subsequent increments will evolve when future requirements and resources can be matched. Figure 1 contrasts notional Predator B and Global Hawk schedules for implementing their respective acquisition strategies with that espoused by best practices and DOD acquisition policy. Predator’s incremental approach, with less overlap of technology and system development, is more similar to best practices. Critical technologies were not sufficiently mature to support the start-up of the Global Hawk B program—particularly those associated with the signals intelligence and advanced radar, the very capabilities that drove the decision to acquire the larger aircraft. Likewise, the larger and heavier aircraft was neither prototyped nor demonstrated. The Predator B’s technologies were mostly mature at program start, and the aircraft has been built and flown. Mature technologies increase the likelihood of success in development, providing early assurance that the warfighter’s requirements can be met within cost and schedule goals. Although Global Hawk A technologies were demonstrated in the ACTD, the level of technology maturity significantly declined when Global Hawk B was approved for development. In particular, the new signals intelligence and multiplatform radar systems were still in technology development, not expected to be mature and tested in an operational environment until sometime between 2009 and 2011. 
The spillover of technology development into product development and overall immaturity of technology increase risks of poor cost, schedule, and performance outcomes. For example, as the advanced sensors mature and become ready to be integrated into the aircraft, there is risk that the aircraft, already being produced, will not have sufficient space, power, or cooling or that the sensor systems will weigh more than planned, reducing aircraft performance and ability to meet overall mission requirements—altitude, speed, and endurance. Predator A has been in production since 1997 and its technologies are mature. All Predator B technologies, except for one, are mature. This one meets the DOD standard for maturity—demonstration in a lab environment—but has not yet met best practice standards that require demonstrations in an operational environment. This technology is important for managing the weapons that Predator B will carry and launch—more than the Predator A carries. It relies on a data link that enables the operator to release the weapon from the ground. Program officials have stated that the current problems with this technology are related to its integration into the Predator B weapon system. In unmanned aircraft, unlike manned aircraft, there is no one in the cockpit to fire the weapon. Developing this capability required revisions to software, cryptologic controls, navigation sensors, and flight operations. The Air Force expects this capability to be demonstrated in an operational environment after it has been integrated into a Predator B in May 2006. 
Concurrency—the overlapping of development, test, and production schedules—is risky and can be costly and delay delivery of a usable capability to the warfighter if testing shows design changes are necessary to achieve expected system performance. Once a system is in production, the cost of design changes can be an order of magnitude greater than that of changes identified during the design phase. By requiring a larger air vehicle to carry new advanced technologies while speeding up the acquisition schedule, the Air Force accepted much higher risks than under the original plan, which followed a more evolutionary approach. The Air Force restructured the Global Hawk program, extending the development period, delaying testing, and accelerating aircraft production and deliveries, resulting in substantial concurrency. The development period was expanded by 5 years, and production deliveries were accelerated and compressed into fewer years, creating significant overlap from fiscal years 2004 to 2010. As a result, the Air Force plans to buy almost half of the new larger Global Hawk aircraft before a production model is flight-tested and operational evaluations are completed to show that the air vehicle design works as required. Substantially more than half of the aircraft will be purchased before the airborne signals intelligence and multiplatform radar, the two technologies that are required for the larger aircraft, complete development and are integrated for flight testing. The Predator B program’s revised strategy also overlapped development and production. For example, 21 Predator aircraft will be purchased before initial operational test and evaluation has been completed. Air Force officials acknowledge that the concurrency will require them to modify about 10 of these aircraft to bring them up to the full first-increment capability. Modifications will include the installation of the system to manage and launch weapons and the digital electronic engine controller. 
Top management attention set the stage for the early success of Global Hawk. The Under Secretary of Defense for Acquisition, Technology, and Logistics became personally involved in establishing the original plan for development. Leadership insisted on fielding an initial capability that could be developed within a fixed budget while providing for an evolutionary process to add enhancements to succeeding versions. The result was a very successful ACTD program that produced seven demonstrators, logged several thousand flight hours, passed its military usefulness assessment, and has since very effectively supported combat operations in Afghanistan and Iraq. Once the Global Hawk was approved as a major acquisition program, however, senior Air Force leaders diverted Global Hawk to a high-risk spiral development strategy that featured frequent changes to development plans and time frames. They also approved the larger Global Hawk B with immature critical technologies and a highly concurrent test and production program—much of this contrary to best practices and defense acquisition policy preferences. The Predator also had top management attention early in the program and has maintained its high visibility through a high-ranking group of Air Force executives known as Task Force Arnold. Established in 2002 as a senior oversight body for the Predator, Task Force Arnold has provided guidance and headquarters-level direction to Air Combat Command on the needs and capabilities for the system. The group has played a valuable role in helping the Predator program maintain a tight focus on program requirements and direction. Once the Predator A became operational, Air Combat Command was besieged by requests from combatant commanders for additional enhancements or capabilities. To alleviate the problem, the task force acted as the arbiter for operational requirements. New capabilities had to be vetted and prioritized through the task force before they were incorporated. 
This kept a balance between requirements and available resources and reduced the burden on Air Combat Command and the program office, enabling the program to better manage its requirements. The task force was instrumental in revising the Predator B plans and acquisition strategy. On the basis of an assessment from Task Force Arnold, the Secretary of the Air Force directed that the program office field an interim combat capability to balance an urgent operational need with new acquisition. The Secretary also directed that the program office revise its acquisition strategy to incrementally develop the Predator. Accordingly, the Air Force restructured the program, dropping the spiral development plan for an incremental approach. This strategy extended the production schedule by 5 years and delayed initial operating capability by 3 years—lessening the degree of concurrency and providing more time to mature technology and design. Whereas the original strategy called for procuring 8 operational aircraft by August 2005, the revised, more conservative strategy plans to acquire 6 aircraft delivered 1 year later. Global Hawk funding requirements are optimistic, have changed, and continue to increase. In 2002 Global Hawk tripled estimated development costs and compressed the procurement of aircraft into fewer years. Program funding, which previously had been allocated relatively evenly across 20 years, was compressed into roughly half the time, tripling Global Hawk’s budgetary requirements in certain years. This adds to funding risk should large annual amounts be unaffordable as they compete with other defense priorities. The Air Force is currently preparing a new acquisition baseline estimate, its fourth baseline since the program started in March 2001. In contrast, Predator funding requirements are less optimistic and are spread over a longer production period. 
The stable Predator A program has been in production since 1997 and has focused on replacing aircraft lost through attrition. However, the Air Force increased its buy quantities in the fiscal year 2007 budget to reflect increased future force requirements. The revised acquisition strategy for the Predator B extended the production period by 5 years and decreased annual buy quantities, resulting in more even and achievable levels of annual funding. Annual funding for both Predators has been increased by Congress in recent years, enabling the Air Force to procure additional Predator systems or make enhancements to the fielded systems. J-UCAS represents the next generation of unmanned aircraft. In addition to providing intelligence and surveillance capabilities, J-UCAS is being designed as a heavily weaponized and persistent strike aircraft. The joint Air Force and Navy technology demonstration combined the two services’ separate efforts to develop early models of advanced unmanned attack systems. Since the pre-acquisition program was initiated in 2003, it has experienced funding cuts and leadership changes. The recent Quadrennial Defense Review calls for again restructuring the program into a Navy effort to demonstrate an unmanned carrier-based system. Regardless of future organization, DOD still has the opportunity to learn from the lessons of the Global Hawk and Predator programs to develop the knowledge needed to prepare solid and feasible business cases to support advanced unmanned aircraft acquisitions. Before J-UCAS was established as a joint program, the Air Force and Navy had separate unmanned combat aircraft projects under way, each in partnership with the Defense Advanced Research Projects Agency (DARPA). In 2003, we reported that the Air Force’s original business plan provided time to mature technologies and was a relatively low-risk approach, but that plans and strategy had changed to a much accelerated and higher-risk approach. 
The new plan proposed to increase requirements and accelerate the schedule for development and production, substantially increasing concurrency of development, test, and production activities. The gaps in product knowledge and the unfinished technology development added significant risks of poor cost, schedule, and performance outcomes. Therefore, we supported DOD’s decision, under discussion at the time of our review, which advocated a new joint service approach and which reduced risks by significantly slowing down the Air Force’s plans. DARPA was then designated to lead a joint demonstration program with Air Force and Navy participation. The joint office began operations in October 2003 and devised a $5 billion pre-acquisition program that would develop and demonstrate larger and more advanced versions of the original Air Force and Navy prototypes (three from each contractor for a total of six aircraft). The office planned to conduct an operational assessment starting in 2007 and use the results to inform Air Force and Navy decisions for possible system acquisition starts in 2010. The demonstrators were expected to meet both the Air Force and Navy requirements and to share a common operating system, sensors, and weapons. Compared with the revised Air Force plans, the joint approach provided a more knowledge-based strategy with decreased risks of poor outcomes. The joint strategy delayed the start of system development, providing more time to mature the technologies, incorporate new requirements, and conduct demonstrations with prototype aircraft. In December 2004, the Office of the Secretary of Defense (OSD) reduced programmed funding by $1.1 billion and directed that funding and leadership be transitioned to the Air Force, with Navy participation, and that the joint program be restructured. The funding and leadership perturbations added about 19 months to the schedule for completing technology demonstration and deciding whether to start new system developments. 
The plan then was to develop and demonstrate five aircraft to inform system development decisions in fiscal year 2012. Now it appears the J-UCAS program will change one more time as the 2006 Quadrennial Defense Review directed its restructuring into a Navy program to develop an unmanned longer-range carrier-based aircraft capable of being air-refueled to provide greater standoff capability, to expand payload and launch options, and to increase naval reach and persistence. The Quadrennial Defense Review also directed speeding up efforts to develop a new land-based, penetrating long-range capability to be fielded by 2018. The Air Force is expected to use the accomplishments and technologies from the restructured J-UCAS program to inform the upcoming analysis of alternatives for the next-generation long-range strike program. The Air Force has a goal that approximately 45 percent of its future long-range strike force will be unmanned. Although the J-UCAS and follow-on efforts appear somewhat unstable as they go through these changes, we see benefits in making changes at this stage. Additional requirements and changes in user needs can be resolved prior to full program initiation; if made after an acquisition begins systems integration, such perturbations would be much more costly. The Navy’s restructured J-UCAS program, the Air Force’s new long-range strike effort, and other future programs have opportunities to learn lessons from the Global Hawk and Predator programs. As originally envisioned, the J-UCAS demonstration effort provided for an extended period of time to define warfighter requirements, mature and demonstrate technologies, inform the design with systems engineering, and conduct a thorough operational assessment to prove concepts and military utility. These kinds of actions would establish a foundation for a comprehensive business case and effective acquisition strategy. 
Key lessons that can be applied to J-UCAS and its offspring include
- maintaining disciplined leadership support and direction similar to that experienced early in Global Hawk from the Under Secretary of Defense for Acquisition, Technology, and Logistics and with the Predator’s Task Force Arnold;
- establishing a clear business case that constrains individual program requirements to match available resources based on proven technologies and engineering knowledge before committing to system development and demonstration;
- establishing an incremental acquisition strategy that separates technology development from product development and minimizes concurrency between testing and production;
- establishing and enforcing controls that require knowledge and demonstrations to ensure that appropriate knowledge is captured and used at critical decision junctures before moving programs forward and investing more money; and
- managing according to realistic funding requirements that fully resource product development and production based on a cost estimate that has been informed by proven technologies and a preliminary design.

Additionally, lessons of the Global Hawk and Predator transitions from ACTDs into production and operation are important. The advanced concept technology demonstration can be a valuable tool to prove concepts and military utility before committing time and funds to a major system acquisition. However, designing in product reliability and producibility and making informed trade-offs among alternative support approaches are key aspects of development. If these operational aspects of system development are not addressed early, before production, they can have major negative impacts on life-cycle costs. Finally, as the J-UCAS evolves one more time—and efforts return to the individual services—some key challenges will exist to maintain the advantages that were offered by a joint effort. 
The services need to be aware of those advantages and not arbitrarily reject them for parochial reasons. For example, exploiting past plans for common operating systems, components, and payloads is important to affordability. Common systems offer potential for cost savings as well as improved interoperability. In particular, the common operating system pursued by DARPA is a cutting edge tool to integrate and provide for interoperability of air vehicles, allowing groups of unmanned aircraft to fly in a coordinated manner and function autonomously (without human input). Global Hawk’s high-risk acquisition strategy resulted in increased costs and delays. The restructured Global Hawk program is very different from the original program that was approved in 2001 for a combined start of development and limited production. The restructured program replaced the original strategy to slowly and incrementally develop and acquire enhanced versions of the proven demonstrator, with a highly concurrent and accelerated strategy to develop and acquire a substantially new aircraft with much advanced capabilities still in technology development. Despite these major changes, officials essentially overlaid the new plans on the old and did not prepare a comprehensive business case to support the larger aircraft and justify specific quantities of the advanced signals intelligence and advanced radar capabilities. Predator B’s strategy is less risky, and as a result, the program has had moderate cost growth and has delivered assets in a timely manner. There are trends that run consistently through the Global Hawk and Predator programs, similar to trends in other major defense acquisition programs that we have reviewed. That is, when DOD provides strong leadership at an appropriate organizational level, it enables innovative, evolutionary, and disciplined processes to work. 
Once leadership is removed or diminished, programs have tended to lose control of requirements and add technical and funding risks. We have also found that, after successful demonstrations intended to quickly field systems with existing technologies, problems were encountered when the programs transitioned into the system development phase of the acquisition process. The services pushed programs into production without maturing processes and also began to add new requirements that stretched beyond technology and design resources. Inadequate technology, design, and production knowledge increased risk and led to cost, schedule, and performance problems. J-UCAS has had a bumpy road with several changes in leadership and strategic direction. However, J-UCAS and its offspring, as directed by the Quadrennial Defense Review, will be at a good juncture to establish a sound foundation for developing the business case and an effective acquisition strategy for follow-on investments by better defining warfighter needs and matching them with available resources. Refining requirements based on proven technologies and a feasible design based on systems engineering are best accomplished in the concept and technology development phase that precedes the start of a system acquisition program. During this early phase, the environment is conducive to changes in requirements that can be accomplished more cost-effectively than after systems integration begins and large organizations of engineers, suppliers, and manufacturers are formed to prepare for the start of system production. We are making the following recommendations to reduce program risk and increase the likelihood of more successful program outcomes by delivering capabilities to the warfighter when needed and within available resources. 
Specifically:
- The Secretary of Defense should direct the Global Hawk program office to (1) limit production of the Global Hawk B aircraft to the number needed for flight testing until the developer has demonstrated that the signals intelligence and radar imagery subsystems can be integrated and perform as expected in the aircraft and (2) update business case elements to reflect the restructured program, to include an analysis of alternatives, a justification for investments in the specific quantities needed for each type of Global Hawk B being procured (signals intelligence and advanced radar imagery), and a revised cost estimate.
- The Secretary of Defense should direct the Navy and Air Force organizations responsible for the development efforts stemming from the former J-UCAS program to (1) not move into a weapon system acquisition program before determining requirements and balancing them to match proven technologies, a feasible design based on systems engineering by the developer, and available financial resources; (2) develop an evolutionary and knowledge-based acquisition strategy that implements the intent of DOD acquisition policy; and (3) establish strong leadership empowered to carry out the strategy and to work in conjunction with the other services to ensure that design and development continue to incorporate the commonality initiated under the DARPA-managed joint program.

DOD provided us with written comments on a draft of this report. The comments appear in appendix II. DOD concurred with our three recommendations on the J-UCAS but did not concur with our two recommendations on the Global Hawk. Separately, DOD provided technical comments, which we incorporated where appropriate. Regarding our recommendation to limit Global Hawk procurement, DOD stated that the program is managing risk and would test the signals intelligence sensor and advanced radar on other systems and transition them to Global Hawk when mature. 
DOD stated that our recommendation would stop the production line and incur significant cost and schedule delays. We continue to believe that limiting further Global Hawk B procurement to units needed for testing until the aircraft and its advanced technologies are integrated and operationally evaluated will lead to better program outcomes. The Global Hawk program is experiencing significant cost, schedule, and performance problems, and reducing procurement should lessen future program risks and allow more time to mature and test the new aircraft design and technologies before committing funds for most of the fleet. No Global Hawk B aircraft has completed production yet and first flight is not expected until November 2006. Initial operational test and evaluation of the basic aircraft design with only imagery intelligence capabilities has slipped into fiscal year 2009. According to the Air Force's current budget plans, more than one-half of the total Global Hawk B fleet will have been purchased before starting initial operational test and evaluation. Schedules for follow-on operational tests of the aircraft integrated with the advanced signals intelligence and radar technologies— the capabilities that drove the decision to acquire the larger aircraft—have also slipped. While we support Air Force efforts to first test these new capabilities on surrogate systems, our concern is again that, by the time the Air Force tests fully integrated Global Hawk systems in an operational environment, most of the aircraft will already be built or on order. If problems are revealed during testing of the aircraft and its technologies, they could require costly redesign and remanufacture of items already produced and further delay getting these capabilities to combatant commanders. There are several other compelling reasons to limit procurement plans: Projected delivery dates for the Global Hawk B continue to slip. 
Estimated delivery schedules in the fiscal year 2007 budget show that deliveries have slipped an average of almost 10 months since Global Hawk B production started in July 2004 and by an average exceeding 6 months in the last year alone. If any further slippage occurs, production may be a year or more behind the schedule on which the Air Force’s strategy and financial plan were built. With these delays, the Air Force should be able to reduce near-term buys and rebalance subsequent procurements without materially affecting the flow of production. Procurement through fiscal year 2006 will complete the program’s approved low-rate initial production quantity of 19 aircraft. By law, a major weapon system cannot proceed beyond the low-rate quantity until initial operational test and evaluation has been satisfactorily completed as reported by the Director, Operational Test and Evaluation. Again, initial operational test and evaluation has been delayed until fiscal year 2009. In his annual report, the Director stated that low-rate production quantities should not be increased on the Global Hawk until after an adequate initial operational test and evaluation of the Global Hawk B aircraft and ground segments. Operational assessment of the smaller Global Hawk A is not yet complete. Testing and flight operations have experienced engine shutdowns, communication failures, and imagery data processing deficiencies. These problems directly affect the Global Hawk B because it uses the same engine and similar communication and data processing systems. Regarding our recommendation to update the Global Hawk’s business case, DOD stated that the department’s current Nunn-McCurdy certification evaluation and program rebaselining is thorough and provides department leaders with the information they need to make informed decisions. Because the Nunn-McCurdy certification and rebaselining effort is ongoing, we cannot comment on whether these documents will make up a comprehensive business case. 
However, given the magnitude of the program’s continuing changes and challenges discussed in this report, we are concerned that these efforts will fall short. A business case should be rigorously updated to reflect significant restructurings, to justify specific investments in new and emerging technologies, and to match revised requirements to available resources. Our apprehension is not unfounded. In November 2004, we similarly recommended that DOD delay further procurement of the Global Hawk B until a new business case—one that reduced risk and applied a knowledge-based approach—was completed. DOD chose not to concur with this recommendation, arguing that the department was effectively mitigating risk. Despite DOD’s assurances, events that triggered the Nunn-McCurdy review in April 2005 not only indicate that the risk mitigation measures were ineffective but underscore the wisdom of making a new business case. In addition to cost increases, schedule delays, and performance problems that have altered many of the program’s conditions and plans as they were originally envisioned, officials said they are rethinking Global Hawk test plans and low-rate quantities, which could affect the elements on which a business case is made. Our past work on major weapon systems acquisitions has clearly shown the value of preparing and maintaining a comprehensive business case to justify and guide investments, and the need to revisit the business case if circumstances substantially change, as they have on Global Hawk. To determine the extent to which Global Hawk and Predator acquisition strategies and business cases were effective in meeting warfighter requirements, we reviewed budget and planning documents. We also utilized GAO’s Methodology for Assessing Risks on Major Weapon System Acquisition Programs to assess their acquisition strategies and business cases with respect to best practices criteria. 
The methodology is derived from the best practices and experiences of leading commercial firms and successful defense acquisition programs. We interviewed DOD and contractor officials and obtained programmatic data and reports for the Global Hawk and Predator. We incorporated our recent Global Hawk and Predator Quick Look efforts and past GAO reports and testimony. We reviewed management plans, cost reports, progress briefings, and risk data to identify execution efforts and results to date. The primary comparisons in this report focus on the combined Global Hawk program and the Predator B program. Information on the Predator A program mainly provides a historical perspective and lessons learned from that older and more mature system. We received DOD comments questioning whether the Global Hawk and Predator B programs can reasonably be compared given the differences in time frames; Global Hawk’s system start was in March 2001, 3 years earlier than Predator B’s start in February 2004. While we agree that there may sometimes be a period of time before problems in a newer program become evident, we believe the two programs can be compared to provide valuable lessons for future acquisitions. First, concerns about acquisition strategy, concurrency, and funding profiles are not particularly dependent on time frames. Second, the DOD policy preference for incremental acquisitions used as a criterion in comparing programs was in effect when both programs started. Third, the Global Hawk B, which comprises most of the Global Hawk program, did not begin production until after the start of Predator B. In a comparable time frame since then, the Predator B program has provided some interim combat capability and has production models flying and undergoing tests, while the first Global Hawk B is expected to make its first flight later this year. 
To identify what lessons can be learned and applied on the J-UCAS program, or its offspring, we interviewed DOD and contractor officials and obtained programmatic data and reports on J-UCAS. We used our comparisons of the Global Hawk and Predator, as well as past audit work on unmanned and manned systems, to identify factors conducive to successful programs and development of effective business cases and implementation strategies. We monitored the changes in J-UCAS leadership, priorities, and support within the department and Congress, including the most recent decisions by the Quadrennial Defense Review. We also utilized information obtained in past Quick Look and budget review efforts concerning J-UCAS. In performing our work, we obtained information and interviewed officials from the Global Hawk, Predator, and Joint Unmanned Combat Air Systems Program Offices, all at Wright Patterson Air Force Base, Ohio; Air Combat Command, Langley Air Force Base, Virginia; Northrop Grumman Integrated Systems, Rancho Bernardo and Palmdale, California; General Atomics Aeronautical Systems, San Diego and Palmdale, California; and DOD Task Force for Unmanned Systems, Office of the Secretary of Defense, Washington, D.C. We performed our review from August 2005 to February 2006 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense, the Secretary of the Air Force, and the Secretary of the Navy, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please call me at (202) 512-4841. Contact points for our offices of Congressional Relations and Public Affairs are listed on the last page of this report. 
The following staff made key contributions to this report: Michael Hazard, Assistant Director, Bruce Fairbairn, Rae Ann Sapp, Charlie Shivers, Adam Vodraska, and Karen Sloan. The Air Force’s Global Hawk system is a high-altitude, long-endurance unmanned aircraft with integrated sensors and ground stations providing intelligence, surveillance, and reconnaissance capabilities. After a successful technology demonstration, the system entered development and limited production in March 2001. Considered a transformational system, the program was restructured twice in 2002 to acquire 7 air vehicles similar to the original demonstrators (the Global Hawk A) and 44 of a new, larger, and more capable model (the Global Hawk B). Seven Global Hawk As have been delivered to the Air Force. Global Hawk Bs are in production with first flight and first delivery expected in fiscal year 2007. Demonstrators have seen combat operations in Iraq and Afghanistan and the first Global Hawk As recently arrived in-theater. The Predator began as a technology demonstration in 1994 and transitioned to an Air Force program in 1997. Predators have supported combat operations since 1995. Originally designed to provide tactical reconnaissance, the Predator A was modified in 2001 to employ Hellfire missiles, giving it a limited ground strike capability. In response to the Global War on Terror initiatives, the Air Force proposed a larger model carrying more weapons and flying higher and faster. The Predator B was approved as a new system development and demonstration program in February 2004. Funding plans at the time of our review were to procure a total of 232 Predators—181 A models and 63 B models—with additional future buys expected. Through calendar year 2005, 137 aircraft have been delivered, 8 Predator Bs and the rest Predator As. 
The Joint Unmanned Combat Air Systems (J-UCAS) program is a joint Air Force and Navy effort begun in October 2003 to develop and demonstrate the technical feasibility and operational value of a networked system of high-performance, weaponized unmanned aircraft. Planned missions include suppression of enemy air defenses, precision strike, persistent surveillance, and potentially others such as electronic attack as resources and requirements dictate. The program consolidated two formerly separate service efforts and was to develop and demonstrate larger, more capable, and interoperable aircraft to inform decisions on starting acquisition program(s) in fiscal year 2012. The Quadrennial Defense Review calls for restructuring J-UCAS into a Navy effort to develop an unmanned carrier-based aircraft, while the Air Force will consider J-UCAS technologies and accomplishments in its efforts to develop a new, land-based long-range strike capability. Figure 2 compares the salient performance characteristics of these unmanned aircraft systems. Unmanned Aircraft Systems: Global Hawk Cost Increase Understated in Nunn-McCurdy Report. GAO-06-222R. Washington, D.C.: December 15, 2005. Unmanned Aircraft Systems: DOD Needs to More Effectively Promote Interoperability and Improve Performance Assessments. GAO-06-49. Washington, D.C.: December 13, 2005. Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 30, 2005. DOD Acquisition Outcomes: A Case for Change. GAO-06-257T. Washington, D.C.: November 15, 2005. Defense Acquisitions: Assessments of Major Weapon Programs. GAO-05-301. Washington, D.C.: March 31, 2005. Unmanned Aerial Vehicles: Improved Strategic and Acquisition Planning Can Help Address Emerging Challenges. GAO-05-395T. Washington, D.C.: March 9, 2005. Unmanned Aerial Vehicles: Changes in Global Hawk’s Acquisition Strategy Are Needed to Reduce Program Risks. GAO-05-6. Washington, D.C.: November 5, 2004. 
Defense Acquisitions: Assessments of Major Weapon Programs. GAO-04-248. Washington, D.C.: March 31, 2004. Unmanned Aerial Vehicles: Major Management Issues Facing DOD’s Development and Fielding Efforts. GAO-04-530T. Washington, D.C.: March 17, 2004. Force Structure: Improved Strategic Planning Can Enhance DOD’s Unmanned Aerial Vehicles Efforts. GAO-04-342. Washington, D.C.: March 17, 2004. Defense Acquisitions: DOD’s Revised Policy Emphasizes Best Practices, but More Controls Are Needed. GAO-04-53. Washington, D.C.: November 10, 2003. Defense Acquisitions: Matching Resources with Requirements Is Key to the Unmanned Combat Air Vehicle Program’s Success. GAO-03-598. Washington, D.C.: June 30, 2003. Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002. Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapons System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001. Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000. Unmanned Aerial Vehicles: Progress of the Global Hawk Advanced Concept Technology Demonstration. GAO/NSIAD-00-78. Washington, D.C.: April 25, 2000. Unmanned Aerial Vehicles: DOD’s Demonstration Approach Has Improved Project Outcomes. GAO/NSIAD-99-33. Washington, D.C.: August 16, 1999.
Through 2011, the Department of Defense (DOD) plans to spend $20 billion to significantly increase its inventory of unmanned aircraft systems, which are providing new intelligence, surveillance, reconnaissance, and strike capabilities to U.S. combat forces--including those in Iraq and Afghanistan. Despite their success on the battlefield, DOD's unmanned aircraft programs have experienced cost and schedule overruns and performance shortfalls. Given the sizable planned investment in these systems, GAO was asked to review DOD's three largest unmanned aircraft programs in terms of cost. Specifically, GAO assessed the Global Hawk and Predator programs' acquisition strategies and identified lessons from these two programs that can be applied to the Joint Unmanned Combat Air Systems (J-UCAS) program, the next generation of unmanned aircraft. While the Global Hawk and Predator both began as successful demonstration programs, they adopted different acquisition strategies that have led to different outcomes. With substantial overlap in development, testing, and production, the Global Hawk program has experienced serious cost, schedule, and performance problems. As a result, since the approved start of system development, planned quantities of the Global Hawk have decreased 19 percent, and acquisition unit costs have increased 75 percent. In contrast, the Predator program adopted a more structured acquisition strategy that uses an incremental, or evolutionary, approach to development--an approach more consistent with DOD's revised acquisition policy preferences and commercial best practices. While the Predator program has experienced some problems, the program's cost growth and schedule delays have been relatively minor, and testing of prototypes in operational environments has already begun. 
Since its inception as a joint program in 2003, the J-UCAS program has experienced funding cuts and leadership changes, and the recent Quadrennial Defense Review has directed another restructuring into a Navy program to develop a carrier-based unmanned combat air system. Regardless of these setbacks and the program's future organization, DOD still has the opportunity to learn from the lessons of the Global Hawk and Predator programs. Until DOD develops the knowledge needed to prepare solid and feasible business cases to support the acquisition of J-UCAS and other advanced unmanned aircraft systems, it will continue to risk cost and schedule overruns and delays in fielding capabilities to the warfighter.
The final regulations establish a new human capital system for DHS that is intended to assure its ability to attract, retain, and reward a workforce that is able to meet its critical mission. Further, the human capital system is to provide for greater flexibility and accountability in the way employees are to be paid, developed, evaluated, afforded due process, and represented by labor organizations while reflecting the principles of merit and fairness embodied in the statutory merit systems principles. Predictably, as with any change management initiative, the DHS regulations have raised some concerns among employee groups, unions, and other stakeholders because they do not have all the details of how the system will be implemented and how it will affect them. We have reported that individuals inevitably worry during any change management initiative because of uncertainty over new policies and procedures. A key practice to address this worry is to involve employees and their representatives to obtain their ideas and gain their ownership for the initiative. Thus, a significant improvement from the proposed regulations is that now employee representatives are to be provided with an opportunity to remain involved. Specifically, they can discuss their views with DHS officials and/or submit written comments as implementing directives are developed, as outlined under the “continuing collaboration” provisions. This collaboration is consistent with DHS’s statutory authority to establish a new human capital system, which requires such continuing collaboration. Under the regulations, nothing in the continuing collaboration process is to affect the right of the Secretary to determine the content of implementing directives and to make them effective at any time. In addition, the final regulations state that DHS is to establish procedures for evaluating the implementation of its human capital system. 
High-performing organizations continually review and revise their human capital management systems based on data-driven lessons learned and changing needs in the environment. Collecting and analyzing data is the fundamental building block for measuring the effectiveness of these systems in support of the mission and goals of the agency. We continue to believe that many of the basic principles underlying the DHS regulations are generally consistent with proven approaches to strategic human capital management. Today, I will provide our preliminary observations on the following elements of DHS’s human capital system as outlined in the final regulations—pay and performance management, adverse actions and appeals, and labor-management relations. Last year, we testified that the DHS proposal reflects a growing understanding that the federal government needs to fundamentally rethink its current approach to pay and better link pay to individual and organizational performance. To this end, the DHS proposal takes another valuable step towards modern performance management. Among the key provisions is a performance-based and market-oriented pay system. We have observed that a competitive compensation system can help organizations attract and retain a quality workforce. To begin to develop such a system, organizations assess the skills and knowledge they need; compare compensation against other public, private, or nonprofit entities competing for the same talent in a given locality; and classify positions along levels of responsibility. While one size does not fit all, organizations generally structure their competitive compensation systems to separate base salary—which all employees receive—from other special incentives, such as merit increases, performance awards, or bonuses, which are provided based on performance and contributions to organizational results. 
According to the final regulations, DHS is to establish occupational clusters and pay bands that replace the current General Schedule (GS) system now in place for much of the civil service. DHS may, after coordination with OPM, establish occupational clusters based on factors such as mission or function, nature of work, qualifications or competencies, career or pay progression patterns, relevant labor-market features, and other characteristics of those occupations or positions. DHS is to document in implementing directives the criteria and rationale for grouping occupations or positions into clusters as well as the definitions for each band’s range of difficulty and responsibility, qualifications, competencies, or other characteristics of the work. As we testified last year, pay banding and movement to broader occupational clusters can both facilitate DHS’s movement to a pay for performance system and help DHS to better define occupations, which can improve the hiring process. We have reported that the current GS system as defined in the Classification Act of 1949 is a key barrier to comprehensive human capital reform and the creation of broader occupational job clusters and pay bands would aid other agencies as they seek to modernize their personnel systems. Today’s jobs in knowledge-based organizations require a much broader array of tasks that may cross over the narrow and rigid boundaries of job classifications of the GS system. Under the final regulations, DHS is to convert employees from the GS system to the new system without a reduction in their current pay. According to DHS, when employees are converted from the GS system to a pay band, their base pay is to be adjusted to include a percentage of their next within-grade increase, based on the time spent in their current step and the waiting period for the next step. DHS stated that most employees would receive a slight increase in salary upon conversion to a pay band. 
This approach is consistent with how several of OPM’s personnel demonstration projects converted employees from the GS system. The final DHS regulations include other elements of a modern compensation system. For example, the regulations provide that DHS may, after coordination with OPM, set and adjust the pay ranges for each pay band taking into account mission requirements, labor market conditions, availability of funds, pay adjustments received by other federal employees, and any other relevant factors. In addition, DHS may, after coordination with OPM, establish locality rate supplements for different occupational clusters or for different bands within the same cluster in the same locality pay area. According to DHS, these locality rates would be based on the cost of labor rather than cost of living factors. The regulations state that DHS would use recruitment or retention bonuses if it experiences such problems due to living costs in a particular geographic area. Especially when developing a new performance management system, high-performing organizations have found that actively involving employees and key stakeholders, such as unions or other employee associations, helps gain ownership of the system and improves employees’ confidence and belief in the fairness of the system. DHS recognized that the system must be designed and implemented in a transparent and credible manner that involves employees and employee representatives. A new and positive addition to the final regulations is a Homeland Security Compensation Committee that is to provide oversight and transparency to the compensation process. The committee—consisting of 14 members, including four officials of labor organizations—is to develop recommendations and options for the Secretary’s consideration on compensation and performance management matters, including the annual allocation of funds between market and performance pay adjustments. 
While the DHS regulations contain many elements of a performance-based and market-oriented pay system, there are several issues that we identified last year that DHS will need to continue to address as it moves forward with the implementation of the system. These issues include linking organizational goals to individual performance, using competencies to provide a fuller assessment of performance, making meaningful distinctions in employee performance, and continuing to incorporate adequate safeguards to ensure fairness and guard against abuse. Consistent with leading practice, the DHS performance management system is to align individual performance expectations with the mission, strategic goals, organizational program and policy objectives, annual performance plans, and other measures of performance. DHS’s performance management system can be a vital tool for aligning the organization with desired results and creating a “line of sight” showing how team, unit, and individual performance can contribute to overall organizational results. However, as we testified last year, agencies struggle to create this line of sight. DHS appropriately recognizes that given its vast diversity of work, managers and employees need flexibility in crafting specific performance expectations for their employees. These expectations may take the form of competencies an employee is expected to demonstrate on the job, among other things. However, as DHS develops its implementing directives, the experiences of leading organizations suggest that DHS should reconsider its position to merely allow, rather than require, the use of core competencies that employees must demonstrate as a central feature of its performance management system. 
Based on our review of others’ efforts and our own experience at GAO, core competencies can help reinforce employee behaviors and actions that support the department’s mission, goals, and values and can provide a consistent message to employees about how they are expected to achieve results. For example, an OPM personnel demonstration project—the Civilian Acquisition Workforce Personnel Demonstration Project—covers various organizational units within the Department of Defense and applies core competencies for all employees, such as teamwork/cooperation, customer relations, leadership/supervision, and communication. Similarly, as we testified last year, DHS could use competencies—such as achieving results, change management, cultural sensitivity, teamwork and collaboration, and information sharing—to reinforce employee behaviors and actions that support its mission, goals, and values and to set expectations for individuals’ roles in DHS’s transformation. By including such competencies throughout its performance management system, DHS could create a shared responsibility for organizational success and help assure accountability for change. High-performing organizations seek to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. These organizations make meaningful distinctions between acceptable and outstanding performance of individuals and appropriately reward those who perform at the highest level. The final regulations state that DHS supervisors and managers are to be held accountable for making meaningful distinctions among employees based on performance, fostering and rewarding excellent performance, and addressing poor performance. 
While DHS states that as a general matter, pass/fail ratings are incompatible with pay for performance, it is to permit use of pass/fail ratings for employees in the “Entry/Developmental” band or in other pay bands under extraordinary circumstances as determined by the Secretary. DHS is to require the use of at least three summary rating levels for other employee groups. We urge DHS to consider using at least four summary rating levels to allow for greater performance rating and pay differentiation. This approach is in the spirit of the new governmentwide performance-based pay system for the Senior Executive Service (SES), which requires at least four levels to provide a clear and direct link between SES performance and pay as well as to make meaningful distinctions based on relative performance. Cascading this approach to other levels of employees can help DHS recognize and reward employee contributions and achieve the highest levels of individual performance. As DHS develops its implementing directives, it also needs to continue to build safeguards into its performance management system. A concern that employees often express about any pay for performance system is supervisors’ ability to assess performance fairly. Using safeguards, such as having an independent body to conduct reasonableness reviews of performance management decisions, can help to allay these concerns and build a fair, credible, and transparent system. It should be noted that the final regulations no longer provide for a Performance Review Board (PRB) to review ratings in order to promote consistency, provide general oversight of the performance management system, and ensure it is administered in a fair, credible, and transparent manner. According to the final regulations, participating labor organizations expressed concern that the PRBs could delay pay decisions and give the appearance of unwarranted interference in the performance rating process. 
However, in the final regulations, DHS states that it continues to believe that an oversight mechanism is important to the credibility of the department’s pay for performance system and that the Compensation Committee, in place of PRBs, is to conduct an annual review of performance payout summary data. While much remains to be determined about how the Compensation Committee is to operate, we believe that the effective implementation of such a committee is important to assuring that predecisional internal safeguards exist to help achieve consistency and equity, and assure non-discrimination and non-politicization of the performance management process. We have also reported that agencies need to assure reasonable transparency and provide appropriate accountability mechanisms in connection with the results of the performance management process. For DHS, this can include publishing internally the overall results of performance management and individual pay decisions while protecting individual confidentiality and reporting periodically on internal assessments and employee survey results relating to the performance management system. Publishing this information can provide employees with the information they need to better understand the performance management system and to generally compare their individual performance with their peers. We found that several of OPM’s personnel demonstration projects publish information for employees on internal Web sites that include the overall results of performance appraisal and pay decisions, such as the average performance rating, the average pay increase, and the average award for the organization and for each individual unit. DHS’s final regulations are intended to simplify and streamline the employee adverse action process to provide greater flexibility for the department and to minimize delays, while also ensuring due process protections. 
It is too early to tell what impact, if any, these regulations would have on DHS’s operations and employees or other entities, such as the Merit Systems Protection Board (MSPB). Close monitoring of any unintended consequences, such as on MSPB and its ability to manage cases from DHS and other federal agencies, is warranted. In terms of adverse actions, the regulations modify the current federal system in that the DHS Secretary will have the authority to identify specific offenses for which removal is mandatory. In our previous testimony on the proposed regulations, we expressed some caution about this new authority and pointed out that the process for determining and communicating which types of offenses require mandatory removal should be explicit and transparent. We noted that such a process should include an employee notice and comment period before implementation and collaboration with relevant congressional stakeholders and employee representatives. The final DHS regulations explicitly provide for publishing a list of the mandatory removal offenses in the Federal Register and in DHS’s implementing directives and making these offenses known to employees annually. In last year’s testimony, we also suggested that DHS exercise caution when identifying specific removable offenses and the specific punishment. When developing and implementing the regulations, DHS might learn from the experience of the Internal Revenue Service’s (IRS) implementation of its mandatory removal provisions. We reported that IRS officials believed this provision had a negative impact on employee morale and effectiveness and had a “chilling effect” on IRS frontline enforcement employees who were afraid to take certain appropriate enforcement actions. Careful drafting of each removable offense is critical to ensure that the provision does not have unintended consequences. 
Under the DHS regulations, employees alleged to have committed these mandatory removal offenses are to have the right to a review by a newly created panel. DHS regulations provide for judicial review of the panel’s decisions. Members of this three-person panel are to be appointed by the Secretary for three-year terms. In last year’s testimony, we noted that the independence of the panel that is to hear appeals of mandatory removal actions deserved further consideration. The final regulations address the issue of independence by prescribing additional qualification requirements which emphasize integrity and impartiality and requiring the Secretary to consider any lists of candidates submitted by union representatives for panel positions other than the chair. Employee perception concerning the independence of this panel is critical to the mandatory removal process. Regarding the appeal of adverse actions other than mandatory removals, the DHS regulations generally preserve the employee’s basic right to appeal decisions to an independent body—MSPB—but with procedures different from those applicable to other federal employees. However, in a change from the proposed regulations in taking actions against employees for performance or conduct issues, DHS is to meet a higher standard of evidence—a “preponderance of evidence” instead of “substantial evidence.” For performance issues, while this higher standard of evidence means that DHS would face a greater burden of proof than most agencies to pursue these actions, DHS managers are not required to provide employees performance improvement periods, as is the case for other federal employees. For conduct issues, DHS would face the same burden of proof as most agencies. The regulations shorten the notification period before an adverse action can become effective and provide an accelerated MSPB adjudication process. 
In addition, MSPB may no longer modify a penalty for a conduct-based adverse action that is imposed on an employee by DHS unless such penalty was “wholly without justification.” The DHS regulations also stipulate that MSPB can no longer require that parties enter into settlement discussions, although either party may propose doing so. DHS expressed concerns that settlement should be a completely voluntary decision made by parties on their own. However, settling cases has been an important tool in the past at MSPB, and promotion of settlement at this stage should be encouraged. The final regulations continue to support a commitment to the use of Alternative Dispute Resolution (ADR), which we previously noted was a positive development. To resolve disputes in a more efficient, timely, and less adversarial manner, federal agencies have been expanding their human capital programs to include ADR approaches, including the use of ombudsmen as an informal alternative to addressing conflicts. ADR is a tool for supervisors and employees alike to facilitate communication and resolve conflicts. As we have reported, ADR helps lessen the time and the cost burdens associated with the federal redress system and has the advantage of employing techniques that focus on understanding the disputants’ underlying interests over techniques that focus on the validity of their positions. For these and other reasons, we believe that it is important to continue to promote ADR throughout the process. Under the DHS regulations, the scope and method of labor union involvement in human capital issues are to change. DHS management is no longer required to engage in collective bargaining and negotiations on as many human capital policies and processes as in the past. 
For example, certain actions that DHS has determined are critical to the mission and operations of the department, such as deploying staff and introducing new technologies, are now considered management rights and are not subject to collective bargaining and negotiation. DHS, however, is to confer with employees and unions in developing the procedures it will use to take these actions. Other human capital policies and processes that DHS characterizes as “non-operational,” such as selecting, promoting, and disciplining employees, are also not subject to collective bargaining, but DHS must negotiate the procedures it will use to take these actions. Finally, certain other policies and processes, such as how DHS will reimburse employees for any “significant and substantial” adverse impacts resulting from an action, such as a rapid change in deployment, must be negotiated. In addition, DHS is to establish its own internal labor relations board—the Homeland Security Labor Relations Board—to deal with most agencywide labor relations policies and disputes rather than submit them to the Federal Labor Relations Authority. DHS stated that the unique nature of its mission—homeland protection—demands that management have the flexibility to make quick resource decisions without having to negotiate them, and that its own internal board would better understand its mission and, therefore, be better able to address disputes. Labor organizations are to nominate names of individuals to serve on the Board and the regulations established some general qualifications for the board members. However, the Secretary is to retain the authority to both appoint and remove any member. Similar to the mandatory removal panel, employee perception concerning the independence of this board is critical to the resolution of the issues raised over labor relations policies and disputes. 
These changes have not been without controversy, and four federal employee unions have filed suit alleging that DHS has exceeded its authority under the statute establishing the DHS human capital system. Our previous work on individual agencies’ human capital systems has not directly addressed the scope of specific issues that should or should not be subject to collective bargaining and negotiations. At a forum we co-hosted exploring the concept of a governmentwide framework for human capital reform, which I will discuss later, participants generally agreed that the ability to organize, bargain collectively, and participate in labor organizations is an important principle to be retained in any framework for reform. It was also suggested at the forum that unions must be both willing and able to actively collaborate and coordinate with management if unions are to be effective representatives of their members and real participants in any human capital reform. With the issuance of the final regulations, DHS faces multiple challenges to the successful implementation of its new human capital system. We identified multiple implementation challenges at last year’s hearing. Subsequently, we reported that DHS’s actions to date in designing its human capital system and its stated plans for future work on its system are helping to position the department for successful implementation. Nevertheless, DHS was in the early stages of developing the infrastructure needed for implementing its new system. For more information on these challenges, as well as on related human capital topics, see the “Highlights” pages attached to this statement. We believe that these challenges are still critical to the success of the new human capital system. In many cases, DHS has acknowledged these challenges and made a commitment to address them in regulations. 
Today I would like to focus on two additional implementation challenges—ensuring sustained and committed leadership and establishing an overall communication strategy—and then reiterate challenges we previously identified, including providing adequate resources for implementing the new system and involving employees and other stakeholders in implementing the system. As DHS and other agencies across the federal government embark on large-scale organizational change initiatives, such as the new human capital system DHS is implementing, there is a compelling need to elevate, integrate, and institutionalize responsibility for such key functional management initiatives to help ensure their success. A Chief Operating Officer/Chief Management Officer (COO/CMO) or similar position can effectively provide the continuing, focused attention essential to successfully completing these multiyear transformations. Especially for an endeavor as critical as DHS’s new human capital system, such a position would serve to elevate attention that is essential to overcome an organization’s natural resistance to change, marshal the resources needed to implement change, and build and maintain the organizationwide commitment to new ways of doing business; integrate this new system with various management responsibilities so they are no longer “stovepiped” and fit it into other organizational transformation efforts in a comprehensive, ongoing, and integrated manner; and institutionalize accountability for the system so that the implementation of this critical human capital initiative can be sustained. We have work underway at the request of Congress to assess DHS’s management integration efforts, including the role of existing senior leadership positions as compared to a COO/CMO position, and expect to issue a report on this work next month. 
Another significant challenge for DHS is to ensure an effective and ongoing two-way communication strategy that creates shared expectations about, and reports related progress on, the implementation of the new system. GAO has reported that such a strategy is a key practice in change management initiatives. DHS’s final regulations recognize that all parties will need to make a significant investment in communication in order to achieve successful implementation of its new human capital system. According to DHS, its communication strategy will include global e-mails, satellite broadcasts, Web pages, and an internal DHS weekly newsletter. DHS stated that its leaders will be provided tool kits and other aids to facilitate discussions and interactions between management and employees on program changes. Given the attention the regulations have received, a critical implementation step is for DHS to put such a communication strategy into practice. Communication is not just about “pushing the message out.” Rather, it should facilitate a two-way, honest exchange with, and allow for feedback from, employees, customers, and key stakeholders. This communication is central to forming the effective internal and external partnerships that are vital to the success of any organization. Creating opportunities for employees to communicate concerns and experiences allows employees to feel that their input is acknowledged and important to management during the implementation of a change management initiative. Once this feedback is received, it is important that DHS consider and use it to make any appropriate changes to implementation. In addition, closing the loop by providing information on why key recommendations were not adopted is also important. OPM reports that the increased costs of implementing alternative personnel systems should be acknowledged and budgeted for up front. 
DHS estimates the overall costs associated with implementing the new DHS system—including the development and implementation of a new pay and performance system, the conversion of current employees to that system, and the creation of its new labor relations board—will be approximately $130 million through fiscal year 2007 (i.e., over a 4-year period), and less than $100 million will be spent in any 12-month period. We found that, based on the data provided by selected OPM personnel demonstration projects, direct costs associated with salaries and training were among the major cost drivers of implementing their pay for performance systems. Certain costs, such as those for initial training on the new system, are one-time in nature and should not be built into the base of DHS’s budget. Other costs, such as employees’ salaries, are recurring and thus would be built into the base of DHS’s budget for future years. We found that the demonstration projects managed salary costs by considering fiscal conditions and the labor market and by providing a mix of one-time awards and permanent pay increases. For example, rewarding an employee’s performance with an award instead of an equivalent increase to base pay can reduce salary costs in the long run because the agency only has to pay the amount of the award one time, rather than annually. However, one approach that the demonstration projects used to manage costs that is not included in the final regulations is the use of “control points.” We found that the demonstration projects used such a mechanism—sometimes called speed bumps—to manage progression through the pay bands, to help ensure that employees’ performance coincides with their salaries, and to prevent all employees from eventually migrating to the top of the band, which would increase costs. 
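The long-run cost difference between a one-time award and an equivalent permanent pay increase can be illustrated with a simple calculation. The dollar figures below are hypothetical and for illustration only; they are not drawn from DHS or demonstration project data:

```python
def extra_payroll_cost(amount, years, permanent):
    """Extra cost to the agency of a single payout decision, viewed
    over a multiyear period.

    A permanent base-pay increase recurs every year once it is built
    into salary; a one-time award is paid only once.
    """
    return amount * years if permanent else amount

# Hypothetical example: a $3,000 payout, viewed over 5 years.
raise_cost = extra_payroll_cost(3000, 5, permanent=True)
award_cost = extra_payroll_cost(3000, 5, permanent=False)
print(raise_cost)  # 15000 -- the raise is paid again each year
print(award_cost)  # 3000  -- the award is paid only once
```

This simple view understates the gap, since subsequent percentage raises compound on the higher base, making the recurring cost of a permanent increase larger still.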
According to the DHS regulations, its performance management system is designed to incorporate adequate training and retraining for supervisors, managers, and employees in the implementation and operation of the system. Each of OPM’s personnel demonstration projects trained employees on the performance management system prior to implementation to make employees aware of the new approach, as well as periodically after implementation to refresh employee familiarity with the system. The training was designed to help employees understand their applicable competencies and performance standards; develop performance plans; write self-appraisals; become familiar with how performance is evaluated and how pay increase and award decisions are made; and know the roles and responsibilities of managers, supervisors, and employees in the appraisal and payout processes. We reported in September 2003 that DHS’s and OPM’s effort to design a new human capital system was collaborative and facilitated participation of employees from all levels of the department. We recommended that the Secretary of DHS build on the progress that had been made and ensure that the communication strategy used to support the human capital system maximize opportunities for employee and key stakeholder involvement through the completion of design and implementation of the new system, with special emphasis on seeking the feedback and buy-in of frontline employees. In implementing this system, DHS should continue to recognize the importance of employee and key stakeholder involvement. Leading organizations involve employee unions, as well as employees directly, and consider their input in formulating proposals and before finalizing any related decisions. To this end, DHS’s revised regulations have attempted to recognize the importance of employee involvement in implementing the new personnel system. 
As we discussed earlier, the final DHS regulations provide for continuing collaboration in further development of the implementing directives and participation on the Compensation Committee. The regulations also provide that DHS is to involve employees in evaluations of the human capital system. Specifically, DHS is to provide designated employee representatives with the opportunity to be briefed and a specified timeframe to provide comments on the design and results of program evaluation. Further, employee representatives are to be involved in identifying the scope, objectives, and methodology to be used in the program evaluation and in reviewing draft findings and recommendations. DHS has recently joined other federal departments and agencies, such as the Department of Defense, GAO, the National Aeronautics and Space Administration, and the Federal Aviation Administration, in receiving authorities intended to help them manage their human capital strategically to achieve results. To help advance the discussion concerning how governmentwide human capital reform should proceed, GAO and the National Commission on the Public Service Implementation Initiative hosted a forum in April 2004 on whether there should be a governmentwide framework for human capital reform and, if so, what this framework should include. While there was widespread recognition among the forum participants that a one-size-fits-all approach to human capital management is not appropriate for the challenges and demands government faces, there was equally broad agreement that there should be a governmentwide framework to guide human capital reform. Further, a governmentwide framework should balance the need for consistency across the federal government with the desire for flexibility so that individual agencies can tailor human capital systems to best meet their needs. 
Striking this balance is not easy, but it is necessary to maintain a governmentwide system that is responsive enough to adapt to agencies’ diverse missions, cultures, and workforces. While there were divergent views among the forum participants, there was general agreement on a set of principles, criteria, and processes that would serve as a starting point for further discussion in developing a governmentwide framework for advancing human capital reform, as shown in figure 1. As the momentum for human capital reform accelerates, GAO is continuing to work with others to address issues of mutual interest and concern. For example, to follow up on the April forum, the National Academy of Public Administration and the National Commission on the Public Service Implementation Initiative convened a group of human capital stakeholders to continue the discussion of a governmentwide framework. As GAO has worked to support Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people, this subcommittee and others in Congress have continually provided us with the tools and authorities we need to carry out these responsibilities. We believe that it is vitally important to GAO’s future that we continue modernizing and updating our human capital policies and practices in light of the changing environment and anticipated challenges ahead. Given our human capital infrastructure and our unique role in leading by example in major management areas, including human capital management, we believe that the federal government will benefit from GAO’s experience with pay bands, pay for performance, and other human capital reforms. Unlike many executive branch agencies, which have either recently received or are just requesting new broad-based human capital tools and flexibilities, GAO has had certain human capital tools and flexibilities for over two decades. 
As a result of your continued support, GAO has been able to establish a successful track record with the implementation of pay banding, pay for performance, and other human capital authorities that have helped to ensure that GAO remains a world-class, professional services organization. In July 2004, the President signed into law the GAO Human Capital Reform Act of 2004 (Human Capital II), which, as you know, combines diverse initiatives that, collectively, should further GAO’s ability to enhance our performance; assure our accountability; and help ensure that we can attract, retain, motivate, and reward a top-quality and high-performing workforce currently and in future years. It is our vision that these initiatives not only ensure a high-performing workforce at GAO, but also serve as a guide to other agencies in their human capital transformation efforts. A key provision of Human Capital II is to allow the Comptroller General to adjust the rates of basic pay of GAO employees on a separate basis from the annual adjustments authorized for employees of the executive branch. GAO is implementing a compensation system that places greater emphasis on job performance while, at a minimum, protecting the purchasing power of employees who are performing acceptably and are paid within competitive compensation ranges. Since we testified before your subcommittee last summer, GAO has taken steps that will enable it to implement the pay adjustment provision. With the help of a human resources consulting firm, GAO developed new market-based compensation pay ranges for analysts, attorneys, and specialists; the new ranges are already in the first phase of implementation. Under the new market-based pay system, compensation decisions will consider an employee’s current salary and allocate individual performance-based compensation between a merit increase (i.e., a salary increase) and a performance bonus (i.e., cash). 
This year, I provided all analysts, attorneys, and specialists performing at the “meets expectations” level or above the across-the-board pay adjustment applicable to the executive branch. Later this year, GAO plans to conduct a similar study of market-based pay for the remainder of GAO’s workforce, who began the transition to performance-based compensation in 2004 with the introduction of pay banding and a new competency-based performance appraisal system. In addition, I and other GAO senior executives have continued to engage in a broad range of outreach and consultation activities with GAO staff before and during the implementation of the new market-based pay system. For example, I met with senior executives and employee representatives to obtain input about a new market-based approach and held two televised chats to inform staff of the results of the review and our plans for implementation. In addition, links from the GAO internal home page were established that allowed employees to review a series of fact sheets and explanatory charts, and to access copies of the presentations.

Summary Observations

The final regulations that DHS has issued represent a positive step towards a more strategic human capital management approach for both DHS and the overall government, a step we have called for in our recent High-Risk Series. Consistent with our observations last year, DHS’s regulations make progress towards a modern compensation system. DHS’s overall efforts in designing and implementing its human capital system can be particularly instructive for future human capital reform. Nevertheless, regarding the implementation of the DHS system, how it is done, when it is done, and the basis on which it is done can make all the difference in whether it will be successful. That is why it is important to recognize that DHS still has to fill in many of the details on how it will implement these reforms. 
These details do matter and they need to be disclosed and analyzed in order to fully assess DHS’s proposed reforms. We have made a number of suggestions for improvements the agency should consider in this process. It is equally important for the agency to ensure it has the necessary infrastructure in place to implement the system, not only an effective performance management system, but also the capabilities to effectively use the new human capital authorities and a strategic human capital planning process. Without this infrastructure, DHS will not succeed in its related reform efforts. DHS appears to be committed to continue to involve employees, including unions, throughout the implementation process, another critical ingredient for success. Specifically, under DHS’s final regulations, employee representatives or union officials are to have opportunities to participate in developing the implementing directives, as outlined under the “continuing collaboration” provisions; hold four membership seats on the Homeland Security Compensation Committee; and help in evaluations of the human capital system. A continued commitment to a two-way communication strategy that allows for ongoing feedback from employees, customers, and key stakeholders is central to forming the effective internal and external partnerships that are vital to the success of DHS’s human capital system. Finally, to help ensure the quality of that involvement, sustained leadership in a position such as a COO/CMO would serve to elevate, integrate, and institutionalize responsibility for the success of DHS’s human capital system and other key business transformation initiatives. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For further information, please contact Eileen Larence, Acting Director, Strategic Issues, at (202) 512-6806 or larencee@gao.gov. 
Major contributors to this testimony include Michelle Bracy, K. Scott Derrick, Karin Fangman, Janice Latimer, Jeffrey McDermott, Lisa Shames, and Michael Volpe.

The proposed human capital system is designed to be aligned with the department’s mission requirements and is intended to protect the civil service rights of DHS employees. Many of the basic principles underlying the DHS regulations are consistent with proven approaches to strategic human capital management, including several approaches pioneered by GAO, and deserve serious consideration. However, some parts of the system raise questions that DHS, OPM, and Congress should consider.

Pay and performance management: The proposal takes another valuable step towards results-oriented pay reform and modern performance management. For effective performance management, DHS should use validated core competencies as a key part of evaluating individual contributions to departmental results and transformation efforts.

The regulations provide for employees to appeal adverse actions to an independent third party. However, the process to identify mandatory removal offenses must be collaborative and transparent. DHS needs to be cautious about defining specific actions requiring employee removal and learn from the Internal Revenue Service’s implementation of its mandatory removal provisions.

Labor relations: The regulations recognize employees’ right to organize and bargain collectively, but reduce areas subject to bargaining. Continuing to involve employees in a meaningful manner is critical to the successful operations of the department.

Nearly half of DHS civilian employees are not covered by these regulations, including more than 50,000 Transportation Security Administration screeners. To help build a unified culture, DHS should consider moving all of its employees under a single performance management system framework.

DHS noted that it estimates that about $110 million will be needed to implement the new system in its first year. While adequate resources for program implementation are critical to program success, DHS is requesting a substantial amount of funding that warrants close scrutiny by Congress. The proposed regulations call for comprehensive, ongoing evaluations. DHS has begun to develop a strategic workforce plan. Such a plan can be used as a tool for identifying core competencies for staff and for attracting, developing, evaluating, and rewarding contributions to mission accomplishment. Continued evaluation and adjustments will help to ensure an effective and credible human capital system.

The Secretary of DHS and the Director of the Office of Personnel Management (OPM) released for public comment draft regulations for DHS’s new human capital system. This testimony provides preliminary observations on selected major provisions of the proposed system. The subcommittees are also releasing Human Capital: Implementing Pay for Performance at Selected Personnel Demonstration Projects (GAO-04-83) at today’s hearing.

www.gao.gov/cgi-bin/getrpt?GAO-04-479T. For more information, contact J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov.

The analysis of DHS’s effort to develop a strategic human capital management system can be instructive as other agencies request and implement new strategic human capital management authorities. DHS was provided with significant flexibility to design a modern human capital management system. Its proposed system has both precedent-setting implications for the executive branch and far-reaching implications on how the department is managed. GAO reported in September 2003 that the effort to design the system was collaborative and consistent with positive elements of transformation. In February, March, and April 2004 we provided preliminary observations on the proposed human capital regulations.

To date, DHS’s actions in designing its human capital management system and its stated plans for future work on the system are helping to position the department for successful implementation. Nonetheless, the department is in the early stages of developing the infrastructure needed for implementing its new human capital management system. DHS has begun strategic human capital planning efforts at the headquarters level since the release of the department’s overall strategic plan and the publication of proposed regulations for its new human capital management system. Strategic human capital planning efforts can enable DHS to remain aware of and be prepared for current and future needs as an organization. 
However, this will be more difficult because DHS has not yet been systematic or consistent in gathering relevant data on the successes or shortcomings of legacy component human capital approaches or current and future workforce challenges. Efforts are now under way to collect detailed human capital information and design a centralized information system so that such data can be gathered and reported at the departmentwide level.

Congressional requesters asked GAO to describe the infrastructure necessary for strategic human capital management and to assess the degree to which DHS has that infrastructure in place, which includes an analysis of the progress DHS has made in implementing the recommendations from our September 2003 report. DHS generally agreed with the findings of our report and provided more current information that we incorporated. However, DHS was concerned about our use of results from a governmentwide survey gathered prior to the formation of the department. We use this data because it is the most current information available on the perceptions of employees currently in DHS and helps to illustrate the challenges facing DHS.

DHS leaders have underscored their personal commitment to the design process. Continued leadership is necessary to marshal the capabilities required for the successful implementation of the department’s new human capital management system. Sustained and committed leadership is required on multiple levels: securing appropriate resources for the design, implementation, and evaluation of the human capital management system; communicating with employees and their representatives about the new system and providing opportunities for feedback; training employees on the details of the new system; and continuing opportunities for employees and their representatives to participate in the design and implementation of the system. In its proposed regulations, DHS outlines its intention to implement key safeguards. 
For example, the DHS performance management system must comply with the merit system principles and avoid prohibited personnel practices; provide a means for employee involvement in the design and implementation of the system; and overall, be fair, credible, and transparent. The department also plans to align individual performance management with organizational goals and provide for reasonableness reviews of performance management decisions through its Performance Review Boards. www.gao.gov/cgi-bin/getrpt?GAO-04-790.

The federal government is in a period of profound transition and faces an array of challenges and opportunities to enhance performance, ensure accountability, and position the nation for the future. High-performing organizations have found that to successfully transform themselves, they must often fundamentally change their cultures so that they are more results-oriented, customer-focused, and collaborative in nature. To foster such cultures, these organizations recognize that an effective performance management system can be a strategic tool to drive internal change and achieve desired results. Public sector organizations both in the United States and abroad have implemented a selected, generally consistent set of key practices for effective performance management that collectively create a clear linkage—“line of sight”—between individual performance and organizational success. These key practices include the following.

1. Align individual performance expectations with organizational goals. An explicit alignment helps individuals see the connection between their daily activities and organizational goals.

2. Connect performance expectations to crosscutting goals. 
Placing an emphasis on collaboration, interaction, and teamwork across organizational boundaries helps strengthen accountability for results.

3. Provide and routinely use performance information to track organizational priorities. Individuals use performance information to manage during the year, identify performance gaps, and pinpoint improvement opportunities.

4. Require follow-up actions to address organizational priorities. By requiring and tracking follow-up actions on performance gaps, organizations underscore the importance of holding individuals accountable for making progress on their priorities.

5. Use competencies to provide a fuller assessment of performance. Competencies define the skills and supporting behaviors that individuals need to effectively contribute to organizational results.

6. Link pay to individual and organizational performance. Pay, incentive, and reward systems that link employee knowledge, skills, and contributions to organizational results are based on valid, reliable, and transparent performance management systems with adequate safeguards.

7. Make meaningful distinctions in performance. Effective performance management systems strive to provide candid and constructive feedback and the necessary objective information and documentation to reward top performers and deal with poor performers.

8. Involve employees and stakeholders to gain ownership of performance management systems. Early and direct involvement helps increase employees’ and stakeholders’ understanding and ownership of the system and belief in its fairness.

Based on previously issued reports on public sector organizations’ approaches to reinforce individual accountability for results, GAO identified key practices that federal agencies can consider as they develop modern, effective, and credible performance management systems. www.gao.gov/cgi-bin/getrpt?GAO-03-488. 
9. Maintain continuity during transitions. Because cultural transformations take time, performance management systems reinforce accountability for change management and other organizational goals.

There is widespread agreement that the federal government faces a range of challenges in the 21st century that it must confront to enhance performance, ensure accountability, and position the nation for the future. Federal agencies will need the most effective human capital systems to address these challenges and succeed in their transformation efforts during a period of likely sustained budget constraints. Forum participants discussed two questions: (1) should there be a governmentwide framework for human capital reform, and (2) if so, what should a governmentwide framework include? More progress in addressing human capital challenges was made in the last 3 years than in the last 20, and significant changes in how the federal workforce is managed are underway. There was widespread recognition that a “one size fits all” approach to human capital management is not appropriate for the challenges and demands government faces. However, there was equally broad agreement that there should be a governmentwide framework to guide human capital reform, built on a set of beliefs that entail fundamental principles and on boundaries that include the criteria and processes establishing checks and limitations when agencies seek and implement their authorities. While there were divergent views among the participants, there was general agreement that the following served as a starting point for further discussion in developing a governmentwide framework to advance needed human capital reform.
On April 14, 2004, GAO and the National Commission on the Public Service Implementation Initiative hosted a forum with selected executive branch officials, key stakeholders, and other experts to help advance the discussion concerning how governmentwide human capital reform should proceed.
At the center of any agency transformation, such as the one envisioned for the Department of Homeland Security (DHS), are the people who will make it happen. Thus, strategic human capital management at DHS can help it marshal, manage, and maintain the people and skills needed to meet its critical mission. Congress provided DHS with significant flexibility to design a modern human capital management system, and DHS and the Office of Personnel Management (OPM) have now jointly released the final regulations on DHS's new human capital system. Last year, with the release of the proposed regulations, GAO observed that many of the basic principles underlying the regulations were consistent with proven approaches to strategic human capital management and deserved serious consideration. However, some parts of the human capital system raised questions for DHS, OPM, and Congress to consider in the areas of pay and performance management, adverse actions and appeals, and labor-management relations. GAO also identified multiple implementation challenges for DHS once the final regulations for the new system were issued. This testimony provides preliminary observations on selected provisions of the final regulations. GAO believes that the regulations contain many basic principles that are consistent with proven approaches to strategic human capital management. For example, many elements of a modern compensation system--such as occupational clusters, pay bands, and pay ranges that take into account factors such as labor market conditions--are to be incorporated into DHS's new system. However, the final regulations are intended to provide an outline rather than a detailed, comprehensive presentation of how the new system will be implemented. Thus, DHS has considerable work ahead to define how its system will be implemented, and understanding these details is important in assessing the overall system.
The implementation challenges we identified last year remain critical to the success of the new system. DHS also appears committed to continuing to involve employees, including unions, throughout the implementation process. Specifically, according to the regulations, employee representatives or union officials are to have opportunities to participate in developing the implementing directives, hold four membership seats on the Homeland Security Compensation Committee, and help design, and review the results of, evaluations of the new system. Further, GAO believes that to help ensure the quality of that involvement, DHS will need to ensure sustained and committed leadership. A Chief Operating Officer/Chief Management Officer or similar position at DHS would serve to elevate, integrate, and institutionalize responsibility for this critical endeavor and help ensure its success by providing the continuing, focused attention needed to successfully complete the multiyear conversion to the new human capital system. DHS will also need to establish an overall communication strategy. According to DHS, its planned communication strategy for its new human capital system will include global e-mails, satellite broadcasts, Web pages, and an internal DHS weekly newsletter. A key implementation step for DHS is to ensure an effective, ongoing two-way communication effort that creates shared expectations among managers, employees, customers, and stakeholders. While GAO strongly supports human capital reform in the federal government, how it is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. GAO's implementation of its own human capital authorities, such as pay bands and pay for performance, could help inform other organizations as they design systems to address their human capital needs.
The final regulations for DHS's new system are especially critical because of the potential implications for related governmentwide reforms.
FDIC was created in 1933 in response to the thousands of bank failures that occurred in the 1920s and early 1930s. FDIC’s mission is to maintain stability and public confidence in the U.S. financial system by insuring depositor accounts in banks and thrifts, examining and supervising financial institutions, and managing receiverships. Currently, FDIC insures individual accounts at insured institutions for up to $100,000 per depositor and up to $250,000 for certain retirement accounts. FDIC says that since the start of its insurance coverage in January 1934, depositors have not lost any insured funds to a bank failure. Today, FDIC’s obligations are considerable—as of September 2006, 8,743 insured U.S. institutions held $6.47 trillion in domestic deposits, of which an estimated 63.2 percent, or $4.09 trillion, were insured. To protect depositors, FDIC held insurance reserves of $50 billion as of September 2006. FDIC directly supervises about 5,237 banks and thrifts, more than half of the institutions in a banking system jointly overseen by four federal regulators. By assets, however, FDIC-supervised institutions account for only 18.1 percent of the industry. Banks and thrifts can receive charters from the states or from the federal government; state-chartered banks may elect to join the Federal Reserve System. FDIC serves as the primary federal regulator for banks chartered by the states that are not members of the Federal Reserve System. In addition, FDIC is the back-up supervisor for insured banks and thrift institutions that are either state-chartered or under the direct supervision of one of the other federal banking regulators. FDIC receives no congressional appropriations; it receives funds from premiums that banks and thrift institutions pay for deposit insurance coverage and from earnings on investments in U.S. Treasury securities. FDIC’s five board members (known as directors) manage the agency.
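The deposit and reserve figures above fit together arithmetically; the short sketch below cross-checks them (the variable names are ours, introduced only for illustration, and all values come from the September 2006 figures cited in the text):

```python
# Figures reported as of September 2006 (trillions of dollars).
domestic_deposits = 6.47          # total domestic deposits
insured_share = 0.632             # estimated 63.2 percent insured
reserves = 0.050                  # $50 billion in insurance reserves

insured_deposits = domestic_deposits * insured_share
print(f"Insured deposits: ${insured_deposits:.2f} trillion")  # ~$4.09 trillion
print(f"Reserves relative to insured deposits: {reserves / insured_deposits:.2%}")
```

The second line prints the ratio of reserves to estimated insured deposits, roughly the sense in which FDIC's reserves are measured against its insurance obligations.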
FDIC’s chairman manages and directs the daily executive and administrative operations of the agency. The chairman also has the general powers and duties that the chief executive officer for a private corporation usually has, even though FDIC is a federal government agency. Executive and senior FDIC staff report to the chairman directly or indirectly through the Deputy to the Chairman and Chief Operating Officer, or the Deputy to the Chairman and Chief Financial Officer; no other board director has similar authority or responsibility within the agency. The President appoints three of the members, two of whom he designates as the board’s chairman and vice chairman. The other two members, the Comptroller of the Currency and the Director of the Office of Thrift Supervision, serve as ex-officio board members. The three members directly appointed to FDIC’s board are often referred to as inside board directors, while the other two are referred to as outside board directors. FDIC operates principally through three divisions: the Division of Supervision and Consumer Protection, which supervises insured institutions and is responsible for promoting compliance with consumer protection, fair lending, community reinvestment, civil rights, and other laws; the Division of Insurance and Research, which assesses risks to the insurance fund, manages FDIC’s risk-related premium system, conducts banking research, publishes banking data and statistics, analyzes policy alternatives, and advises the board of directors and others in the agency; and the Division of Resolutions and Receiverships, which handles closure and liquidation of failed institutions. Other divisions include the Division of Administration, the Division of Finance, the Legal Division, and the Division of Information Technology (see fig. 1). 
FDIC currently employs about 4,500 people in 6 regional offices, 2 area offices, and 85 geographically dispersed field offices, with centralized operations in Washington, D.C. Following the resolution of the banking crisis of the 1980s and early 1990s, FDIC significantly reduced its workforce—down by about 80 percent, from a peak of about 23,000 employees in 1991 to about 4,500 employees as of June 2006. This trend is illustrated in figure 2. A significant portion of the reductions involved staff FDIC had absorbed from the former Resolution Trust Corporation (RTC). FDIC’s downsizing generally reduced jobs across the agency, and some occupational categories experienced sizeable reductions in staff. For example, the attorney workforce decreased by 83 percent, from 1,452 attorneys in 1992 to 249 attorneys in 2005. The composition of FDIC’s examination staff also changed significantly. Although there was a 35 percent decrease in the number of examiners (from 3,305 in 1992 to 2,157 in 2005), the percentage of FDIC’s workforce devoted to examinations increased, from 15 percent in 1992 to 47 percent in 2005. Like other federal banking regulators, FDIC is generally required to conduct full-scope, on-site examinations of institutions it directly supervises at least annually, although it can extend the interval to 18 months for certain small institutions. FDIC’s downsizing activities also resulted in a loss of institutional knowledge and expertise, and FDIC will have to replace a significant percentage of its current, highly experienced executive and management staff due to projected retirements over the next 5 years. An estimated 8 to 16 percent of FDIC’s remaining permanent workforce is projected to retire over the next 5 years. In some FDIC divisions, projected retirements are almost double these percentages.
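The percentage declines cited above follow directly from the head counts; as a quick arithmetic check (the variable names and helper function are ours, not GAO's):

```python
# Head counts cited in the text; the percentage declines are derived arithmetic.
total_1991, total_2006 = 23000, 4500
attorneys_1992, attorneys_2005 = 1452, 249
examiners_1992, examiners_2005 = 3305, 2157

def pct_decline(old, new):
    """Percentage decrease from old to new."""
    return (old - new) / old * 100

print(f"Overall workforce: down {pct_decline(total_1991, total_2006):.0f}%")  # ~80%
print(f"Attorneys: down {pct_decline(attorneys_1992, attorneys_2005):.0f}%")  # ~83%
print(f"Examiners: down {pct_decline(examiners_1992, examiners_2005):.0f}%")  # ~35%
```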
FDIC’s board of directors has a mix of knowledge and skills that contribute diverse perspectives in the board’s decision making, and the board relies on communication with deputies and senior management within FDIC to provide timely and useful information for effective and informed decision making. The board has also established standing committees to conduct certain oversight functions, such as monitoring the implementation of audit report recommendations, to help manage the agency. Further, FDIC’s board of directors has the ability to broadly delegate its authority to allow the agency to operate efficiently. These delegations are extensive and have been reviewed periodically to ensure they are appropriate for FDIC’s current size and structure, and the current banking environment. The literature we reviewed on best practices for boards of directors states that the composition of the board should be tailored to meet the needs of the organization, but there should also be a mix of knowledge and skills. FDIC’s board of directors reflects a mix of knowledge, perspectives, and political affiliations; for example, FDIC’s board includes the directors of the Office of the Comptroller of the Currency and Office of Thrift Supervision as well as a director with experience in state bank supervision. Further, after February 28, 1993, no more than three of the members of the board of directors could be members of the same political party. According to FDIC board members, each director provides a different perspective that contributes to board diversity. Additionally, officials told us that the presence of the outside directors on the board helps to represent the views of their respective agencies during joint rule making. Senior FDIC officials and board directors agreed that the board functions best with a full complement of directors. Vacancies on the board could result in the board not benefiting from the perspectives of a full complement of directors. 
Board members told us that without a full complement, there would be fewer ideas and opinions during board deliberations. For example, one board member stated that the possible absence of a member with state bank supervisory experience might affect discussions on state banks. However, FDIC board members told us that board vacancies would not negatively affect the daily operation of the agency. According to our standards for internal control, effective communications should occur in a broad sense, with information flowing down, across, and up the organization. The literature we reviewed related to best practices for boards of directors suggests that boards need quality and timely information to help them obtain a thorough understanding of important issues. The literature states that board members should receive information through formal channels, such as management reports and committee meetings, and informal channels, such as phone or e-mail discussions. FDIC directors told us that board members are fully aware of and familiar with operations at the agency, frequently communicating and interacting with senior management and staff on a broad range of issues. For example, board directors told us they have regular meetings with various division managers to discuss agency issues. We also observed a November 2006 board meeting, where the board members’ few questions and supportive comments to FDIC staff suggested that the members were well informed about the staff’s recommendations. Directors explained that there is a free flow of information between directors and FDIC senior management and staff as well as between directors and the board chairman. Each director also has a deputy who assists him or her in carrying out his or her duties and responsibilities. With the assistance of their deputies, outside (ex-officio) directors are able to remain engaged in pertinent issues at FDIC.
The deputies also assist directors in examining diverse policy issues of concern to the agency, whether initiated by the director or requested by the chairman. FDIC management provides the bulk of the information that directors use to make decisions. For example, FDIC management briefs board directors on various issues and provides detailed briefing books in advance of FDIC board meetings so that directors may ask questions or request more information to prepare to provide input and make decisions at board meetings. In one May 2006 board meeting, we observed FDIC staff making brief presentations to the board highlighting various trends and factors that they considered in developing recommended action or inaction for several agenda items. We also reviewed board meeting agendas that outlined substantive issues considered by the board of directors. In one example, the Director of the Division of Supervision and Consumer Protection provided board members with a detailed written overview of a notice of proposed rule making weeks before the official board meeting. Further, directors told us that informal communication with their deputies, other board members, and senior management occurs through phone conversations, e-mail discussions, and impromptu meetings. FDIC’s board of directors established standing committees to conduct certain oversight functions that assist it in managing the agency. The board grants these committees authority to act on certain matters or to make recommendations to the board of directors on various matters presented to them. Currently, the board has four standing committees: (1) Case Review Committee, (2) Supervision Appeals Review Committee, (3) Assessment Appeals Committee, and (4) Audit Committee.
Each committee is governed by formal rules that cover areas such as membership, functions and duties, and other process and reporting requirements, such as the frequency and scope of committee meetings and, in some cases, submission of activity reports to the board. The Case Review Committee comprises six members who adopt guidelines for taking enforcement actions against individuals, for example, to remove an individual from participating in the affairs of an insured depository institution. Under authority granted to it by the board of directors, this committee also reviews and approves the initiation of certain enforcement actions upon determination by a designated representative of the Division of Supervision and Consumer Protection or upon request by the chair of the committee. The Supervision Appeals Review Committee, comprising four members, considers and decides appeals by FDIC-supervised institutions of material supervisory determinations; for example, an institution may appeal a rating in its report of examination. The Assessment Appeals Committee is a six-member committee that considers and decides appeals regarding assessments to insured depository institutions. As an appellate entity, the committee is responsible for making final determinations pursuant to regulations regarding the assessment risk classification and the assessment payment calculation of insured depository institutions. Last, the Audit Committee comprises three members who are charged with reviewing reports of completed audits and requesting necessary follow-up on the audit recommendations.
The committee also oversees the agency’s financial reporting and internal controls, including reviewing and approving plans for compliance with the audit and financial reporting provisions applicable to government corporations, assessing the sufficiency of FDIC’s internal control structure, and ensuring compliance with applicable laws, regulations, and internal and external audit recommendations, all for the purpose of rendering advice to the chairman of the board of directors. The literature we reviewed on recommended practices for boards of directors of publicly traded corporations states that audit committees play a critical role in the board oversight process. In most publicly traded corporations, the primary role of the board’s audit committee is oversight of the preparation and filing of financial statements with the appropriate regulators and exchanges. However, FDIC’s board directors and officials told us that FDIC’s Audit Committee does not serve the same function as an audit committee of a private sector corporation. FDIC’s Audit Committee is an advisory body that, in practice, conducts a more limited scope of duties than what is authorized in its formal rules. Further, as stated above, FDIC is subject to certain audit and financial reporting provisions. FDIC’s board has established the position of chief financial officer as FDIC’s chief financial, accounting, and budget officer. Although FDIC is not subject to title II of the Chief Financial Officers Act of 1990 (CFO Act), which requires 24 executive agencies to appoint chief financial officers, FDIC’s chief financial officer’s duties include implementing programs consistent with the CFO Act. Thus, FDIC’s Audit Committee’s responsibilities do not include oversight of the preparation and filing of financial statements and other activities generally conducted by private sector audit committees.
Instead, FDIC’s Audit Committee’s primary responsibility is ensuring that the recommendations of FDIC’s Inspector General are appropriately implemented. Also, section 301 of the Sarbanes-Oxley Act requires audit committees of publicly traded corporations to be composed entirely of independent members. Although FDIC is not bound by these requirements, according to FDIC officials, Audit Committee members are considered independent of FDIC management because they do not have direct responsibility over any FDIC division or office. However, in one instance, FDIC revised the composition of the Audit Committee because of a perceived impairment to independence. FDIC’s Chief Financial Officer had been a member of the Audit Committee because this official was also a deputy to the chairman and therefore eligible for the senior employee position on the committee. However, FDIC concluded that it was inappropriate for the Chief Financial Officer to serve on the Audit Committee because certain committee functions—reviewing materials related to FDIC’s finances, for example—had the potential to conflict with the professional interests of the Chief Financial Officer. FDIC officials stated that interactions between FDIC’s Inspector General and the Audit Committee also help mitigate concerns about impairments to independence and conflicts of interest. For example, officials from FDIC’s Office of the Inspector General can attend Audit Committee meetings. Audit Committee members noted that they valued the insights provided by officials from the Office of the Inspector General because those officials have an opportunity to weigh in on matters where the Audit Committee may not be able to sufficiently distance itself to provide objective oversight. FDIC’s board of directors delegates much of the agency’s operational responsibilities to various committees and offices within FDIC.
These delegations allow the board to concentrate on policy matters as opposed to daily agency operations. FDIC’s current delegations of authority were influenced by prior events that necessitated broad delegations. According to an FDIC official, very few activities were initially delegated to FDIC staff. However, during the banking crisis of the 1980s and early 1990s when FDIC resolved many institutions, there were significant transfers of authority from the board to divisional personnel. During that period, FDIC had over 20,000 employees and the need for sweeping delegations was appropriate for the size of the agency and the industry’s conditions. The board was overwhelmed with making decisions stemming from the agency’s increased workload and decided to delegate many routine matters to FDIC staff. However, there are some activities that the board cannot delegate. For example, only the board can decide to deny an application for deposit insurance, terminate deposit insurance, or take enforcement actions using the board’s backup authority. According to our Standards for Internal Control in the Federal Government, conscientious management and effective internal controls are affected by the way in which the agency delegates authority and responsibility throughout the organization. An agency’s delegations should cover authority and responsibility for operating activities, reporting relationships, and authorization protocols. Once the board has a full understanding of an issue, it may allow others to make decisions concerning that issue through delegations. FDIC officials explained that delegations of authority are documented, and there are associated reporting requirements. Further, FDIC has procedures for issuing, reviewing, and amending delegations of authority within FDIC divisions and offices. 
Once delegations of authority have been issued by the board, officials who receive those delegations are to observe an FDIC directive when redelegating their authority. The February 2004 directive to all FDIC divisions and offices formalizes policies and procedures for issuing delegations of authority throughout the agency and applies to all delegations issued by the board as well as redelegations and subdelegations to FDIC managers, supervisors, and other staff. According to the directive, the headquarters division or office issuing a delegation of authority is to prepare its delegations, including any revisions, in coordination with FDIC’s Legal Division and submit the delegations to FDIC’s executive secretary. Further, according to the directive, the divisions are to review delegations at least once a year for accuracy. After each review, the Executive Secretary Section of FDIC’s Legal Division is to review submitted delegations for completeness and compile any revisions to the delegations. The Executive Secretary Section should also track board and other FDIC management activity, for example, corporate reorganizations and title changes, to ensure that the delegations of authority fully reflect these changes. We reviewed FDIC documents that track delegations of authority related to the processing of financial institution applications, for example, applications to engage in real estate investment activities. These documents indicate where delegations of authority have been changed or clarified relative to existing delegation guidance. Furthermore, the Executive Secretary Section is to regularly monitor, issue periodic notices, and follow up, if necessary, with senior-level officials to ensure that all divisions and offices comply with established procedures and deadlines for FDIC headquarters delegations of authority.
FDIC officials told us that the annual reviews required by the directive are undertaken to assess the technical conformity and consistency of delegations. Although the directive requires only an annual review, FDIC officials stated that in practice, the Executive Secretary Section works with FDIC’s divisions and offices on a continuous basis to ensure delegations are complete, consistent, and compliant with standard procedures. The officials added that divisions appreciate having a standard format for issuing and documenting delegations. In addition to the periodic reviews required by the directive, FDIC has broadly reviewed its delegations of authority on other occasions. One broad review occurred from 1995 to 1997, after the banking crisis and the merger with the Resolution Trust Corporation, which resulted in a significant reduction in staff. A corporate delegation task force was assembled to review existing delegations, comment on them, and make recommendations on how they could be improved. The scope of the review was intended to encompass all aspects of FDIC’s delegations, from those governing internal management and administration to those governing how FDIC accomplished its mission. An FDIC official noted that it was vital that the agency have logical, well-reasoned delegations of authority and that they be kept current, which formed the basis of the task force’s work. FDIC’s Office of the Executive Secretary (currently the Executive Secretary Section) coordinated the review of delegations by the board of directors and the development of recommendations for changes that would reduce processing time, empower employees, and promote accountability. FDIC also completed a broad review of its delegations in 2002. At that time, FDIC rescinded a series of delegations that were previously codified in the Code of Federal Regulations in favor of adopting a board resolution that contained a master set of delegations.
This format made modifying the delegations more efficient. During the consolidation process, FDIC made several changes to certain delegations; for example, delegations related to FDIC’s receivership activities were amended to streamline the process for approving receivership-related actions. There were also occasions that necessitated the reexamination of specific delegations. According to a senior FDIC official, any board member has the right to request a review of any delegated authority. The official stated that it is not uncommon for a newly appointed chairman to review existing delegations of authority to ensure they are aligned with his or her vision and management style. In one recent instance, delegations related to the processing of industrial loan corporation applications were rescinded and a 6-month moratorium implemented to allow the agency, at the request of the current chairman, the opportunity to examine developments related to these specialized institutions. Further, the official stated that FDIC divisions and offices can request a review of their delegations of authority. As noted earlier, technical changes to the delegations covered by the directive, such as position titles and division names, are typically handled between the Executive Secretary Section and the divisions. However, officials explained that the board would be informed of more substantive issues that would require a board vote. In most instances, the request for a review is related to a delegation that is outdated or needs clarification. The board reviews the request and any relevant information and votes to amend or rescind delegations. Although FDIC has a process for making substantive changes to delegations, instances may arise that prompt the need for specific reviews of delegations that are perceived as vague or ambiguous.
For example, a 2006 FDIC Inspector General’s report found a lack of clarity as to whether the board could delegate the calculation of the reserve ratio to FDIC officials. According to the report, the nature, timing, and application of a new method for estimating certain insured deposits could have had a significant impact on the deposit insurance fund’s reserve ratios. The report concluded that the delegations to the Director of the Division of Insurance and Research established an expectation that the Director should communicate and advise the board on financial matters of importance to the agency and the banking industry. However, the report found that communication between the FDIC board and deputies on the issue of estimated insured deposit allocations was limited and that FDIC staff should have more fully involved the board in the decision of whether and how to apply a new method for estimating certain insured deposits. The report recommended a review of the agency’s existing bylaws, specifically the powers and duties delegated to the Chief Financial Officer and to the Directors of the Division of Finance and the Division of Insurance and Research, to ensure that those delegations reflect the board’s intent and expectation for the deposit insurance fund reserve ratio and assessment determination process. The report also recommended that FDIC review its delegations related to the assessment determination process to determine whether those delegations needed to be clarified or modified. In response to the Inspector General’s recommendations, FDIC is currently reviewing specific delegations of authority. As of December 2006, a senior FDIC official was preparing a proposal to present to the Audit Committee outlining the details of the review. FDIC has strengthened its human capital framework and uses an integrated approach to align its human capital strategies with its mission and goals.
For example, interdivisional decision making, where senior executives come together with division managers and staff from mission support divisions, is a key component of FDIC’s human capital strategy for ensuring functional alignment of its mission-critical work. Using this integrated approach, FDIC created the Corporate Employee Program to provide a flexible workforce and to train new employees in multiple FDIC divisions. However, the program’s effects on mission-critical functions are unknown, and its contributions to specific job tasks may take a number of years to realize. FDIC’s Corporate University, the agency’s training and development division, evaluates all of its training programs—including the Corporate Employee Program—and is currently implementing a scorecard to measure its progress toward meeting its human capital goals. The scorecard currently includes an output performance measure for the Corporate Employee Program; however, FDIC has not developed outcome-based performance measures that would assist it in determining whether its key training and development programs are effective. Without such measures, FDIC will not be able to determine how effective its training and development initiatives are in helping the agency achieve its mission and human capital goals. Effective management of human capital is critical at FDIC, where the workload can shift dramatically depending on conditions in the economy and the banking industry. Therefore, FDIC has taken a number of steps to strengthen and institutionalize certain elements of its human capital framework. FDIC established a Human Resources Committee to help the agency integrate human capital approaches into its overall mission planning efforts. It also established the Corporate University, an employee training and development division that aligns agency needs with learning and development. 
Finally, in response to an FDIC Inspector General report, the agency developed a human capital blueprint that describes the key elements of its human capital framework. FDIC established its Human Resources Committee in 2001 to integrate strategic human capital management into the agency’s planning and decision making processes. The committee, consisting of members from several divisions across the agency, focuses on developing and evaluating human capital strategies with agencywide impact. The committee also coordinates FDIC’s human capital planning process. In June 2004, FDIC approved a formal charter for the committee to ensure that future leaders and stakeholders continue the committee’s work. The committee’s charter describes its purpose, functions, responsibilities, and composition. FDIC’s Chief Human Capital Officer serves as chair of the Human Resources Committee. FDIC appointed the Chief Human Capital Officer (CHCO) to align the agency’s human capital policies and programs to the agency’s mission, goals, and outcomes. Because FDIC’s Human Resources Committee brings together executives in the major divisions and personnel in support divisions, it is able to develop approaches for accomplishing the agency’s mission and goals. Our prior work on strategic human capital planning has shown that effective organizations integrate human capital approaches into their efforts for accomplishing their missions and goals. Such integration allows an agency to ensure that its core processes efficiently and effectively support its mission. In April 2003, we reported that establishing entities, such as human capital councils like FDIC’s Human Resources Committee, was a key action agencies could take to integrate human capital approaches with strategies for achieving their missions. 
Composed of senior agency officials, including both program leaders and human capital leaders, these human capital councils meet regularly to review the progress of their agency’s integration efforts and to make certain that the human capital strategies remain visible, viable, and relevant. Additionally, the groups help the agencies monitor whether differences in human capital approaches throughout the agencies are well considered, effectively contribute to outcomes, and are equitable in their implementation. In this regard, FDIC’s Human Resources Committee (HRC) brings together the support functions of FDIC’s Division of Administration (DOA), Division of Finance (DOF), Legal Division, Division of Information and Technology (DIT), and Corporate University (CU) with executives from the major line divisions—Division of Supervision and Consumer Protection (DSC), Division of Insurance and Research (DIR), and Division of Resolutions and Receiverships (DRR). See figure 3. The committee members stated that having representatives from various divisions within the agency allows them to integrate all views into the decision-making process. The committee meets weekly, typically for 2 or 3 hours, and works to facilitate communication and consensus throughout FDIC on human capital issues. The committee also advises senior leadership on significant human resources issues. Human Resources Committee members told us that they review policy recommendations and share information with their respective division directors. Further, committee members stated that division managers are able to bring the concerns of their subordinate staff to the committee, and managers are able to notify their subordinate staff of human capital initiatives that may address those concerns. For example, staff members are able to communicate training needs to the Human Resources Committee through their division managers. 
The division representatives on the Human Resources Committee are able to communicate information to the managers about future training programs that would meet staff needs. According to committee members, this helps facilitate the flow of information to and from division managers and subordinate staff. Another step FDIC took to strengthen its human capital framework was establishing its Corporate University in 2003. Corporate University supports the agency’s mission and goals by training and developing FDIC employees. Corporate University provides training and development opportunities for FDIC executives, managers, supervisors, and employees in order to help them enhance their job performance. Before establishing Corporate University, FDIC training was focused and confined within individual divisions; the agency gave relatively little attention to building a corporate culture or making employees aware of activities outside their own divisions. However, since establishing Corporate University, FDIC’s efforts have been lauded for reflecting best practices in aligning training functions with the agency’s mission and goals. In 2005, FDIC’s Corporate University received an excellence award from the Corporate University XChange for its organizational structure and alignment within the agency. The Corporate University XChange cited features of FDIC’s Corporate University that made it appropriately aligned within the agency, such as a Governing Board that includes division managers and deans and chairs from the divisions who serve on a rotational basis. FDIC’s Corporate University works with the Human Resources Committee, the Corporate University’s Governing Board, and deans to design curricula and implement training programs. The structure of Corporate University is intended to support a balance between the agency’s goals and the needs of the individual divisions. 
FDIC’s Chief Operating Officer, Chief Financial Officer, and division directors work with the Chief Learning Officer to deliver training and development programs. Corporate University also has structures in place to facilitate the exchange of information related to the training needs of the Division of Supervision and Consumer Protection. Two committees—the Curriculum Oversight Group and the Training Oversight Committee—assist Corporate University in identifying training and development needs. The Curriculum Oversight Group consists of midlevel supervisors who meet with Corporate University staff to map out training needs and curriculum changes that require focused strategies. The Training Oversight Committee consists of senior managers who provide information on skills needed within the Division of Supervision and Consumer Protection. Last, in response to a 2004 FDIC Inspector General audit report, the agency established an integrated human capital blueprint in December 2004. The report recommended that FDIC develop a coherent human capital blueprint that comprehensively describes the agency’s human capital framework and establishes a process for agency leaders to monitor the alignment and success of human capital initiatives. The report noted that such a blueprint would be beneficial because it would, among other things, promote an agencywide understanding of the human capital program. According to FDIC officials, the blueprint describes the key elements of FDIC’s human capital framework and recognizes the collective responsibility of various FDIC divisions and offices for the success of its strategic human capital initiatives. Figure 4 illustrates FDIC’s human capital blueprint. Our previous work on strategic human capital planning suggests that human capital professionals and line managers should share accountability for integrating human capital strategies into the planning and decision-making processes. 
Our work further states that successful organizations have human capital professionals work with agency leaders and managers to develop strategic and programmatic plans to accomplish agency goals. This process results in agency and human capital leaders sharing accountability for successfully integrating strategic human capital approaches into the planning and decision making of the agency. FDIC’s human capital blueprint includes processes for agency leaders to participate in aligning the agency’s human capital initiatives with its goals. The blueprint considers how major environmental factors, such as the economy and the banking industry, affect the agency’s mission and goals. FDIC considers these external factors when it conducts assessments of workload and skill requirements. These assessments ultimately guide FDIC’s Human Resources Branch, the Human Resources Committee, and Corporate University in developing and implementing initiatives to address human capital needs. A key part of FDIC’s human capital strategy is the Corporate Employee Program, which cross-trains employees in multiple FDIC divisions so that they can respond rapidly to shifting priorities and changes in workload. According to FDIC officials, the Corporate Employee Program reflects a more collaborative approach to meeting mission-critical functions. Launched in June 2005, the Corporate Employee Program provides opportunities for employees at all levels to identify, develop, and apply various skills through training opportunities and work assignments. According to FDIC memoranda describing the program, the increased speed at which changes can occur in individual insured institutions and in the entire financial industry, and hence the speed at which FDIC’s workload can change, requires FDIC to ensure that it can respond effectively and quickly. 
The memoranda further state that cross-training programs and cross-divisional mobility will provide FDIC employees with broader career experiences and enhanced job satisfaction while allowing FDIC to have more than enough people within the organization with the essential training and experience needed to respond to significant events. The goals of the Corporate Employee Program are to provide employees with the skills needed to address significant spikes in workloads that may temporarily require shifting resources among FDIC’s three main divisions; promote a corporate perspective and a corporate approach to problem solving; facilitate communication and the transfer of knowledge across all FDIC divisions; and foster greater career opportunity and job satisfaction. In March 2005, FDIC began pursuing three initial strategies for implementing the Corporate Employee Program: a crossover program, voluntary rotational assignments, and new hiring. The voluntary crossover program, intended to integrate key skill sets across business lines, allows FDIC staff in the Division of Resolutions and Receiverships to apply for in-service training in the Division of Supervision and Consumer Protection, which will require that they obtain commissioned examiner status within a specific time frame. The voluntary rotational assignments provide current examiners in the Division of Supervision and Consumer Protection an opportunity to fulfill a more clearly defined role in providing support to the Division of Resolutions and Receiverships. To fulfill this role, a number of examiners receive training and practical experience in resolutions and receivership functions. In the event of a significant increase in resolutions workload, the Division of Resolutions and Receiverships has first priority to call on these specialists when needed. FDIC has also developed criteria for hiring and training new employees in certain divisions. 
The divisions hire new employees to pursue commissioned examiner status in either risk management or compliance. While pursuing commissioned examiner status, new employees simultaneously receive training in resolution and receivership functions and an enhanced orientation on the broad scope of FDIC’s operations. Those who successfully complete the program are eligible to compete for available permanent positions in FDIC’s three major career tracks—risk management examiners, compliance examiners, and resolutions and receiverships specialists. FDIC employees we spoke with told us that they believe the Corporate Employee Program holds great potential. For example, regional and field office staff told us that the program provides new employees with a better understanding of how the various FDIC divisions work together and an overview of each division’s role within the agency. Regional and field office employees also stated that the program will make FDIC a better agency because it helps to create a well-rounded and resourceful workforce that can be called upon to assist in the event of a banking crisis. However, FDIC staff in the regional and field offices we visited expressed a variety of concerns about the way the Corporate Employee Program operates. For example, we were told that contributions from graduates of the Corporate Employee Program may take a number of years to realize. Regional and field staff explained that the commissioning process for examiners takes 4 years to complete. Therefore, the earliest that successful Corporate Employee Program graduates could contribute to bank examinations would be 4 years from the time they began the program. For example, in one field office an employee explained that examiners cannot certify an institution’s examination report until after they have received their commissions. Therefore, current Corporate Employee Program participants are unable to reduce the workload of commissioned examiners until then. 
However, according to FDIC officials in headquarters, examiners hired into the Corporate Employee Program can contribute immediately and continuously to the completion of certain aspects of a bank examination during their training and development program, which culminates in attaining a “commissioned” status. FDIC headquarters officials also stated that while the expected commissioning time frame is approximately 4 years, they believe they are preparing a more capable future workforce. They explained that the Corporate Employee Program adds approximately 6 to 9 months to the commissioning process while simultaneously accelerating new employees’ understanding of FDIC’s division functions and how they are interrelated. Regional and field staff we spoke with also stated that reduced staffing levels place greater strain on existing staff to train new employees in certain divisions, a concern amplified by the nature and timing of the rotational aspect of the program. Although regional and field office staff thought rotations were beneficial, they expressed concern that new employees do not spend enough time in each division to fully grasp how to perform certain job duties. Also, cross-divisional rotations during the first year can hinder the program, according to regional and field office staff. Specifically, regional and field staff stated they have had to retrain new employees because the employees had forgotten certain skills by the time they were permanently placed in a specific area after their rotations were complete. Further, regional office employees suggested that the rotation in the Division of Resolutions and Receiverships be shortened in order for the agency to be more proactive in addressing any increase in troubled or failed banks. They stated that new employees would benefit more from gaining experience in ongoing supervisory activities so they are able to detect problems in banks, as opposed to being trained on resolving banks. 
Further, regional office staff indicated that the agency was giving priority to placing new employees in the examiner commissioning tracks because that was where the agency had focused its hiring efforts; therefore, a lengthy rotation in the Division of Resolutions and Receiverships could be counterproductive, especially given the reduced staff available for training new employees. Officials in one regional office we visited stated that new employees rotating through the Division of Resolutions and Receiverships are not receiving detailed training because the agency’s greatest need is currently for examiners. Further, in the event of an increase in troubled or failed banks, the Division of Resolutions and Receiverships would be more likely to pull more experienced employees from other divisions, not new employees. FDIC headquarters officials stated they have always relied on seasoned examiners to provide on-the-job training and guidance to new examiners. The on-the-job training represents a critical component of the commissioning process and is considered a program strength. The officials added that on-the-job training continues under the Corporate Employee Program but does not represent a significant increase in training burden compared with former examiner training practices. Further, FDIC headquarters officials stated that the first-year rotations in the Corporate Employee Program were intended to create baseline functionality, awareness, and understanding of the three primary divisions, so that when the employees in training subsequently pursue a commissioning path, they have the benefit of a broad agency perspective and understand how the work of each division benefits the work of the others. As such, according to headquarters officials, the timing of the rotational assignments is aligned with the program’s desired outcome and intent. 
Last, regional and field office staff explained that the agency was not training new employees in every aspect of the examination process because of FDIC’s risk-based approach to examinations. As a result, new employees may not be able to identify potential problems in areas not covered by the risk-based approach. For example, we interviewed examiners in one region that has experienced significant growth in the number of financial institutions it oversees. FDIC employees in that region told us they expect the number of new bank examinations, which require full scoping, to rise over the next year, and that new employees will not know how to conduct a full-scope examination because they are being trained on the risk-based approach. In another office, examiners stated that new employees are typically trained in examination procedures using banks that are well-capitalized and well-managed. Therefore, FDIC may not be preparing those employees to handle rare problems that could occur in banks. FDIC officials in headquarters disagreed that new employees receive less training than the previous examiner processes offered. The officials stated that risk-based examination scoping processes constitute “full-scope” examinations and that examination procedures have not changed, nor have they been eliminated from examiner training programs. The Corporate Employee Program represents a significant change in the way FDIC conducts its workforce planning for the future. Our work on organizational transformations identified key practices that can serve as a basis for consideration as federal agencies seek to transform their cultures. One practice is to communicate shared expectations and report related progress, allowing communication to build trust and helping ensure that all employees receive a consistent message. 
Organizations undergoing significant change have found that communicating information early and often helps build an understanding of the purpose of planned changes. Also, messages to employees that are consistent in tone and content can alleviate uncertainties generated during large-scale change management initiatives. FDIC created brochures, provided briefings, and issued memoranda to communicate the structure and intended goals of the Corporate Employee Program. During our site visits, senior managers in one regional office stated that the development of the Corporate Employee Program was a combined effort of groups and individuals in field offices, regional offices, and headquarters. Also, some regional and field office employees stated that they had opportunities to ask questions about the program as it was being developed, provided input into the program’s development, or were kept abreast of developments in the program by their managers. However, other employees we met with stated they did not have an opportunity to provide input into the development of the Corporate Employee Program. The Corporate Employee Program has only recently been implemented, and differing opinions on the nature, intent, or benefits of such a new initiative may be anticipated. It is also important to note that FDIC has not had an opportunity to fully determine the potential benefits or shortfalls of the Corporate Employee Program because of the newness of the program and the relatively strong health of the banking industry. Thus, it is especially important that FDIC take steps to assess the benefits of the program and share available results with all FDIC employees. 
Our prior work on organizational transformations states that sharing performance information can help employees understand what the organization is trying to accomplish and how it is progressing in that direction, and can increase employees’ understanding and acceptance of organizational goals and objectives. As noted earlier, FDIC officials estimate that 8 to 16 percent of the agency’s remaining permanent workforce will retire over the next 5 years. Many of the agency’s most experienced and most senior employees are included in the projection, and their retirements will further exacerbate the loss of institutional knowledge that occurred during more than 10 years of agency downsizing. To address this and other issues related to leadership development and improving professional competence, FDIC is developing several new human capital initiatives. In October 2006, the Corporate University Governing Board granted approval for Corporate University to proceed with the design and piloting of the Corporate Executive Development Program. FDIC officials are designing the program to address human capital issues related to succession planning. The purpose of the program is to prepare high-potential employees for executive-level responsibilities. Certain senior-level employees and managers will be eligible to participate in the executive development program. Candidates will participate in an 18-month program consisting of experiential and academic learning (including a 12-month detail outside the candidate’s current division) and a 2- or 3-month detail tailored to the candidate’s developmental needs. Candidates who successfully complete the program are eligible for noncompetitive promotion into executive manager positions at FDIC; however, placement is not guaranteed. 
FDIC has also developed the following human capital initiatives to help employees develop expertise and improve professional competence: Professional Learning Accounts: The Corporate University Governing Board approved Professional Learning Accounts for implementation in 2007. These accounts provide a specified annual amount of money (up to $2,500) and time (up to 48 hours) that employees at all career levels within the agency manage with their supervisors for use toward their learning and development goals. Employees can use account funds for any training and development opportunity that is considered related to the work and mission of FDIC, regardless of the employee’s current occupation. The accounts are voluntary, and temporary, permanent, full-time, and part-time employees are eligible. Employees eligible for account funds must first complete a career development plan, which the employee’s supervisor must approve. Internal Certifications: FDIC offers additional certifications through the Corporate Employee Program as well as a commissioning track in the Division of Resolutions and Receiverships. FDIC’s new certificate programs are intended to give employees at all career levels an opportunity to expand their knowledge and skills in areas critical to FDIC’s mission while simultaneously helping to make FDIC more responsive to changes in the financial services industry. To receive a certificate, employees must complete a development program, have a supervisor attest to their skill readiness, and qualify on a knowledge assessment in the form of a computerized test or a performance assessment. FDIC expects the certificates to benefit employees in a number of ways, including a broadened agency perspective, increased marketability, career mobility, personal development, and continuous learning. 
As of October 2006, FDIC had introduced two certificate programs, and Corporate University was working to identify and obtain evaluation data for these programs to measure their effectiveness. FDIC is also working to develop a commissioning track for resolutions and receiverships specialists. In the future, new employees are expected to be selected into either the examiner commissioning track or the resolutions and receiverships commissioning track. External Certifications: Corporate University has also sponsored opportunities targeted at midlevel career staff to receive external certifications in areas that align with FDIC’s business needs. In 2005, Corporate University offered two external certifications to select employees. As of November 2006, Corporate University sponsorship included four more external certifications, and Corporate University planned to continue working with FDIC’s divisions to sponsor other external certifications, as appropriate. MBA Program: During 2005, Corporate University sponsored, on a pilot basis, a limited number of employees to pursue a Master of Business Administration (MBA) degree at the University of Massachusetts at Amherst. According to FDIC officials, the MBA program enhances the technical and leadership skills of FDIC employees. At the time of our review, FDIC had 10 employees enrolled in the first year of the program. Corporate University officials stated that they evaluate all of their training programs to determine how effective the programs are at providing the skills and expertise needed to improve job performance. However, certain training courses receive a more in-depth evaluation than others, depending on the significance of the training program. 
In March 2004, we published A Guide for Assessing Strategic Training and Development Efforts in the Federal Government, which emphasizes the importance of agencies’ being able to evaluate their training programs and demonstrate how the training efforts help develop employees and improve the agencies’ performance. One commonly accepted training evaluation model consists of five levels of assessment. The first level measures the participants’ reaction to and satisfaction with the training program. The second level measures the extent to which learning has occurred because of the training effort. The third level measures the application of the learning to the work environment through changes in behavior that trainees exhibit on the job. The fourth level measures the impact of the training program on the agency’s organizational results. Finally, the fifth level—often referred to as return on investment—compares the benefits (quantified in dollars) to the costs of the training program. According to Corporate University officials, all training programs receive a level one evaluation, the typical evaluation performed at the end of a course. Where appropriate, Corporate University conducts level two evaluations, which are similar to a final exam and provide a measure of how much trainees learned during the training program. More significant training programs, like the Corporate Employee Program, receive level three evaluations, in which, according to Corporate University officials, employees demonstrate their learning on the job. For example, after every rotation or job assignment during the first year of the Corporate Employee Program, the employee’s supervisor prepares a report on how well the employee performed certain job tasks. 
Corporate University officials noted that they are planning to conduct what they consider level four evaluations of the Corporate Employee Program, in which they will compare the skill level and performance of graduates of the Corporate Employee Program to those of employees who completed the previous commissioning process. According to Corporate University officials, this might help them determine whether the Corporate Employee Program produces employees with at least the same level of knowledge, skill, and ability as employees who were trained and commissioned prior to the implementation of the Corporate Employee Program. Our prior work on evaluating training programs states that assessments of training and development efforts should consider feedback from customers, such as whether employee behaviors or agency processes effectively met their needs and expectations. Corporate University officials noted that they also obtain feedback on training courses to ensure that the courses remain relevant, that the emphasis remains appropriate to the job duties, and that the information provided meets staff needs. Based on this feedback, Corporate University may make changes to the delivery of a course or the tools used in it. Corporate University officials stated they made significant changes to the Corporate Employee Program based on feedback from new employees and their supervisors. For example, Corporate University made improvements to certain training materials and revised certain required benchmarks to make them more robust and complete. According to our guide, not all training and development programs require, or are suitable for, higher levels of evaluation. It can be difficult to conduct higher levels of evaluation because of the difficulty and costs associated with data collection and the complexity of directly linking training and development programs to improved individual and organizational performance. 
Corporate University officials noted that they try to focus higher levels of evaluation on the most significant training programs that address key organizational objectives, involve change management, and are costly to the organization. For example, Corporate University is planning to conduct level four evaluations of the Corporate Employee Program because the program is significant, costly, and highly visible. Officials added that resources are the biggest obstacle to conducting higher levels of evaluation. For example, it takes time to complete surveys and questionnaires and to obtain productivity data. Officials told us that conducting these activities interrupts core mission work, so Corporate University conducts higher levels of evaluation in a more targeted fashion. Corporate University is currently developing a scorecard to measure its progress in meeting its human capital goals, but it has not fully developed outcome-based performance measures to determine the effectiveness of its training programs. Performance measures may address the type or level of program activities conducted (process), the direct products and services delivered by a program (outputs), or the results of those products and services (outcomes). Corporate University’s scorecard development began in early spring 2005, when an FDIC management analyst briefed Corporate University on the scorecard concept and began developing a strategy for the scorecard’s development. By fall 2005, Corporate University had developed a draft scorecard and presented it to staff; Corporate University began piloting the draft scorecard in 2006. 
Corporate University’s draft scorecard includes indicators that measure customer perspective (e.g., percent of target Corporate Employee Program certificates awarded); internal perspective (e.g., percent of clients satisfied on post-project surveys); Corporate University operating attributes (e.g., percent of projects on schedule or completed on time); and financial perspective (e.g., percent of resources invested in high-priority areas). While Corporate University conducts evaluations to learn the benefits of its training programs and how to improve them, our prior work on performance measurement and evaluation shows that evaluations typically examine a broader range of information than is feasible to monitor on an ongoing basis. Because evaluations are too resource-intensive to repeat continually, FDIC can instead monitor outcome-based performance measures on an ongoing basis to help determine whether a program has achieved its objectives. Both evaluations and performance measurements aim to support resource allocation and other decisions to improve effectiveness; however, performance measurement, because of its ongoing nature, can serve as an early warning system to FDIC management and can be used as a vehicle for improving accountability. Our prior work on strategic workforce planning states that high-performing organizations recognize the importance of measuring how outcomes of human capital strategies help the organization accomplish its mission. Performance measures, appropriately designed, can be used to gauge two types of success: (1) progress toward reaching human capital goals and (2) the contribution of human capital activities toward achieving programmatic goals. Periodic measurement of an agency’s progress toward human capital goals and the extent to which human capital activities contributed to achieving programmatic goals provides information for effective oversight by identifying performance shortfalls and appropriate corrective actions. 
Further, evaluating the contribution of human capital activities toward achieving an agency’s goals may determine that its human capital efforts neither significantly helped nor hindered the agency from achieving its programmatic goals. These results could lead the agency to revise its human capital goals to better reflect their relationship to programmatic goals, redesign programmatic strategies, and possibly shift resources among human capital initiatives. However, our previous work showed that developing meaningful outcome-oriented performance goals and collecting performance data to measure achievement of these goals is a major challenge for many federal agencies. Corporate University officials acknowledged challenges associated with developing outcome-based performance measures. An official noted that it was difficult to develop measures that are meaningful to the agency. For example, the official noted that maintaining alignment of training and development with the agency’s goals is important, but it was difficult to develop a measure for organizational alignment. Therefore, to gauge organizational alignment, Corporate University uses as a measure the number of senior-level meetings held to determine workforce and skill needs. Officials also noted that several outcome measures carry over into the divisions and that it is difficult to determine how Corporate University’s training programs affect other divisional scorecards. However, Corporate University officials want to obtain outcome-based performance measures and stated that they would continue to refine and improve their scorecard as they gain more experience. While the draft scorecard currently includes an output performance measure for the Corporate Employee Program, it does not yet include outcome-based performance measures. 
Absent the use of outcome-based performance measures, especially for key initiatives like the Corporate Employee Program, FDIC will not know whether its programs are effective at achieving its mission and its human capital goals. Further, not having these measures could limit FDIC’s ability to determine whether to modify or eliminate ineffective training programs. As a supervisor of banks and thrifts that evaluates their safety and soundness, and as the insurer of deposits, FDIC has risk assessment and monitoring at the core of its mission. To manage risk, FDIC uses information from front-line supervision of individual institutions and a range of activities examining trends and economic forces affecting the health of banks and thrifts generally. Following industry consolidation in recent years, failure of large institutions presents the most significant threat to FDIC’s deposit insurance fund, due to the asset size of such institutions and the complexity of their activities; if losses grew large enough, the insurance fund could be exhausted. FDIC has both broad plans and specific strategies for handling troubled institutions, and FDIC has evaluated a wide variety of its risk activities. But some of FDIC’s evaluations were not done regularly or comprehensively. Defining clear responsibility for monitoring and evaluation of its risk activities could assist FDIC in addressing or preventing weaknesses in its evaluations. Our generally accepted standards for internal control identify risk assessment as one of five key standards that both define the minimum acceptable level of quality for internal control in government and provide the basis against which an organization’s internal controls are evaluated. Proper internal control should, among other things, provide for an assessment of risk an agency faces from external sources. FDIC takes a dual approach to assessing and monitoring risk. 
FDIC’s front line for risk assessment is supervision of individual institutions, where it is the primary federal regulator of thousands of banks and thrifts. It is also the backup regulator for thousands of other institutions directly supervised by one of the other three federal regulatory agencies for banks and thrifts. In addition to its supervision of individual institutions, FDIC also conducts broad monitoring and analysis of risks and trends in the banking industry as a whole. At the individual institution level, FDIC’s main risk assessment activity is the safety-and-soundness examination process, agency officials told us. Like other federal banking regulators, FDIC must generally conduct a full-scope, on-site examination for each institution it regulates at least once every 12 months, although the agency can extend the interval to 18 months for certain small institutions. For institutions that require additional attention, FDIC may supplement regularly scheduled examinations with more frequent examinations or visitations. Recognizing that a bank or thrift’s condition can change between on-site examinations, FDIC officials told us the agency created eight risk measurement models to monitor risk from off-site, which often use financial information reported by the institution. The agency’s major off-site monitoring tool is the Statistical CAMELS Off-site Rating system (SCOR), which helps FDIC identify institutions that have experienced significant financial deterioration. The SCOR system attempts to identify institutions that received a rating of 1 (no cause for supervisory concern) or 2 (concerns are minimal) on their last examination—the top two grades available on the five-point CAMELS scale—but whose financial deterioration may result in a rating of 3 or worse (cause for supervisory concern, requiring increased supervision to remedy deficiencies) at the next examination. 
The significance of the 3 rating is that once a banking regulator rates an institution as 3 or worse, FDIC monitors it more closely. The SCOR system uses a statistical model that compares examination ratings with financial ratios of a year earlier and attempts to forecast future ratings. As discussed later in this report, evaluations of the SCOR system determined that the system is informative, but does not always produce accurate results. Owing to the potential for larger losses to the insurance fund, FDIC officials told us the agency also puts special emphasis on monitoring the nation’s largest financial institutions, based on asset size. For example, FDIC’s Large Insured Depository Institution program gives heightened scrutiny to institutions with assets of $10 billion or more. For those with $25 billion in assets or more, managers submit quarterly assessments. For those with $50 billion or more in assets, FDIC also requires risk assessment plans that address risk the institution presents from the perspectives of supervision, insurance, and resolution. Further, FDIC maintains examiners on-site at the six largest institutions. While FDIC is not the primary regulator of these institutions, it is nevertheless responsible for insuring them. For the largest institutions for which FDIC is the primary regulator, the agency uses what it calls a continuous supervision process for examinations, which provides ongoing examination and surveillance of institutions with assets greater than $10 billion. Four institutions are now receiving such scrutiny. Additionally, FDIC has in recent years made significant changes to its examination process. It has adopted the MERIT program (Maximum Efficiency, Risk-focused, Institution Targeted examinations), which seeks to tailor examinations to risks presented by individual institutions. Under this approach, safer institutions should receive less attention, while riskier institutions should receive more regulatory scrutiny. 
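The off-site screen that SCOR performs can be illustrated in code. The actual SCOR system is a statistical model estimated from prior examination ratings and reported financial ratios; the ratios, weights, and threshold in the sketch below are purely hypothetical and serve only to show the screening logic of flagging 1- and 2-rated institutions whose forecast rating is 3 or worse.

```python
# Illustrative sketch of an off-site screen in the spirit of SCOR.
# The real SCOR system is a statistical model fit to prior exam
# ratings and reported financial data; the fields, weights, and
# threshold below are hypothetical.
from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    last_camels: int          # composite rating from last exam (1-5)
    equity_to_assets: float   # hypothetical set of reported ratios
    noncurrent_loans: float
    net_income_to_assets: float

def forecast_rating(inst: Institution) -> float:
    """Toy forecast: start from the last rating and adjust for
    signs of financial deterioration. Not the actual SCOR model."""
    score = float(inst.last_camels)
    score += max(0.0, 0.05 - inst.equity_to_assets) * 40   # thin capital
    score += inst.noncurrent_loans * 30                    # bad loans
    score += max(0.0, -inst.net_income_to_assets) * 50     # losses
    return min(score, 5.0)

def flag_for_review(institutions):
    """Flag 1- and 2-rated institutions whose forecast rating is
    3 or worse, mirroring the screen described in the text."""
    return [i.name for i in institutions
            if i.last_camels <= 2 and forecast_rating(i) >= 3.0]

banks = [
    Institution("Sound Bank", 1, 0.10, 0.005, 0.01),
    Institution("Slipping Bank", 2, 0.03, 0.04, -0.01),
]
flagged = flag_for_review(banks)  # ["Slipping Bank"]
```

The point of the sketch is the two-step structure: a forecast derived from reported financials, then a filter that singles out currently well-rated institutions whose forecast has crossed the supervisory-concern threshold.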
FDIC officials stated that the MERIT program is more efficient, allowing examiners to spend less time on-site at well-rated institutions while providing an opportunity to redirect examination resources to institutions posing higher risks. For example, if an institution maintains what examiners judge to be an effective asset review program, the examiners will significantly reduce the time spent reviewing individual credits. Today, banks and thrifts that meet certain criteria are eligible for the MERIT program. In addition to its oversight of individual institutions, FDIC conducts a wide range of other activities to monitor and assess risk at a broader level, from a regional perspective up to a national view (fig. 5). In 2003, FDIC formed Regional Risk Committees in each of FDIC’s six regional offices. The Regional Risk Committees review and evaluate regional economic and banking trends and risks and determine whether the agency should take any action in response. Composed of senior regional executives and relevant staff members, the committees meet semiannually and consider a wide range of risk factors, such as economic conditions and trends, credit risk, market risk, and operational risk, as a prelude to identifying a level of concern, a level of exposure, and a supervisory strategy. Strategy options include such tools as publishing research or circulating relevant information to the banking community, making the risk factor a priority in on-site examinations, or highlighting the factor for off-site monitoring activities. In FDIC’s San Francisco Regional Office, we observed a meeting of the western region’s Regional Risk Committee. These FDIC regional officials had compiled detailed research on a comprehensive range of potential risk factors that could affect the health of the region’s banks and thrifts. The Regional Risk Committees prepare reports of their results and distribute them to the National Risk Committee. 
The National Risk Committee, composed of senior FDIC officials, meets monthly to identify and evaluate the most significant external business risks facing FDIC and the banking industry, according to FDIC officials. For example, recent committee work has focused on the effect of recent hurricanes on Gulf Coast institutions, the trend in the number of problem institutions, and bank and thrift vulnerability to rising interest rates. Where necessary, the committee develops a coordinated response to these risks, including strategies for both FDIC-supervised and -insured institutions. Among other things, the National Risk Committee receives the Regional Risk Committee reports filed from across the country. The Risk Analysis Center (RAC) is an interdivisional forum for discussing significant, cross-divisional, risk-related issues. FDIC officials use the Risk Analysis Center as a vehicle to bring together managers from across major FDIC divisions, in an effort to coordinate and provide relevant information to FDIC decision-makers. The Risk Analysis Center provides reports and analyses to the National Risk Committee. The National Risk Committee and Regional Risk Committees also contribute ideas to the Risk Analysis Center on issues for discussion. Recent examples of the center’s work include its response to Hurricane Katrina, when the center’s management committee met to discuss deployment of FDIC offices and personnel to the relief effort, and work following the August 2003 blackout in the Northeast and Midwest, when officials assembled shortly after the power failure to discuss its possible impact on the banking system. One key product of the Risk Analysis Center is the “RAC Dashboard”—a group of graphically displayed statistics that identify key banking and economic trends. For example, the center’s national dashboard features trend lines charting economic conditions, large bank risk, credit risk, market risk, supervisory risk, and financial strength. 
FDIC officials told us these indicators allow comparison of current conditions to historical extremes and can identify areas where risks may be increasing. A Risk Analysis Center Web site offers a variety of risk-related information, including FDIC publications and presentations available to supervisors, field examiners, and others. The site offers guidance on topics such as concentration in real estate lending, interest rate risk management, and best practices for maintaining operations during natural disasters. 

Division of Insurance and Research 

FDIC’s Division of Insurance and Research also plays a significant role in FDIC’s risk activities. The division has a leading role in preparing a key set of reports delivered to FDIC’s board of directors twice each year. The board uses these reports as a basis for setting the deposit insurance fund’s premium schedule; thus, the reports undergird FDIC’s basic mission of protecting insured deposits. One of these reports, known as the “Risk Case,” summarizes national economic conditions and banking industry trends and discusses emerging risks in banking. The second report, known as the “Rate Case,” recommends a premium schedule based on an analysis that includes likely losses to the fund from failures of individual institutions; expenses of resolving failed institutions; insurance fund operating expenses; growth of insured deposits; investment income; and the effect of premiums on the earnings and capital of insured institutions. The division also conducts pertinent research on specific topics or more general issues. For example, FDIC officials told us that when interest rates recently began rising, the division evaluated what the effect might be nationally, then conducted stress tests on certain institutions to see how the increase might affect them. More broadly, the division has compiled a history of the banking crisis of the 1980s and early 1990s. 
In the last 2 years, FDIC has tried to enhance its research capability, through its Center for Financial Research. Officials told us they want stronger ties to academia, and believe better research leads to better policy. On a quarterly basis, FDIC’s Financial Risk Committee recommends an amount for the deposit insurance fund’s contingent loss reserve—the estimated probable losses attributable to failure of insured institutions in the coming 12 months. Because the size of the reserve reflects beliefs about risk facing the insurance fund, the committee’s recommendations are an important part of the risk function. The Financial Risk Committee consists of senior representatives from major FDIC divisions. In addition to internal deliberations, FDIC staff members also meet with other banking regulators to discuss problem institutions for which a reserve may be necessary. Various parts of the FDIC organization also work together to carry out their risk assessment and monitoring functions. For example, the National Risk Committee recently directed the Risk Analysis Center to investigate possible risks associated with collateralized debt obligations. The Chicago Regional Office Regional Risk Committee produced a presentation for the National Risk Committee on housing and banking conditions in southeast Michigan, where business difficulties of the U.S. automobile industry have hurt the local economy and with it, the fortunes of local financial institutions. Similarly, an examiner with commercial real estate experience recently visited the Florida panhandle and nearby Alabama, reviewing bank files and visiting larger condominium developments. The examiner’s findings were presented at the Risk Analysis Center to representatives from FDIC’s main divisions—the Divisions of Insurance and Research, Supervision and Consumer Protection, and Resolutions and Receiverships. There, officials judged the information important enough to send up to the National Risk Committee. 
Division managers in the Risk Analysis Center also discuss the Risk Case before it is presented to the National Risk Committee. Meanwhile, the Division of Insurance and Research has managers in regional offices, where they monitor conditions locally and consult with examiners in the Division of Supervision and Consumer Protection who are working in individual institutions. Information these managers gather is sent to the Risk Analysis Center and the National Risk Committee. Beyond its own activities, FDIC officials told us that cooperation with other federal banking regulators is an important part of their risk management efforts as well. Toward that end, the agency engages in a number of activities with the other regulators. One program is the Shared National Credit Program. Established in 1977, the program is a cooperative effort among four federal banking regulators to perform a uniform credit analysis of loans of at least $20 million that three or more supervised financial institutions share. With $1.9 trillion in credit commitments to more than 4,800 borrowers, these loans have the potential for significant impact on the banking system and the national economy. The program’s 2006 annual report showed that as the volume of syndicated credits has risen rapidly, the percentage of commitments adversely rated has held steady and remains well below a recent peak in 2002 to 2003. In addition to the Shared National Credit Program, FDIC is involved in other interagency risk management activities, including the following: 
- FDIC participates in the Federal Financial Institutions Examination Council with the other federal banking regulatory agencies. The council prescribes uniform examination standards and makes recommendations to promote uniformity in financial institution supervision. 
- FDIC exchanges examination reports with the other federal banking regulators and state banking authorities. 
- FDIC officials told us that they regularly attend interagency meetings, both formal and informal, at the field, regional, and headquarters office levels, on topics ranging from institution-specific to industrywide issues. For example, FDIC consults with staff from the other agencies in preparing the Risk Case report described earlier. 
- The agencies jointly issue examination and industry guidance on risk-related topics. Recent work includes guidance on nontraditional mortgage risks, clarifying how institutions can offer nontraditional mortgage products in a safe and sound manner, and developing guidance on risks of concentration in commercial real estate lending. 
- FDIC officials told us that they frequently invite officials from the other banking agencies to participate in Risk Analysis Center presentations on a variety of issues. 
Because FDIC insures many institutions for which it is not the primary federal regulator, information sharing among federal banking regulators is a concern to FDIC. FDIC officials told us that working relationships with the other regulators are good and better than ever before. In 2002, the agencies reached an information-sharing agreement, which provides FDIC information on, and access to, selected large institutions and others presenting a heightened risk to the deposit insurance fund. Two important drivers of this cooperative effort are avoiding potentially mixed signals to regulated entities and the public about regulators’ supervisory activities and reinforcing that it is critical for FDIC, as the potential receiver for failed institutions, to understand well what is happening in institutions it does not regulate, especially large ones. While this agreement represents a positive step, a senior FDIC official told us that the current information-sharing provisions are not adequate. 
As institutions grow more complex, the official said, it becomes harder for FDIC to properly price insurance coverage and to work out assets during resolution without more complete information on institutions’ activities. One way FDIC is currently seeking to address such issues is through an advance notice of proposed rulemaking in which FDIC sought comments on options to modernize its deposit insurance determination process by requiring the largest banks and thrifts to modify their deposit account systems to speed depositors’ access to funds in the event of a failure. Today, institutions do not track the insurance status of their depositors, the agency says, yet if there is a failure, FDIC must make deposit insurance coverage determinations. Industry consolidation and the emergence of larger, more complex institutions with millions of deposit accounts raise concerns about current methods for handling failures, according to FDIC. FDIC officials also told us they coordinate internationally to share information on issues relevant to financial institutions, regulatory agencies, and insurers of financial institutions in the U.S. and abroad. For example, the officials participate on the Basel Committee, a forum for regular cooperation on banking supervisory matters. The Basel Committee is composed of senior officials responsible for banking supervision or financial stability issues from 13 countries, including Belgium, Italy, Japan, and the United Kingdom. In particular, FDIC officials stated they participate three times per year in meetings of the Accord Implementation Group, a subgroup of the Basel Committee. To address the possibility of a large-scale bank failure, FDIC has developed broad plans and specific strategies. According to FDIC officials, the biggest dangers to the deposit insurance fund are large-scale bank failures. 
The FDIC Inspector General has warned that the banking industry’s significant consolidation could result in large losses to the deposit insurance fund if a so-called megabank failed. FDIC officials told us credit risk continues to be the most important factor that could cause large banks, or a large number of banks, to fail. A sudden failure would most likely stem from rapid, widespread loss of confidence in an institution, which would generate a liquidity crisis. FDIC’s Resolutions Policy Committee is responsible for developing plans to handle the potential or actual failure of the largest insured institutions. The committee, composed of senior FDIC officials from across the agency, has developed a 12-part plan for dealing with such difficulties. In handling a failed institution, FDIC’s primary objective is to protect insured depositors. Generally, FDIC seeks to minimize the overall cost to the insurance fund. The agency also seeks to prevent uninsured depositors, creditors, and shareholders from receiving more than their legally entitled amounts. Overall, FDIC attempts to minimize the time an institution is under government control, while maximizing returns to creditors. In general, according to the plan, the resolution strategy for a large bank failure will depend on the facts of the particular situation, such as the characteristics of the bank, the nature and extent of the problem causing the failure, the condition of the industry and relevant financial markets, and the cost to the insurance fund. For a resolution that does not pose a systemic risk—that is, larger repercussions for the industry or national economy—FDIC will most likely choose between paying off insured deposits and establishing a bridge bank. A bridge bank is a new, temporary bank chartered to carry on the business of a failed institution until a permanent solution can be implemented. Bridge banks preserve the value of the institution until a final resolution can be accomplished. 
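The choice between a deposit payoff and a bridge bank can be framed as the least-cost comparison the plan describes. The sketch below is illustrative only: the cost components and figures are hypothetical, and an actual determination would rest on far more detailed estimates of recoveries and franchise value.

```python
# Sketch of a least-cost comparison between paying off insured
# deposits and operating a bridge bank. All inputs are hypothetical
# ($ millions); this is not FDIC's actual costing model.

def payoff_cost(insured_deposits, asset_recoveries):
    """Net cost of paying off insured depositors after receivership
    recoveries on the failed bank's assets."""
    return insured_deposits - asset_recoveries

def bridge_bank_cost(insured_deposits, asset_recoveries,
                     franchise_value, operating_cost):
    """A bridge bank preserves franchise value, offsetting cost,
    but adds interim operating expenses."""
    return (payoff_cost(insured_deposits, asset_recoveries)
            - franchise_value + operating_cost)

def least_cost_option(estimates):
    """Pick the resolution option with the lowest estimated cost
    to the insurance fund."""
    return min(estimates, key=estimates.get)

estimates = {
    "insured deposit payoff": payoff_cost(10_000, 9_200),     # 800
    "bridge bank": bridge_bank_cost(10_000, 9_200, 300, 50),  # 550
}
choice = least_cost_option(estimates)  # "bridge bank"
```

The sketch makes concrete why preserving franchise value matters in the plan: the bridge bank option wins here only because the preserved franchise value exceeds the interim operating cost.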
A key aim following failure is to preserve the value of the institution, and business continuity through a bridge bank can be important for maintaining that value and hence a marketable franchise. In addition to the work of the agencywide Resolutions Policy Committee, FDIC’s Division of Resolutions and Receiverships—the unit most directly responsible for handling failures—has created a detailed blueprint for managing the failure of a large institution. The blueprint includes strategies for establishing a bridge bank, which FDIC officials stated is in most cases the least costly and most effective option for handling a sudden large bank failure. The plan seeks to minimize failure costs, contain the risk of troubles spreading beyond a failed bank or thrift, ensure prompt access to depositor funds, and preserve the FDIC insurance fund in the face of losses that could exhaust it. Some of these objectives, according to the plan, will conflict; most notably, there is tension between the least-cost approach and the potential systemic risk implications of a large-scale failure. The least-cost approach adheres to a principle of not extending FDIC insurance to uninsured depositors and also focuses on maintaining the franchise of the failed institution, because the value of the failed bank’s franchise will mitigate the overall failure cost. Most, but not all, large banks will have a valuable franchise at the point of failure, according to FDIC officials. The agency says it is doubtful FDIC will have the opportunity to find an acquirer for a troubled large bank prior to failure, for several reasons. Failure or near-failure of a large bank could happen very quickly with relatively little prior warning; as a result, there could be very limited opportunity to gather and analyze information about an institution’s operations prior to failure. 
Also, extensive negotiations with potential acquirers would be required, and such activity would likely become publicly known, which could spark a liquidity crisis. As discussed earlier, FDIC has sharply reduced its workforce, which is down 80 percent from its peak in the early 1990s during the banking crisis. FDIC headquarters officials maintain that the smaller staff has not hurt the agency’s ability to monitor and assess risk, because industry consolidation has reduced the number of institutions as FDIC has shrunk. The officials do acknowledge that industry troubles could require additional resources. As a result, FDIC has created a three-part strategy for dealing with an increase in troubled or failed institutions: 
- developing workforce flexibility, such as that provided by the Corporate Employee Program, under which both newer and more experienced employees previously cross-trained in several areas of FDIC resolutions and receiverships operations would be temporarily reassigned from other divisions to handle failure and resolution duties; 
- recalling FDIC retirees for temporary duty; and 
- hiring contractors for temporary duty. 
Overall, FDIC officials told us they do not believe there is any scenario for banking troubles that the agency would be unable to handle. But they acknowledge two potentially significant issues: if losses grew large enough, the insurance fund could be exhausted, requiring the Treasury Department to issue debt; and if sufficiently large institutions failed, there could be so many deposit claims that payoffs would be delayed. However, the agency’s goal is to manage any institution failure to avoid these events. FDIC officials told us that evaluation and monitoring of its risk assessment activities are critical parts of the agency’s mission and that such activities are ingrained in the organization. 
In addition to identifying risk assessment as a key internal control, our internal control standards also detail how an effective internal control system should include continuous monitoring and evaluation as an integral part of the agency’s operations. This monitoring includes regular management and supervisory activities, comparisons, and reconciliations, among other activities. An example of continuous monitoring is FDIC’s “continuous supervision” process for large institutions, as described earlier. FDIC officials also told us that they rely on us and the FDIC Inspector General to conduct such reviews, and our internal control standards acknowledge that evaluations may be performed by the Inspector General or an external auditor. However, the standards also say that organizations should themselves undertake internal evaluations that form “a series of actions and activities that occur throughout an entity’s operations and on an ongoing basis.” Our review of the evaluations and monitoring that FDIC provided to us indicates that FDIC has not comprehensively evaluated the full range of its risk activities in a routine way that is part of ongoing agency operations. When we reviewed several evaluations that FDIC provided, we found that though FDIC has evaluated or is in the process of evaluating a wide variety of risk activities, some of the evaluations appeared to be incomplete or were not conducted on a regular basis. The following examples illustrate these weaknesses: When we asked FDIC officials for any evaluation of a recent, key change in risk management strategy—specifically, FDIC’s adoption of risk-focused supervisory examinations under the MERIT program discussed earlier— officials cited two reports by the Inspector General’s office. These reports were mostly favorable, although they reviewed only portions of the MERIT program, not its overall scope. 
However, MERIT is a program that FDIC itself should comprehensively review because of the program’s relative newness and its core role in identifying areas of risk. Also, some examiners we spoke to in FDIC field offices voiced concerns that the streamlined examinations under the MERIT program may fail to detect significant problems. Though FDIC officials in headquarters thought this concern may have been exaggerated, regular reporting of evaluations and monitoring could address these concerns. Recently, FDIC’s Regional Office in Atlanta completed a draft report on the MERIT examination approach, which recommended further study of MERIT as part of a broader review of examination programs. When we asked for evaluations of FDIC’s eight off-site monitoring systems discussed earlier, FDIC provided documentation showing one-time evaluations of the accuracy of two of them. One of these evaluations reviewed the Statistical CAMELS Off-site Rating (SCOR) system, which, as noted earlier, is the agency’s major off-site monitoring tool and is used to identify institutions that have experienced significant financial deterioration. In this evaluation, completed in 2003, FDIC found that the system performed poorly. Such a finding, together with FDIC’s limited evaluation of its other off-site monitoring systems, underscores the need for more regular reviews. FDIC officials stated they were reviewing and seeking to improve the agency’s off-site monitoring systems. The plan for this effort, however, shows a considerable amount of work yet to be done, with no scheduled completion date. FDIC has conducted simulations designed to test its plans for addressing a key risk—an increase in troubled and failed large institutions. 
In some cases, we found these simulations to have well-conceived elements that examined important changes FDIC has made in recent years, but in other cases we determined that the simulations did not comprehensively follow FDIC’s own guidance on planning for large bank failures. For example, in 2002 FDIC conducted a simulation of the hypothetical failure of a regional bank with $60 billion in assets. However, the Division of Resolutions and Receiverships did not develop its current large-bank failure plan until 2004. The 2002 simulation, which was FDIC’s largest failure test by asset size, excluded consideration of systemic risk, which the 2004 plan emphasizes as a key issue. Thus, the 2002 simulation neither tested the current plan nor included the type of risk FDIC identifies as significant. FDIC officials told us they did not intend to include systemic risk in this exercise. However, the guidance on planning for large bank failures underscores the importance of systemic risk, stating that “the collapse of a large bank could have profound implications for other insured depository institutions and/or elements of the economy.” Additionally, a 2004 simulation of a $30 billion regional bank was to highlight risks in operating a bridge bank—a bank established to temporarily take over operations of a failed institution. But the simulation did not include an investigation into major decisions on how to establish the bridge bank and thus did not fully reflect processes that FDIC’s guidance says are critical to the successful opening and operation of a bridge bank. Finally, a test addressing workforce flexibility provided 3 months’ advance notice of the hypothetical closing of this large bank, while FDIC guidance says the agency should plan for failure with little or no warning. 
FDIC has acknowledged the value of regular testing, but officials from FDIC’s Division of Resolutions and Receiverships told us that their resources were stretched and that time- and resource-intensive simulations and tests would have to be set aside if there were an increase in troubled bank activity.

Other evaluations that FDIC provided appeared to be comprehensive reviews of the specific risk activity and led to some changes, but these reviews did not appear to be done on a regular basis. For example, in 2006, a team of executives from FDIC’s major divisions reviewed the effectiveness of the Regional Risk Committees. Recommendations included better reporting, wider consideration of risk, and use of video teleconferences to discuss relevant issues before and after Regional Risk Committee meetings. An FDIC directive, issued in the summer of 2006, implemented these recommendations. A team of FDIC officials in the agency’s Senior Executive Leadership Program also recently evaluated the workings of a committee that runs the Risk Analysis Center. The evaluation included recommendations on changes in the center’s mission, structure, the way it communicates with FDIC employees, and the design of its internal Web site. FDIC officials stated that the most notable change to emerge from the process was to establish a three-person standing committee to coordinate the Risk Analysis Center, replacing what had been a group with rotating membership. However, officials also told us there were no formal efforts to evaluate the center’s effectiveness. Some risk activities appear to be regularly evaluated in a broader review of FDIC operations conducted by the Division of Supervision and Consumer Protection, but these broader reviews are not intended to comprehensively assess the effectiveness of the risk activities. The division reviews the operations of each of its six regional offices every 2 years. 
Based on documents provided by FDIC, we found that these reviews cover the safety-and-soundness examinations FDIC performs as the primary federal regulator of designated banks and thrifts; enforcement actions taken to maintain institutions’ financial health; off-site reviews of institutions’ health; and the operation of FDIC’s large institution oversight program. These reviews, however, vary by office and cover only selected areas of the activities. The reviews also tend to emphasize compliance with policies and procedures, rather than the effectiveness of the risk activities.

Although FDIC conducts some evaluations of its risk assessment activities, our work indicates that FDIC’s risk assessment framework does not clearly define how it will ensure that the evaluations of risk-related activities are thorough and conducted on a regular basis. FDIC maintains an Office of Enterprise Risk Management, but the office’s activities are more internally focused and generally do not involve external risk assessment activities of FDIC’s major operating divisions. FDIC officials told us that the agency’s chief operating officer is ultimately in charge of the risk assessment process. At the same time, FDIC officials told us the agency’s three main divisions—Supervision and Consumer Protection, Resolutions and Receiverships, and Insurance and Research—share external risk responsibilities through an interwoven structure of committees and management-directed activities. This unclear line of responsibility could be contributing to the weaknesses we identified in some of FDIC’s evaluations of its risk activities. Our internal control standards state that an effective and positive internal control environment requires an agency’s organizational structure to clearly define key areas of authority and responsibility and establish appropriate lines of reporting. 
Further, in implementing control standards, management is responsible for developing the detailed policies, procedures, and practices to fit the agency’s operations and to ensure that the policies, procedures, and practices become an integral part of operations. According to insurance industry officials we spoke with, there are a variety of approaches to assigning responsibility for overseeing risk assessment activities. Some organizations have a Chief Risk Officer or a committee of senior-level officials, while others delegate specific responsibilities to an existing office or officials. FDIC would be more likely to address or prevent some of the weaknesses we identified by designating an official or office, or by establishing procedures, to ensure that evaluation and monitoring of risk activities are conducted regularly and comprehensively. For example, such an office or process could address employee concerns about MERIT by ensuring there are regular reviews and also identify and address potential resource constraints that can limit the number and breadth of large-bank failure simulations. By not clearly providing for oversight of monitoring and evaluating risk-related activities, FDIC is vulnerable to the risk of gaps or inefficiencies in its risk assessment process and will not know whether all parts of its risk management framework are effective.

Our limited observations of the interactions among FDIC’s board of directors, their deputies, and senior management within the agency suggest that FDIC’s board of directors is engaged in the agency’s operations and effectively uses the information provided to the directors to assist in its oversight of the agency. The board has also established a clear and transparent relationship between the board of directors and the organization’s management by delegating a wide range of activities to FDIC divisions. 
These delegations have been broadly reviewed on certain occasions, and limited changes have been made to delegations granted by the board, both through a formal process and upon request by board members or FDIC divisions. These review processes help ensure that FDIC’s delegations are appropriate and that FDIC employees are not making decisions that should be made by the board or more senior officials. FDIC has undertaken a number of activities to strengthen its human capital framework and also evaluates many of its human capital strategies. Specifically, FDIC’s Corporate University is implementing a scorecard to monitor progress of training and development initiatives toward meeting agency goals. Although this effort is commendable, the scorecard does not yet include fully developed, outcome-based performance measures that would help determine the effectiveness of FDIC’s training and development initiatives at achieving the agency’s human capital goals. Though developing outcome-based performance measures is difficult, they are nevertheless important for ensuring that FDIC has information to determine whether to modify or redesign existing training programs or eliminate ineffective programs. At a minimum, identifying outcome-based performance measures will ensure that FDIC can begin collecting appropriate information that will help in determining how key initiatives—such as the Corporate Employee Program, a relatively new program designed to train and develop FDIC’s future workforce—contribute to the agency’s mission and goals. Evaluating and measuring the effectiveness of the Corporate Employee Program is especially important given the differences in opinion we observed between regional and headquarters officials on the relative merits of the program. 
Such differences reinforce the need for conducting evaluations of the effectiveness of key human capital initiatives, developing performance measures to determine whether the initiatives assist in achieving the agency’s mission and human capital-related goals, and communicating the results to employees at all levels within the agency. FDIC has developed an extensive system for managing risk and has established structures and processes to ensure that the various parts of the agency are working together to address key risks facing the agency. However, our review identified some weaknesses in FDIC’s evaluations and monitoring of its risk assessment activities. Though FDIC has conducted reviews of many parts of its risk assessment activities, it has not developed a process for more routine evaluations and assessments, and its risk management structure does not clearly define how monitoring and evaluation of risk assessment activities are overseen. Clearly defining how the agency will monitor and evaluate its risk activities could assist FDIC in addressing or preventing weaknesses in its evaluations.

Based on our review of human capital and risk assessment programs at FDIC, we are making the following two recommendations to the Chairman of FDIC:

To ensure that it can measure the contribution that key human capital initiatives make toward achieving agency goals, FDIC should take steps to identify meaningful, outcome-based performance measures to include in its training and development scorecard and communicate available performance results to all FDIC employees.

To strengthen the oversight of its risk assessment activities, FDIC should develop policies and procedures clearly defining how it will systematically evaluate and monitor its risk assessment activities and ensure that required evaluations are conducted in a comprehensive and routine fashion.

We provided a draft of this report to FDIC for review and comment. In written comments (see app. 
II), FDIC generally agreed with the report and the recommendations. FDIC stated it was committed to building and maintaining a knowledgeable and flexible workforce and is in the process of developing a comprehensive set of outcome-based performance measures to assist in determining the effectiveness of key training and development programs. FDIC also described its plans to conduct extensive evaluations of two of its human capital initiatives, the Corporate Employee Program and Professional Learning Accounts. These evaluations are intended to utilize outcome-based performance measures in order to provide FDIC with information on the extent to which the programs’ goals are achieved. FDIC also agreed that the agency would benefit from a review of its risk management activities to ensure they are comprehensive, appropriate to the agency’s mission, and fully evaluated. Accordingly, the agency has assembled a committee to perform an in-depth review of its current risk assessment activities and evaluation procedures. The committee will make recommendations for strengthening the agency’s risk assessment framework and FDIC executive management will establish a plan for implementing the committee’s recommendations. FDIC also provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Chairman of the Federal Deposit Insurance Corporation, interested congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2717 or at jonesy@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. 
This report responds to a mandate included in the Federal Deposit Insurance Reform Conforming Amendments Act of 2005 requiring the Comptroller General to report on the appropriateness of FDIC’s organizational structure. Specifically, this report focuses on three areas that influence the effectiveness of FDIC’s organizational structure and reflect key internal controls: (1) mechanisms used by the FDIC board of directors to oversee and manage the agency; (2) FDIC’s human capital strategies and how training and development programs are evaluated; and (3) FDIC’s process for monitoring and assessing risks to the industry and the deposit insurance fund and how that process is overseen and evaluated. To describe how FDIC’s board of directors oversees and manages the agency, we reviewed FDIC’s enabling legislation, bylaws, and other governance documents to understand the legal authority, oversight responsibilities, and structure of FDIC and its board of directors and standing committees. We also reviewed our reports and literature on characteristics of boards of directors to identify management issues and common practices among boards of directors. We met with knowledgeable academicians and researchers to gain a better understanding of management practices at organizations overseen by boards of directors. To obtain more information on how FDIC’s board manages and oversees the agency, we conducted interviews with members of FDIC’s current board of directors and the board’s Audit Committee members. We developed a standardized interview guide, and used the same set of questions for each interview session. To obtain independent views from board members, we met with each board member separately; each board member’s deputies or other senior staff also participated in the interviews. We also attended two FDIC board meetings and held additional interviews with former FDIC officials to gain a broader understanding of governance at FDIC. 
To gain a better understanding of one mechanism for managing the agency, delegations of authority, we interviewed officials in FDIC’s Legal Division and reviewed FDIC’s master set of delegations to FDIC divisions and officers as well as a directive describing the process for issuing delegations. We also consulted our Standards for Internal Control in the Federal Government to determine how delegations of authority affect an agency’s internal control environment. To describe FDIC’s human capital strategies, we gathered and analyzed information from a variety of sources. We reviewed our guidance and reports on federal agencies’ workforce planning and human capital management efforts to identify recommended strategic workforce planning principles for high performing organizations. We reviewed relevant work of FDIC’s Office of the Inspector General and obtained documentation of certain findings from previous Inspector General reports related to FDIC’s human capital strategic planning. We interviewed FDIC officials on the Human Resources Committee, senior managers in various FDIC divisions, and officials in Corporate University to obtain information on how critical skill needs and skill gaps are addressed and how FDIC develops and implements human capital initiatives, including training and development programs. We also obtained and reviewed documentation of FDIC’s human capital goals and how FDIC’s primary divisions track their progress toward meeting those goals. To determine how FDIC evaluates its training and development programs, we interviewed Corporate University officials and obtained relevant documentation. We also consulted our report, Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government, to obtain information and criteria on evaluating training programs. 
To examine the extent to which FDIC monitors, assesses, and plans for risks facing banks and thrifts, the industry as a whole, and the deposit insurance fund, we interviewed FDIC officials in divisions directly responsible for risk-related activities, such as the Divisions of Supervision and Consumer Protection and Resolutions and Receiverships. We obtained and reviewed written and testimonial information on FDIC’s risk management activities, plans for addressing the biggest dangers to the industry and insurance fund, and FDIC’s methods for evaluating its risk management activities. We examined research reports and papers describing the implications of financial institution failures, documentation of the agency’s examination procedures, and various documents related to the work of FDIC’s Risk Analysis Center, National Risk Committee, and Resolutions Policy Committee. We also attended a presentation of the Risk Analysis Center to understand its role and function as part of FDIC’s risk management activities and observed a meeting of one of FDIC’s six Regional Risk Committees. In addition, we examined our own guidance, including our Standards for Internal Control in the Federal Government, to determine how risk monitoring and assessment activities help provide effective internal controls. Finally, to address all three objectives in this report, we conducted site visits to FDIC regional and field offices in three states (California, Georgia, and Texas). The purpose of the site visits was to obtain more in-depth information on the FDIC board of directors’ management and oversight responsibilities; issues related to human capital, workforce planning, and training and development; FDIC’s methods for identifying, assessing, and monitoring risk; and FDIC’s methods of evaluating its progress toward meeting agency goals. 
In each state, we conducted interviews with senior managers from FDIC’s three main divisions and the Human Resources Branch; analysts and economists in the Division of Insurance and Research; case managers in the Division of Supervision and Consumer Protection; and financial institution examiners in the Division of Supervision and Consumer Protection. Additionally, in Dallas, Texas, we interviewed staff within FDIC’s Division of Resolutions and Receiverships because the Dallas office is where resolutions and receiverships activities are centered. We developed a standardized interview guide for each group of employees we interviewed, and used the same set of questions for each interview session. To encourage open communication, we met with each group of employees separately, and except in one instance, subordinate employees were interviewed separately from their managers. We judgmentally selected the states based on the following characteristics: staffing levels in each regional and field office; the number and size of FDIC-supervised institutions located in a particular region; regional and field office structure; geographic dispersion; recommendations of officials from FDIC’s Office of Inspector General; and proximity of the field office to the regional office coupled with time and travel resources. To assess the reliability of the employment data presented and discussed in the background section of this report, we (1) reviewed existing information about the data and the system that produced them and (2) interviewed agency officials knowledgeable about the data. For FDIC data on overall employment from 1991-2006, we performed some basic reasonableness checks of the data against data from the Office of Personnel Management’s Central Personnel Data File (CPDF). 
When we found discrepancies, such as considerable differences between data from the two sources, we brought them to the agency’s attention and worked with a data analyst at FDIC to understand the discrepancies before conducting our analyses. For employment trends by occupation, FDIC was unable to provide accurate data for years prior to 2001 due to the integration of several legacy systems and databases. Therefore, we used data from the CPDF to approximate employment data by occupation. Although FDIC officials noted certain limitations of the CPDF data, they stated that the data were accurate within a sufficient margin of error for reporting of governmentwide workforce demographics and trends. After reviewing possible limitations in FDIC’s overall employment data and CPDF data by occupation, we determined that all data provided were sufficiently reliable for the purposes of this report. We conducted our work in California, Georgia, Texas, and Washington, D.C., from May 2006 through January 2007 in accordance with generally accepted government auditing standards. In addition to the individual named above, Kay Kuhlman, Assistant Director; Kenrick Isaac, Jamila Jones, Alison Martin, David Pittman, Omyra Ramsingh, and Christopher Schmitt made key contributions to this report.
The Federal Deposit Insurance Reform Conforming Amendments Act of 2005 requires GAO to report on the effectiveness of the Federal Deposit Insurance Corporation's (FDIC) organizational structure and internal controls. GAO reviewed (1) mechanisms the board of directors uses to oversee the agency, (2) FDIC's human capital strategies and how its training initiatives are evaluated, and (3) FDIC's process for monitoring and assessing risks to the banking industry and the deposit insurance fund, including its oversight and evaluation. To answer these objectives, GAO analyzed FDIC documents, reviewed recommended practices and GAO guidance, conducted interviews with FDIC officials and board members, and conducted site visits to FDIC regional and field offices in three states. FDIC's five-member board of directors is responsible for managing FDIC. Information and communication channels have been established to provide board members with information on the agency's operations and to help them oversee the agency. The board also has four standing committees for key oversight functions. For example, the audit committee primarily oversees the agency's implementation of FDIC Inspector General audit recommendations. Finally, because the board cannot oversee all day-to-day operations, the board delegates certain responsibilities to senior management. FDIC has procedures for issuing and revising its delegations of authority, which help ensure that the delegations are appropriate for its current structure and banking environment. FDIC has reviewed specific delegations on occasion at the request of a board member or management and, more recently, in response to an Inspector General report's recommendation. Management of human capital is critical at FDIC because the agency's workload can shift dramatically depending on the financial condition of the banking industry. 
FDIC uses an integrated approach, in which senior executives come together with division managers to develop human capital initiatives, and the agency has undertaken activities to strengthen its human capital framework. FDIC created the Corporate Employee Program to develop new employees and provide training in multiple disciplines so they are better prepared to serve the needs of the agency, particularly when the banking environment changes. Some FDIC employees thought the program had merit, but they expressed concerns about whether certain aspects of the program could slow down the development of expertise in certain areas. FDIC, through its Corporate University, evaluates its training programs, and officials are developing a scorecard that includes certain output measures showing progress of key training initiatives toward agency goals. Officials told us that they would like to have outcome measures showing the effectiveness of their key training initiatives but have faced challenges developing them. However, outcome measures could help address employee concerns and ensure that the Corporate Employee Program achieves the agency's goals. FDIC has an extensive system for assessing and monitoring external risks. FDIC's system includes supervision of individual financial institutions and analysis of trends affecting the health of financial institutions. FDIC has also developed contingency plans for handling the greatest dangers to the deposit insurance fund—particularly the failure(s) of large institutions. In addition to risk assessment, a key internal control is monitoring risk assessment activities on an ongoing basis. FDIC has evaluated several of its risk activities, but most of the evaluations we reviewed were not conducted regularly or comprehensively. For example, some simulations of its plans for handling large bank failures were either out of date or inconsistent with FDIC's guidance. 
Developing policies and procedures and clearly defining how it will monitor and evaluate its risk activities could assist FDIC in addressing or preventing weaknesses in its evaluations.
While the U.S. food supply is generally safe, each year, according to a Centers for Disease Control and Prevention (CDC) estimate, tens of millions of Americans become ill and thousands die from eating unsafe food. Furthermore, USDA’s Economic Research Service has estimated that the costs associated with foodborne illnesses are about $7 billion, including medical costs and productivity losses from missed work. The safety and quality of the U.S. food supply are governed by a complex system that is administered by 15 agencies. The principal federal agencies with food safety responsibilities operate under numerous statutes underpinning the federal framework for ensuring the safety and quality of the food supply in the United States. These laws give the agencies different regulatory and enforcement authorities, and about 70 interagency agreements aim to coordinate the combined food safety oversight responsibilities of the various agencies. The federal system is supplemented by the states, which have their own statutes, regulations, and agencies for regulating and inspecting the safety and quality of food products. USDA and FDA have primary responsibility for ensuring the safety of foods. USDA’s Food Safety and Inspection Service regulates meat, poultry, and certain egg products, and FDA regulates the safety of all other foods, including milk, seafood, and fruits and vegetables. The Environmental Protection Agency (EPA) sets limits on the amount of pesticide residues that are allowed in food, and the National Marine Fisheries Service within the Department of Commerce provides fee-for-service inspections to ensure the safety and/or quality of commercial seafood. Similarly, USDA’s Agricultural Marketing Service (AMS) performs food quality assurance inspections that include food safety elements. In addition to their established food safety responsibilities, USDA, FDA, and EPA, along with the Department of Homeland Security, have begun to address food security. 
Figure 1 summarizes the 15 agencies and their food safety responsibilities. Many proposals have been made to consolidate the U.S. food safety system, but to date no action has been taken. Several bills introduced in the Congress, reports by the National Academy of Sciences and the National Commission on the Public Service, and several of our reports and testimonies have proposed consolidation of the U.S. food safety system. For example, in 2001, parallel Senate and House bills proposed consolidating inspections and other food safety responsibilities in a single independent agency. In 2004, legislation was again introduced in the Senate and the House to establish a single food safety agency to protect public health, ensure food safety, and improve research and food security. This proposed legislation would combine the two food safety regulatory programs of USDA and FDA, along with a voluntary seafood inspection program operated by the Department of Commerce. The proposed new food safety program would have been based on a comprehensive analysis of the hazards associated with different foods and their processing and would have required, among other things, the adoption and enforcement of process controls in food establishments as well as the establishment and enforcement of science-based standards. In 1998, the National Academy of Sciences recommended integrating the U.S. food safety system and suggested several options, including a single food safety agency. More recently, the National Commission on the Public Service recommended that government programs that are designed to achieve similar outcomes be combined into one agency and that agencies with similar or related missions be combined into large departments. The Commission chairman testified before a House subcommittee that important health and safety protections fail when responsibility for regulation is dispersed among several departments, as is the case with the U.S. food safety system. 
The division of responsibility among several government agencies responsible for food safety is not unique to the United States. Food safety officials in the countries we selected for this review said they faced similar divisions of responsibilities and that their countries’ reorganizations were intended to address this problem. Although the seven countries whose food safety systems we reviewed are much smaller in population than the United States, they, like the United States, are high-income countries where consumers have very high expectations for food safety. Table 1 presents data on population, size of economy, food expenditures, and consumer spending for food as a percentage of total consumer spending for the seven countries and the United States. The table shows that U.S. consumers’ spending on food as a percentage of their total spending is somewhat similar to that of the other seven countries, ranging from about 10 percent in the United States to over 16 percent in Ireland and the United Kingdom. In general, high-income countries tend to spend a smaller percentage of their income on food than low-income countries. For instance, in low-income countries consumers’ spending for food often exceeds 50 percent of their total spending. Most of the countries we selected for this review are members of the EU and, as such, must abide by EU food safety legislation. The development and implementation of EU food safety legislation is the responsibility of the Health and Consumer Protection Directorate General. In 2002, to respond to consumer concerns about the safety of the food supply, the EU created a new independent food safety institution, the European Food Safety Authority (EFSA), which is responsible for providing independent, scientific advice on all matters linked to food and animal feed safety. 
The tasks performed by EFSA include communicating with the public on food safety matters and providing risk assessments to the European Commission, the European Parliament, and the EU Member States. In addition to creating EFSA, in April 2004 the EU adopted additional, comprehensive food safety legislation that becomes effective, in large part, on January 1, 2006. Together with the earlier regulation establishing EFSA, the legislation is intended to create a single, transparent set of EU food safety rules applicable to all food, including animal and nonanimal products. The legislation covers the entire food supply chain from production to consumption and places more requirements on EU member nations. It identifies specific food safety objectives and, unlike much of the EU’s previous food safety legislation, specifies the methods by which those objectives must be achieved. For example, it requires food business operators to adopt specific hygiene measures and a permanent procedure or procedures based on Hazard Analysis and Critical Control Point principles. Moreover, the legislation requires that each EU country establish and implement an official food and animal feed control plan by January 1, 2007. Thereafter, an annual report on the implementation of this national control plan must be submitted. To carry out its official controls over food and animal feed, a country must designate responsible entities. If a country has more than one responsible entity, it must ensure effective coordination among these entities. According to a paper presented at a 2004 international forum for food safety regulators, in recent years many EU countries have chosen to establish a national food safety authority, but the establishment of such an authority is not obligatory.

The seven countries we examined had two primary reasons for reorganizing and consolidating their food safety systems, took various approaches, and often faced similar challenges. 
While the extent to which countries consolidated their food safety systems varied considerably, each country established a single agency to lead food safety management or enforcement of food safety legislation. Although most countries incurred some consolidation start-up costs, government officials, as well as food industry and consumer stakeholders, generally agree that consolidation has led to significant qualitative improvements in the effectiveness or efficiency of their food safety systems. Our ability to evaluate these improvements and other information officials provided was limited because none of the countries has conducted an analysis to measure the effectiveness and efficiency of its consolidated food safety system relative to that of the previous system. In some cases, it may be too early to fully assess the benefits of the countries’ consolidations. The seven countries whose reorganizations we reviewed consolidated their food safety systems primarily to improve program effectiveness and efficiency or to respond to public concern about food safety. According to a 1998 National Research Council report, an effective food safety system protects and improves the public health by ensuring that foods meet science-based safety standards through the integrated activities of the public and private sectors. This report also addresses efficiency, stating the greatest strides in ensuring food safety from production to consumption can be made through a scientific risk-based system that ensures that surveillance, regulatory, and research resources are allocated to maximize effectiveness. Public concern about food safety became an important issue in several industrialized countries during the 1990s when bovine spongiform encephalopathy (BSE), commonly known as mad cow disease, was confirmed in large numbers of cattle. 
In addition to improving effectiveness and efficiency and responding to public concern about food safety, some EU countries were further prompted to consolidate by the need to comply with recently approved EU food safety legislation that becomes effective, in large part, January 1, 2006. The new legislation places more requirements on EU member countries. For example, each EU member country will be required to submit and annually update a plan for the implementation of the new law and to report annually on the implementation of that plan. Regarding consolidation approaches, each country established a single agency to lead food safety management or enforcement of food safety legislation. Each country modified its existing legal framework to give legal authority and responsibility to the new food safety agency. However, countries’ approaches in consolidating their food safety systems varied, particularly with respect to how comprehensively food safety functions were consolidated. For example, Denmark centralized its system by creating a new federal agency in which it consolidated almost all its food safety functions and activities, including food inspections, which were previously distributed among several federal and local government agencies. On the other hand, in Germany, which established a new lead food safety agency, the 16 federal states continue to be responsible for oversight of food inspections performed by local governments. Germany’s new food safety agency functions as a coordinating body to lead food safety management, including formulation of general administrative rules to guide the federal states’ implementation of national food safety laws. In reorganizing their food safety systems, officials from several countries cited challenges in two areas. 
First, many countries faced a similar decision regarding whether to place the new agency within the existing health or agriculture ministry or establish it as a stand-alone agency, while also determining what responsibilities the new agency would have. A second challenge, cited by officials in several countries, was helping employees assimilate into the new agency's culture and support its priorities. Although countries have not formally analyzed consolidation results, the government officials and stakeholders we interviewed in each of the seven countries cited improvements in food safety system operations and stated that the net effect of consolidation has been or will likely be positive. None of the countries has conducted an analysis to measure the effectiveness and efficiency of its consolidated food safety system relative to that of the previous system. For example, officials stated that they could not determine whether their country's consolidation had resulted in public health benefits, such as reduced foodborne illness, because consolidation was only one of many factors that could affect the frequency of foodborne illness. Furthermore, it may be too early to fully assess the benefits of consolidation for several of the countries, as their new food safety structures have been functioning for 3 years or less. Although limited, some information on costs and benefits was available. As expected, most countries incurred start-up costs, which included, for example, the acquisition of buildings and purchases of laboratory equipment. Some countries experienced a temporary reduction in the quantity of food safety activities performed due to consolidation-related disruptions. However, government officials in each of the seven countries believe the benefits of their consolidations have exceeded or will likely exceed the costs. 
In particular, these officials, as well as food industry and consumer stakeholders in each country, consistently stated that consolidation of their food safety systems has led to significant qualitative improvements in food safety operations that enhance effectiveness or efficiency. These improvements include the reduction of overlapping food safety activities, such as inspections of food establishments by various agencies. (Figure 2 summarizes each country’s improvements in food safety operations as cited by government officials, food industry stakeholders, or consumer stakeholders.) Moreover, government officials in Canada, the Netherlands, and Denmark stated that some cost savings may be achieved as a result of changes that have already taken place or are expected from planned changes needed to complete their consolidation efforts. Figures 3 through 9 show summary information on each country’s reasons for consolidation, entities responsible for food safety before and after consolidation, challenges, and start-up and other consolidation-related costs, as well as examples of consolidation benefits. This information was provided by government officials and food industry or consumer stakeholders. For more detailed information on each country, see appendixes II through VIII. Although different in many respects, the seven countries’ experiences provide information on the reform and consolidation of food safety systems that can be useful to U.S. policymakers. While the seven countries had to overcome challenges, their experiences show that reforming and streamlining food safety systems is possible when a consensus exists among government agencies, the food industry, and consumer organizations. As we learned from food safety officials and industry and consumer stakeholders in each country we reviewed, such reforms may result in benefits such as reducing overlaps in food safety inspections and basing the frequency of inspections on the risks posed by specific products. 
We have reported in the past that the federal food safety system in the United States could benefit from statutory and organizational reforms. As Congress and other policymakers consider the advantages and disadvantages of streamlining multiple existing food safety statutes into a uniform and risk-based framework and whether to consolidate federal food safety functions under a single agency, these countries’ lessons may offer useful information. We provided relevant excerpts from a draft of this report to officials of food safety agencies in Canada, Denmark, Germany, Ireland, the Netherlands, New Zealand, and the United Kingdom for their review. The officials either replied that they had no technical comments or provided technical corrections, which we incorporated into the report as appropriate. Although this report does not evaluate HHS’s or USDA’s food safety programs and, therefore, makes no recommendations to the agencies, we provided a draft copy to HHS and USDA for review and comment. In commenting on this report, both HHS and USDA stated that U.S. food safety agencies are working together effectively. HHS noted that our report provides limited quantitative data on the results of each country’s consolidation. The report clearly states that the information presented was obtained primarily through structured interviews with high-level government officials and food industry and consumer stakeholders from each of the seven countries we reviewed. In addition, our report acknowledges that these officials provided limited quantitative data; when it was provided to us, we included it in the report. Our report also acknowledges that none of the countries has conducted a formal analysis to compare the effectiveness and efficiency of its consolidated food safety system with that of the previous system. HHS also commented that the countries included in our report have smaller food and agriculture industries than the United States. 
We agree, and our report highlights such differences in table 1, which shows that the seven countries have smaller economies and less total food consumption than the United States. Our report also points out, however, that these countries are similar to the United States in that they are high-income countries where consumers have high expectations for food safety. Finally, HHS commented that the report does not identify the agencies that are responsible for foodborne illness surveillance in each of the countries we reviewed. We have added a footnote to indicate that the report does not contain that information. HHS also provided technical comments, which we have incorporated in the report as appropriate. HHS’s comments are presented in appendix IX. In its comments, USDA stated that the report does not contain rigorous cost-benefit analyses or quantitative data on the public health effects of the countries’ consolidations, such as changes in foodborne illness rates. Our report clearly acknowledges that we obtained limited quantitative information and that none of the countries has conducted an analysis to compare the effectiveness and efficiency of its consolidated food safety system with that of the previous system. Specifically, with regard to the effect of consolidation on public health benefits, the report states that officials told us they could not determine whether their country’s consolidation had reduced foodborne illness because consolidation was one of many factors that could influence the frequency of foodborne illness. USDA also stated that the report does not contain quantitative data on reorganization costs. This statement is incorrect. All but one of the seven countries provided information on the costs of reorganization, which the report presents in figures 3 through 9. Similar to HHS, USDA commented that the countries we reviewed have much smaller populations and also differ from the United States in climate and agricultural production. 
Our report identifies differences of this type in table 1. The report points out, however, that these countries and the United States have at least one important similarity: they are high-income countries where consumers have high expectations for food safety. As a result, we believe the consolidation experiences of the countries reviewed have applicability to the United States. USDA also provided technical comments, which we incorporated into the report as appropriate. USDA's comments are included in appendix X. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Agriculture and of Health and Human Services, and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix XI. This report describes the approaches the seven countries have taken in consolidating food safety functions, the challenges they faced, and the results of the countries' efforts, including benefits and costs cited by government officials and industry or consumer stakeholders. For the purposes of this report, we defined consolidation as the transfer of responsibility and resources for performing a food safety function from two or more agencies to a single agency. To identify countries that have consolidated their food safety systems, we interviewed USDA and FDA officials, reviewed a 2001 World Health Organization report on how countries have organized their food safety systems, and reviewed the Internet sites of countries' food safety agencies. 
We selected a judgmental sample of seven countries for our review based on the following criteria: USDA and FDA officials agreed that these countries have consolidated, the countries have high per capita income, and the countries have consolidated functions of their food safety system within the last 10 years. To address our objective, we examined the seven countries’ efforts to streamline and consolidate their food safety systems, including the benefits and costs that resulted, as cited by government officials. We conducted structured interviews with senior government officials from food safety agencies and with representatives of food industry or consumer organizations in each country, reviewed and analyzed the documents they provided, and reviewed World Health Organization and GAO reports. We also met with European Union (EU) food safety officials to discuss how the EU’s food legislation is affecting its member countries’ decisions to consolidate, as well as how the EU interacts with food safety agencies in member countries. The information on countries’ food safety systems in this report, including descriptions of laws, is based almost exclusively on interviews with and documentation provided by high-level food safety officials as well as food industry or consumer stakeholders from the seven countries we examined. Most of the information we obtained was qualitative. To the extent possible, we corroborated the qualitative information provided by government officials by interviewing food industry and consumer organization stakeholders. We obtained very limited quantitative information. We asked government officials questions intended to help us assess the reliability of the quantitative information they provided, but we could not determine the reliability of most of this information because of constraints on the amount of time the countries’ food safety officials could devote to our study. 
Although we could not assess the quantitative information’s reliability, we are reporting it in order to provide descriptive information to inform policymakers in the United States about the various approaches, challenges, and benefits that countries’ officials identified to us regarding their consolidation efforts. The data in table 2 on selected countries’ population, gross domestic product, food consumption, income, and consumer spending were used for background purposes and were not verified. We conducted our work from April 2004 through December 2004 in accordance with generally accepted government auditing standards. In 1997, Canada consolidated its food inspection activities with the creation of the Canadian Food Inspection Agency. Food safety standard setting, research, and risk assessment were consolidated within Health Canada. Canada consolidated its food safety system to (1) improve effectiveness by making inspections and enforcement more consistent, clarifying responsibilities, and enhancing reporting to the Canadian parliament, (2) improve efficiency by reducing duplication and overlap in food safety activities, and (3) reduce federal spending. Before Canada consolidated its food safety system, its food safety inspection, food policy, and risk assessment responsibilities were shared by three separate entities—Health Canada, Agriculture and Agri-Food Canada, and Fisheries and Oceans Canada. In 1997, the Canadian parliament approved the Canadian Food Inspection Agency (CFIA) Act. All food safety inspection activities were assigned to CFIA, a separate regulatory agency whose president reports to the Minister of Agriculture and Agri-Food Canada. CFIA is responsible for all food safety inspections and related activities, including inspections of imported and domestic products, export certifications, laboratory and diagnostic support, crisis management, and product recalls. 
CFIA is also responsible for food quality assurance inspections and animal health and plant disease control. Canada has adopted a comprehensive farm-to-table approach to food safety responsibilities. Public health policy and standard-setting responsibilities, including research, risk assessment, and setting limits on the amount of a substance allowed in a food product, were consolidated within Health Canada. With the placement of food safety inspections and related activities in CFIA and risk assessment in Health Canada, inspection responsibilities were separated from risk assessment to allow for an independent scientific risk assessment process. In 1997, the Canadian Food Inspection Agency Act created the agency and gave it authority to implement existing food safety laws. Additional legislative reform intended to increase the food safety system's effectiveness and efficiency beyond the gains already achieved was introduced in the Canadian parliament in November 2004. According to CFIA, under this proposed legislation, the authorities for CFIA inspectors contained in eight commodity-specific laws would be strengthened and made consistent. In addition, the law would give CFIA additional inspection and enforcement authorities to protect the food supply, such as the authority to hold products while awaiting test results. CFIA faced a challenge in helping staff adjust to a new organization as its work force combined former employees from the health, agriculture, and fisheries departments. According to officials, several senior officials retired when CFIA was being formed. As a result, the agency temporarily had less experience at the management level than would have been optimal. Canadian officials told us it is important to plan for the merging of organizational cultures and to bring the new work force into a dialogue on the new organization's vision and objectives. 
Employee unions were initially concerned that the consolidation would lead to privatization of the food safety work force. According to a CFIA official, these concerns decreased substantially when the work force realized privatization was not planned. In addition, the official stated that labor relations have improved because unions now only have to interact with one food safety inspection agency instead of three. In addition, assignment of responsibilities was a major obstacle during the implementation phase of Canada’s consolidation. To address this challenge, Health Canada and CFIA jointly developed a food safety functions matrix to clearly define responsibilities. According to a senior official, CFIA’s fiscal year 2003 food safety spending was $360 million Canadian ($232 million U.S.). Spending for all of its activities—food safety, animal health, and plant protection—was $517 million Canadian ($334 million U.S.). User fees assessed on food establishments financed a portion of this spending. These user fees for food inspections have been frozen at about $40 million Canadian (about $26 million U.S.) since 1997 and in fiscal year 2003 accounted for about 11 percent of CFIA’s food safety spending. According to a Health Canada official, the fiscal year 2004 budget for Health Canada’s food program was about $42 million Canadian ($31 million U.S.). Reducing spending was one of Canada’s objectives in consolidating its food safety system. A senior CFIA official stated that during the agency’s first two years, fiscal years 1997 and 1998, the consolidation reduced food safety operating expenditures by 10 percent relative to preconsolidation food safety spending. The official based this statement on his knowledge of food safety operating expenditures before and after the consolidation. The official noted that Canada has not performed an analysis on the effect of its consolidation on food safety expenditures. 
According to the official, such an analysis would be difficult because (1) the last audited preconsolidation quantification of food safety expenditures was published in a 1994 report by Canada's Office of the Auditor General and (2) in fiscal year 1997, when the CFIA was formed, food safety, animal health, and plant health expenditures were not separated in financial reports. The agency began separating these expenditures in its financial reports in fiscal year 2000. According to the CFIA official, CFIA's work force consisted of about 5,300 employees in fiscal year 2003. According to a Health Canada official, the Health Canada food program had approximately 400 staff in fiscal year 2004. The food industry stakeholders we interviewed were consistently supportive of Canada's consolidation. Among the consolidation-related benefits they cited were improved communications, particularly with respect to food recalls; easier interaction with regulators through having a single contact for enforcement and compliance; fewer inspectors visiting processing plants; clarification of responsibilities; and increased consistency in the enforcement of food safety laws. While expressing overall support for the consolidation, representatives of some food industry organizations cited a need for more timely decision making. For example, representatives of one organization said that their member companies, which often have more technical expertise than the government authorities on specific matters, sometimes have to wait too long for a decision on a food safety question and lose commercial opportunities as a result. In addition, some industry representatives said Health Canada's setting of standards should better reflect CFIA's ability to enforce the standards. In 1997, Denmark consolidated its food safety system with the creation of the Danish Veterinary and Food Administration. 
Denmark consolidated its food safety system to (1) improve effectiveness in several areas, including communications with consumers and consistency of inspections, and (2) improve efficiency in numerous ways, such as by moving resources to areas that present higher risks and reducing overlaps in responsibilities. Before Denmark consolidated its food safety system, responsibilities for inspections were shared by multiple entities, including the Ministry of Agriculture, the Ministry of Fisheries, and a large number of municipalities. Standard-setting responsibilities were shared by the Ministry of Health, the Ministry of Agriculture, and the Ministry of Fisheries. In 1997, the Danish government consolidated the multiple agencies' activities into the Danish Veterinary and Food Administration (DVFA), an agency of the newly formed Ministry of Food, Agriculture, and Fisheries. Inspections were consolidated within the DVFA in 1999 and 2000. In August 2004, as part of a governmental reorganization, the Danish Veterinary and Food Administration was moved to the newly created Ministry of Family and Consumer Affairs. The DVFA is responsible for almost all food safety functions. For logistical reasons, a few duties were not moved to the new agency in 1999 and 2000. These remaining duties are with the Plant Directorate, which is responsible for animal feed inspections, and the Directorate for Fisheries, which is responsible for inspections of fish on ships. These two agencies are in the Ministry of Food, Agriculture, and Fisheries. According to officials, the DVFA has farm-to-table food safety responsibilities. Denmark has separated risk management and risk assessment. The Danish Institute for Food and Veterinary Research, a separate institute within the DVFA, is responsible for research and risk assessment. The Danish Food Act, adopted in 1998, reformed Danish food safety law by replacing seven existing food laws with this single law. 
The legislation harmonizes the regulation of food of animal origin and other food, and, according to Danish officials, it is very similar to EU food legislation. Officials said that when bringing employees from several agencies into a new organization, it is important to establish a common culture. The consolidation involved over 2,000 employees, including about 500 municipal inspectors who were moved to the DVFA. To foster the development of a common organizational culture, food agency officials moved employees to centralized locations and established online discussion groups to familiarize employees with the mission and culture of the organization. In addition, the division heads in the regional offices held monthly meetings with employees. Moreover, officials stressed the need to have adequate funding for start-up costs. According to officials, DVFA's budget for 2004 was 856 million Danish kroner (about $142 million U.S.). About 61 percent of this planned spending was to be financed by user fees assessed on food establishments. User fees finance nearly all meat inspections. Officials stated that DVFA had about 2,000 employees (1,820 full-time equivalents) in 2004. The food industry and consumer organization representatives we interviewed told us their organizations supported the proposed consolidation and continue to support it. They stated that the consolidation has improved the food safety system's effectiveness. Improvements cited include more consistent enforcement of food safety regulations, reduced overlap in inspections, streamlined communications, clearer responsibilities, and improved service delivery as a result of having a single contact. For example, a food industry representative stated that having a single contact leaves no doubt about which authority to approach when a problem or question arises. 
In addition, the representative said Denmark’s salmonella control program is an example of a situation where the consolidation’s clearer responsibilities have been beneficial. Without the consolidation, said the representative, this program probably would have experienced conflicts between the health and agriculture ministries and would not have been as successful. User fees are also an important consideration for the Danish food industry. As noted earlier, Danish food companies finance a significant portion of DVFA’s spending through payment of user fees. According to industry officials, the payment of these fees motivated the food industry to support reforms in the Danish food safety system that would hold down inspection costs. According to a consumer organization representative, the consolidation made food safety inspections and enforcement of laws and regulations more consistent. Before the consolidation, the representative said, many municipal food safety entities had insufficient expertise and resources and the central government’s oversight of municipal food safety entities was uneven. In addition, the representative stated that the consolidation had facilitated the development of DVFA’s online system to report the results of inspections. The representative believes having access to inspection reports has contributed to Danish consumers’ continued high confidence in the safety of their food. The German parliament approved the creation of the Federal Office of Consumer Protection and Food Safety and the Federal Institute for Risk Assessment in 2002. Germany consolidated its food safety system in response to public concerns about food safety stemming from the discovery of BSE in 2000 and other food safety problems. An additional objective was improved compliance with EU food safety legislation. 
Before Germany consolidated its food safety system, responsibilities for research, risk assessment, and communication were shared by the Federal Ministry of Health and the Federal Ministry of Food, Agriculture and Forestry. Responsibilities for implementation of federal legislation and oversight of inspections were shared by the 16 German federal states, and inspections were performed by municipalities and other local governments. In 2002, the German parliament approved the creation of two new food safety agencies. Both of the new agencies are in the Federal Ministry of Consumer Protection, Food, and Agriculture. The Federal Office of Consumer Protection and Food Safety, a coordinating body, is responsible for leading food safety risk management. It serves as Germany's contact point with the European Commission, including (1) acting as a coordinator for Food and Veterinary Office audits of compliance with EU food safety legislation and (2) implementing in Germany the European rapid alert system for consumer health protection and food safety. In addition, this agency's responsibilities include coordinating food safety surveillance at the federal level and formulating general administrative rules to guide the implementation of national food safety laws by the German federal states. The federal states continue to be responsible for implementation of food safety legislation and oversight of food inspections performed by local governments. The other new food safety agency is the Federal Institute for Risk Assessment, whose responsibilities include providing impartial scientific advice and support for the law-making activities and policies of the federal government in all fields concerning food safety and consumer health protection, except for animal diseases. The Federal Institute for Risk Assessment performs risk assessments and communicates risk assessment results to the general public. 
According to officials, this agency was created to separate risk assessments from decision making. The purpose of this separation was to increase public confidence in risk assessments by distancing these assessments from possible political interference. A chronology of the consolidation of Germany's food safety system follows:
November 2000—The detection of BSE in German cattle and other food safety issues undermined consumer confidence in the food safety system.
July 2001—The Federal Performance Commissioner's Report, a study of the German food safety system, was produced by a German national audit office task force. The report's three main recommendations were to (1) reorganize the Federal Ministry of Consumer Protection, Food, and Agriculture, (2) establish a coordinating body within the federal government, and (3) establish a scientific unit to perform risk assessments.
December 2001—In response to the task force report's second and third recommendations, administrative guidance issued by the Federal Ministry of Consumer Protection, Food, and Agriculture established the Federal Office of Consumer Protection and Food Safety and the Federal Institute for Risk Assessment as "institutes."
August 2002—The German parliament approved the Consumer Health Protection and Food Safety Restructuring Act, authorizing the creation of these two new food safety agencies.
November 2002—The Consumer Health Protection and Food Safety Restructuring Act took effect.
December 2003—The Federal Office of Consumer Protection and Food Safety presented the federal states with a draft general administrative regulation that would harmonize their food safety controls. As of October 2004, German officials expected these regulations to take effect in January 2005.
The Consumer Health Protection and Food Safety Restructuring Act, which took effect in November 2002, separated the fields of risk management and risk assessment by authorizing the creation of Germany's two new food safety agencies. 
According to officials, negotiations between the federal government and the federal states concerning reform of food safety law have been complicated, as some reforms that would give the federal government increased authority would require constitutional changes. According to officials, the Federal Office of Consumer Protection and Food Safety’s budgeted spending for 2004 was 25 million euros (about $31 million U.S.), and the budgeted 2004 spending for the Federal Institute for Risk Assessment was 47 million euros (about $58 million U.S.). In 2004, according to officials, the Federal Office of Consumer Protection and Food Safety had about 350 employees, and the Federal Institute for Risk Assessment had about 540 staff. The German food industry and consumer organization stakeholders we contacted support the consolidation. According to a representative of a major food industry organization, the German food industry supports the creation of the Federal Office of Consumer Protection and Food Safety because it has increased coordination of the federal states’ food safety activities and has improved Germany’s ability to respond to potential food safety crises. In addition, to further improve Germany’s ability to prevent potential food safety crises and in view of impending EU legislation, the food industry advocates increasing the Federal Office of Consumer Protection and Food Safety’s authority to coordinate the federal states’ food safety activities, thus enabling increased harmonization of food safety standards and control procedures across states. Moreover, the food industry representative stated that the separation of risk assessment and risk management has given the food safety system more credibility in the view of the public and industry. A representative of the consumer organization we contacted stated that the consolidation has made the food safety system more effective. 
In addition, the representative stated that the consolidation has increased German consumers’ confidence, but added that German consumers continue to have less confidence than consumers in other European countries. The consumer organization favors giving the Federal Office of Consumer Protection and Food Safety increased authority. In 1998, the Irish government enacted legislation creating the Food Safety Authority of Ireland. The Authority assumed all responsibility for food safety in July 1999. Officials stated that Ireland consolidated responsibility for food safety and food law enforcement within a single national agency to address public concern about food safety stemming from food scares and the detection of BSE in Ireland. Maintaining a strong food safety system is also extremely important for Ireland’s economy for several reasons. According to senior food safety officials, roughly 90 percent of the country’s meat and 75 percent of its food are produced for export. If trading partners lost confidence in the food safety system, and thus in the food itself, exports could decline even without a major outbreak. In addition, Ireland’s economy depends heavily on tourism, and outbreaks of foodborne illnesses could affect tourism and cause serious harm to the economy. Furthermore, when BSE was found in Irish cattle in the 1990s, Irish consumption of meat declined as consumers questioned the effectiveness of the Department of Agriculture and Food, which was responsible for inspections of abattoirs, meatpacking plants, and farms. Also, some consumers perceived that the Department favored the interests of the food industry over consumer protection.
According to officials, before the consolidation of Ireland’s food safety system, food safety functions were the responsibility of over 50 entities across the government, including six government departments, 33 local authorities, and eight regional health boards, with no central government authority to coordinate all of these entities. The Department of Agriculture and Food inspected farms, slaughterhouses, and meat processing facilities for compliance with food safety regulations and was also responsible for the promotion of the agriculture industry. Local governments and regional authorities (e.g., health boards) had various other food safety responsibilities, such as inspecting meat plants producing for the home market, overseeing the production and processing of food of nonanimal origin, and regulating the retail and catering sectors. In addition, multiple agencies were tasked with enforcing food safety legislation, with no central accountability system in place to ensure that food safety legislation was being properly enforced or to coordinate food safety functions and activities across the food supply chain. As a result of a series of food scares in the 1990s, the Irish government undertook a review of its food safety system in 1996 to assure the safety and quality of its food products. The government’s review eventually led to the establishment of a lead food safety agency. The government established an interdepartmental committee to advise the Irish parliament on how the various food safety entities could best be coordinated. In early 1997, this committee recommended establishing a Food Safety Authority of Ireland. Under this recommendation, the responsibility for implementation of food safety laws would have remained with the existing agencies. The Authority would have audited these agencies and had a voice in setting and maintaining standards as well as in promoting good practices.
Before this recommendation was enacted, a new government took office after an election in mid-1997. The new government believed that the Authority should be directly accountable for all food safety functions, including enforcement of food legislation. This proposal led to the Food Safety Authority of Ireland (FSAI), formally established in law under the Food Safety Authority of Ireland Act, 1998, which began its official operation in January 1999. The legislation that established FSAI provides for the transfer of all relevant staff to the new agency. Alternatively, it provides that FSAI can enter into service contracts with existing agencies for the enforcement of food legislation. Because the personnel issues surrounding a smooth transfer of staff would likely have delayed the commencement of FSAI’s food safety enforcement role, the service contract mechanism was used. At the time, roughly 2,000 staff, spread across more than 50 agencies, delivered food safety services throughout the country. Many staff had duties in addition to their food safety responsibilities, and officials therefore found it difficult to transfer “food safety” personnel to the Authority without disrupting other programs. FSAI is an independent, science-based body that reports to the Department of Health and Children. According to officials, the government deliberately placed FSAI under the auspices of the Department of Health and Children rather than the Department of Agriculture and Food, as the former’s focus is on consumer health and protection, whereas the latter is associated with industry and trade development and promotion. A Board of 10 members, appointed by the Minister of Health and Children, governs FSAI, although a chief executive officer leads day-to-day operations. In addition, the agency has a 15-member scientific committee that assists and advises the Board. See figure 10.
FSAI is the single regulatory authority with responsibility for enforcing food safety legislation in Ireland. This responsibility is managed through service contracts with agencies performing food safety activities. FSAI has the responsibility to monitor and audit these agencies to determine how well they fulfill the tasks laid out in their service contracts. FSAI meets formally at least three times a year with each agency’s liaison to facilitate monitoring of the service contracts and, in the second half of 2004, began auditing the agencies on their performance in fulfilling those contracts. FSAI has risk assessment, risk management, and risk communication responsibilities, including setting standards according to the scientific advice put forth by its scientific committee, making risk management decisions with the agencies that are responsible for conducting food safety inspection and enforcement, and communicating risks to consumers, the food industry, and public health professionals. According to officials, FSAI’s responsibility for food law enforcement begins when food or animals are transported from the farm. In the fisheries and aquaculture sector, it has responsibility for food law at the level of primary production. However, feed safety and animal welfare are outside its jurisdiction. Procedures are in place to deal with food scares and food crises, should they emerge. Such crisis measures include a 24-hour emergency number, through which local authorities can contact FSAI, as well as a memorandum of understanding between FSAI and all the agencies with food safety functions on how to coordinate during a crisis. In addition, FSAI is the national contact for the EU’s rapid alert food safety system. According to senior food safety officials, legislative reform of Ireland’s food safety laws was minor.
In addition to establishing FSAI, the Food Safety Authority of Ireland Act, 1998, transferred to FSAI the authority for enforcing existing food legislation and setting food safety and hygiene standards. Although food law in Ireland dates back to the 1800s, most of Ireland’s national food legislation today is derived from Ireland’s EU membership. According to officials, in deciding where to place the new food safety agency within the government, Ireland chose to place it under its existing Department of Health and Children specifically to separate food safety responsibilities from food and agriculture promotion efforts, which are the responsibility of the Department of Agriculture and Food. In addition, food safety agency officials took on the overall role of fostering the general understanding that the primary responsibility for food safety rests with the food industry. According to senior officials, FSAI works with all stakeholders toward this end. Industry stakeholders we spoke with stated that they are now aware that they have such a responsibility. This change was due, in part, to FSAI holding open forums with shellfish farmers, caterers, industry groups, and other stakeholders. The forums discussed problems and solutions, as well as advocated partnerships. FSAI officials estimated that FSAI spends 9.4 million euros (about $11.6 million U.S.) on food safety activities annually. Government departments, such as the Department of Agriculture and Food, still retain responsibility for policy and legislation and have separate budgets. In 2004, FSAI had 82 employees. According to industry stakeholders, FSAI has been successful in making the concerns and desires of consumers and retailers on food safety matters a higher priority than they were before the consolidation and in making food safety a higher priority for industry by fostering open communication. Industry representatives stated that these changes have been positive for industry.
For example, stakeholders stated that their positive relationship with FSAI has kept industry organizations informed about discussions at the EU level and has allowed them to voice their positions to FSAI on the issues discussed there. A report published in October 2003 on industry attitudes toward food safety stated that 910 of the 1,300 industry representatives surveyed (70 percent) considered food safer than it had been 10 years earlier. A consumer organization stakeholder cited several examples of why consumers support Ireland’s consolidation of its food safety system. For example, the official stated that FSAI is a single contact point for consumers when food safety concerns or questions arise. Moreover, the official said, the consolidation and creation of FSAI added accountability to food safety in Ireland, which did not exist before. As a result, said the official, consumers are more confident in the safety of the food supply, as well as more aware and knowledgeable about food safety. A report published in October 2003 on consumer attitudes toward food safety stated that more than half (53 percent) of the 800 adult consumers surveyed considered food safer than it had been 10 years earlier. In 2002, the Netherlands moved an inspection office from the health ministry and an inspection office from the agriculture ministry to its new food safety agency, the Food and Consumer Product Safety Authority. According to Dutch officials, further consolidation is to occur by January 1, 2006, when the merger of the two inspection offices is to be completed. A need to reduce overlap and improve coordination among the Dutch government’s multiple food safety entities, as well as public concern about food safety stemming from the dioxin contamination of animal feed, BSE, and other animal diseases, triggered the Netherlands’ decision to restructure its food safety system.
Officials noted that the need to comply with recently adopted EU legislation also motivated the Netherlands’ consolidation. Before the Netherlands consolidated its food safety agencies in 2002, the country maintained two food safety inspection offices, each located in a different ministry. The Inspectorate for Health Protection and Veterinary Public Health (KvW) was in the Ministry of Public Health, Welfare and Sports. The National Inspection Service for Livestock and Meat (RVV) was in the Ministry of Agriculture, Nature and Food Quality. According to a senior food safety official, having food safety responsibilities divided between two different ministries caused overlap within the Netherlands’ food safety system. For example, both ministries had responsibilities for inspecting slaughterhouse facilities. Officials stated that communications between the two inspection agencies needed to be streamlined and duplication of inspection efforts needed to be reduced. In 2001, before beginning its 2002 consolidation, the Netherlands tried to address the problems associated with having two inspection offices by creating the Netherlands Food Authority, a small team of scientists who monitored the work of the two inspection offices, KvW and RVV. (See fig. 11 below.) However, according to officials, by 2002, both the Dutch parliament and consumer organizations wanted more guarantees for food safety inspections than could be offered by the Netherlands Food Authority. Therefore, in July 2002, the Netherlands converted the Netherlands Food Authority into a new food safety agency, the Food and Consumer Product Safety Authority, and placed both the RVV and KvW within the new agency. Initially, the Food and Consumer Product Safety Authority was housed under the Ministry of Public Health, Welfare and Sports, but in 2003 it was moved to the Ministry of Agriculture, Nature and Food Quality. (See fig. 12 below.) 
According to officials, moving the new food safety agency to the agricultural ministry increased its prominence. According to officials, the Food and Consumer Product Safety Authority’s core responsibilities cover three areas: (1) risk assessment and research—to identify and analyze potential threats to the safety of food and consumer products; (2) enforcement—to ensure compliance with legislation for meat, food, and consumer products, which may include nonfood items; and (3) risk communication—to provide information concerning risk and risk reduction, based on accurate and reliable data. The agency’s enforcement responsibilities include food, animal health, and animal welfare inspections. Senior food safety officials stated that the Netherlands’ consolidation efforts are not complete. According to officials, the two inspection offices will be merged into one by 2006. This single inspection office will consist of inspectors responsible for inspecting several types of food products. The Food and Consumer Product Safety Authority has begun training current inspectors in anticipation of this merger. According to an agency document, the Food and Consumer Product Safety Authority derives its responsibilities from various sources, including the Food and Consumer Product Safety Authority Organization Decree, dated July 10, 2002. Officials stated that no major legal changes or new laws were needed to move the two inspection offices, the RVV and the KvW, within the Netherlands’ food safety system. Only minor revisions in some laws, such as changing the name of the responsible organization, were necessary. Officials in the Netherlands faced three challenges in changing the country’s food safety system. First, the government had to decide what responsibilities and authorities the new food safety agency would have. Second, as discussed above, the government had to decide which ministry the new food safety agency would be placed in.
The third challenge was an increase in employee attrition. For example, an official stated that attrition increased when the Food and Consumer Product Safety Authority was moved from the Ministry of Public Health, Welfare and Sports to the Ministry of Agriculture, Nature and Food Quality. According to officials, in 2004, the Food and Consumer Product Safety Authority’s budget was 188 million euros (about $232 million U.S.), and the agency’s workforce consisted of about 2,700 full-time equivalents. Officials also told us that the workforce would decrease to about 1,800 full-time equivalents by January 2006. Among the factors causing this reduction are the partial privatization of meat inspections and the reorganization and reduction of administrative and management personnel. Representatives of the fruit and vegetable, dairy, and livestock and meat industries all stated that their operations were not affected by the consolidation of the Netherlands’ food safety system. However, they all stated that the change was beneficial for consumers in that it clarified that the Food and Consumer Product Safety Authority was the agency responsible for food safety functions. The Food and Consumer Product Safety Authority performed a study of Dutch consumers’ confidence in the safety of food in 2002 and 2003. The study results show that consumers in both years had high confidence in food safety. In addition, one industry representative explained that as a result of moving the two inspection offices into a single agency, the two offices now have common goals. According to officials, the New Zealand Food Safety Authority (NZFSA) was established in July 2002 to improve the effectiveness of New Zealand’s food safety system by coordinating and harmonizing food safety efforts.
Specifically, New Zealand wanted to address inconsistencies between the methods used in the Ministry of Agriculture and Forestry’s export food safety program and the Ministry of Health’s domestic food safety program. Before the consolidation, the Ministry of Agriculture and Forestry had food safety responsibilities for agricultural production, meat and dairy processing, food exports, and registration of agricultural compounds and veterinary medicines. The Ministry of Health was responsible for addressing health issues, as well as ensuring the safety of food sold on the domestic market, including imported food. According to officials, to address inconsistencies between the two ministries’ food programs, New Zealand's government consolidated food safety responsibilities of the two ministries into one semi-autonomous body attached to the Ministry of Agriculture and Forestry. NZFSA is now New Zealand’s controlling authority for domestic food safety and imports and exports of food and food-related products. It is responsible for administering legislation covering food for sale on the domestic market; primary processing of animal products and official assurances related to their export; exports of plant products; food imports; and the regulation of agricultural compounds, such as pesticides and fertilizers, as well as veterinary medicines. NZFSA has farm-to-table responsibilities—from primary production through processing to retailers, importing, and exporting, as well as responsibility for consumer education. According to officials, the export program’s purposes are to maintain and increase exports while providing assurances of food safety and keeping compliance costs under control. NZFSA’s organization includes a verification agency, which audits animal product facilities to verify that exporters are following agreed processes. According to officials, about 280 of NZFSA’s approximately 480 employees are in the verification agency. 
In addition, New Zealand and Australia share a trans-Tasman independent agency established under Australian law, the Food Standards Australia New Zealand, that develops food standards for composition, labeling, and contaminants that apply to all foods produced or imported for sale in New Zealand. Officials stated that existing food safety legislation required only very minor modification to create the New Zealand Food Safety Authority and authorize it to regulate food safety. However, officials stated that the total domestic food regulatory program is currently under review, and it was expected that quite extensive change would be needed as an outcome of this review. Legislative change is expected in late fiscal year 2005-2006. According to officials, adjustment to a new organizational culture was somewhat challenging for some employees. They said some employees from the larger organizations, particularly employees from the Ministry of Health, had difficulty assimilating into the culture of the new agency. Approximately 100 employees moved from the Ministry of Agriculture and Forestry, and 12 staff moved from the Ministry of Health into the new food safety agency. A second challenge for officials was deciding where within the government the agency would be located. NZFSA was established as a semi-autonomous body attached to the Ministry of Agriculture and Forestry. According to officials, its semi-autonomous status is intended to provide a level of separation from producers sought by the New Zealand public. In addition, the government had to decide whether to move certain food-related responsibilities to the new agency. For example, responsibility for human nutrition was kept at the Ministry of Health. According to officials, NZFSA’s budget for the fiscal year that ended June 30, 2004, was approximately $78 million New Zealand (about $53 million U.S.). 
A portion of NZFSA’s spending is financed by user fees assessed on industry for a range of regulator-provided services, including export certification, export audit arrangements, and market access efforts. Officials stated that NZFSA had approximately 480 employees in 2004. According to a consumer organization representative, before the creation of NZFSA, consumers were dissatisfied with the low priority both ministries placed on food safety. According to this representative, consumer organizations advocated changes in the food safety system, including the creation of a single agency dedicated to food safety. In 2003, about one year after its creation, NZFSA commissioned a study conducted by an independent research organization to provide benchmark information on food safety issues among New Zealand’s general public. The study revealed that a majority of respondents considered food safety standards to be improving, although concerns remain about specific foods, such as chicken; food outlets; and other food-related issues, including salmonella. Only one-third of the survey’s respondents stated that they were confident in the level of monitoring and enforcement of food safety standards. Despite these concerns, officials of a consumer organization stated that the creation of NZFSA was a very positive step that was strongly supported by consumers, and that the agency was too new for consumer confidence levels to have significantly increased at the time of the survey. An official representing a food industry organization in New Zealand stated that the organization, along with others, had advocated the establishment of a single food safety agency for years. The official stated the previous system was piecemeal and inefficient, due to coordination problems associated with two ministries having food safety responsibilities and neither ministry placing a high priority on food safety. 
As a result of the establishment of NZFSA, the industry is more confident in how the nation handles food safety. One official stated that as a result of the consolidation, the use of available resources for food safety activities is more efficient because food safety resources are located in one agency instead of fragmented between two ministries. In addition, the official stated that consumer confidence levels have improved due to an increase in the government’s responsiveness to food safety crises. According to the official, NZFSA has a responsive network that quickly delivers information to notify the public of food safety issues. Finally, the official stated that NZFSA has significantly improved transparency and remains committed to ongoing discussions with its many stakeholder groups. For example, in responding to reports of increased iodine levels in children, NZFSA began discussions immediately with endocrinologists, other doctors, and with food industry representatives to address the issue. In 1999, the Queen, by and with the consent of Parliament, enacted legislation to establish the independent Food Standards Agency, which went into effect on April 1, 2000. Officials stated that the United Kingdom consolidated its food safety system due to a loss of public confidence in food safety, which largely resulted from the government’s perceived mishandling of BSE. By early 1999, the human form of BSE, variant Creutzfeldt-Jakob disease, had caused 35 deaths. It was widely perceived that the fragmented and decentralized food safety system allowed this outbreak to occur. According to a consumer organization representative, consumers believed that the Ministry of Agriculture, Fisheries, and Food—which had dual responsibilities to promote the agricultural and food industry as well as to regulate food safety—favored industry over consumers in making decisions related to food safety. 
Before the reorganization of the United Kingdom’s food safety system in 2000, food safety responsibilities were divided among several central government departments, such as the Ministry of Agriculture, Fisheries, and Food and the Department of Health, as well as local authorities. The Meat Hygiene Service, a subunit of the Ministry of Agriculture, Fisheries, and Food, was responsible for meat inspections, including enforcing hygiene in slaughterhouses. Other food inspections, conducted by local authorities, received no oversight from the central government. To address public concerns, Parliament passed the Food Standards Act of 1999 to establish the independent Food Standards Agency (FSA) as the country’s lead food safety agency. Officials stated that the core groups of employees that started with FSA were from the Ministry of Agriculture, Fisheries, and Food and the Department of Health. The Meat Hygiene Service was moved out of the Ministry of Agriculture, Fisheries, and Food and placed within FSA. In addition, FSA was granted audit authority over local enforcement. According to officials, FSA is responsible for scientific risk assessments, risk management, standard setting, education, and public outreach. In addition, its subunit, the Meat Hygiene Service, is responsible for meat inspections. For other foods, FSA forms inspection policy and audits local inspection authorities. Fruit, crops, and animal feed are also within its jurisdiction. FSA has no agricultural or food promotion responsibilities. FSA has the powers of an agency in a ministry, but is not part of a ministry. However, according to officials, the agency is held accountable to the Westminster Parliament and the devolved administrations in Scotland, Wales, and Northern Ireland through Health Ministers. An independent Board, consisting of a Chairman, a Deputy Chair, and up to 12 other members appointed to act collectively in the public interest, manages FSA.
The Board’s Chairman, who is appointed by the Secretary of State for Health; Scottish Ministers; the National Assembly for Wales; and the Department of Health, Social Services and Public Safety in Northern Ireland, determines food policy and holds discussions on policy issues in public meetings. The Food Standards Act of 1999 established FSA. It classifies FSA as an independent nonministerial government department and defines the agency’s functions and powers, including its function to monitor and audit the performance of local authorities and, where necessary, to exercise reserve powers over them. The United Kingdom’s main challenge in consolidating was deciding which responsibilities to place in the new food safety agency. The government had to decide whether to (1) separate or combine food safety and nutrition, (2) include the Meat Hygiene Service within the new agency, and (3) make nonmeat inspections a responsibility of the new agency or retain that authority with the local governments. Decisions on these issues were made after several debates in Parliament and considerable discussion among government officials and stakeholders from the food industry and consumer organizations. An additional challenge cited by FSA officials was to avoid duplication of efforts during the establishment of FSA and the termination of the Ministry of Agriculture, Fisheries, and Food. To address this challenge, a joint interim group was created to help reduce such duplication. According to officials, FSA’s annual budget is approximately 130 million pounds sterling (about $220 million U.S.); most of that amount is allocated for meat inspections. The food industry pays FSA about 30 million pounds sterling (about $51 million U.S.) annually in user fees for inspections. Officials stated that FSA’s workforce consists of approximately 3,000 employees.
A consumer stakeholder stated that the establishment of FSA was an improvement to the food safety system because the agency has made the system more open and transparent than it was before the consolidation. Surveys of consumer attitudes on particular areas of the food safety system have been conducted, but no survey has been conducted to measure the confidence level of consumers for the entire food safety system. For example, this stakeholder stated that surveys conducted by a consumer association concluded that meat is still a concern for consumers, but the association has not conducted a survey to determine confidence levels over the entire food chain. The same consumer stakeholder also stated that FSA has increased public education about food safety. Industry stakeholders agreed that the establishment of a single, independent food safety agency has increased consumer confidence. A stakeholder stated that the most significant result of the consolidation was a shift from an industry focus to a consumer focus on food safety matters. Stakeholders also said transparency regarding the government’s oversight of food safety matters has greatly increased. In addition, one stakeholder noted that the consolidation resulted in increased accountability within the food safety system. However, industry stakeholders cited dissatisfaction with the new agency’s reporting on the testing of food products. One stakeholder stated that FSA collects product samples, tests them, and reports results without consulting companies. Another stated that the agency comments on food product studies before they are actually completed. In addition to those named above, major contributors to this report were Lawrence J. Dyckman and Kelli Ann Walther. Nancy Crothers, Barbara El Osta, Michele Fejfar, and Amy Webbink also made key contributions to this report.
The safety and quality of the U.S. food supply are governed by a complex system that is administered by 15 agencies. The U.S. Department of Agriculture (USDA) and the Food and Drug Administration (FDA), within the Department of Health and Human Services (HHS), have primary responsibility for food safety. Many legislative proposals have been made to consolidate the U.S. food safety system, but to date none has been enacted. Several countries have taken steps to streamline and consolidate their food safety systems. In 1999, we reported on the initial experiences of four of these countries--Canada, Denmark, Ireland, and the United Kingdom. Since then, additional countries, including Germany, the Netherlands, and New Zealand, have undertaken consolidations. This report describes the approaches and challenges these countries faced in consolidating food safety functions, including the benefits and costs cited by government officials and other stakeholders. In commenting on a draft of this report, HHS and USDA said that the countries' consolidation experiences have limited applicability to the U.S. food safety system because the countries are much smaller than the United States. The two agencies believe that they are working together effectively to ensure the safety of the food supply. In consolidating their food safety systems, the seven countries we examined--Canada, Denmark, Germany, Ireland, the Netherlands, New Zealand, and the United Kingdom--varied in their approaches and the extent to which they consolidated. However, the countries' approaches were similar in one respect--each established a single agency to lead food safety management or enforcement of food safety legislation. These countries had two primary reasons for consolidating their food safety systems--public concern about the safety of the food supply and the need to improve program effectiveness and efficiency. 
Countries faced challenges in (1) deciding whether to place the agency within the existing health or agriculture ministry or establish it as a stand-alone agency while also determining what responsibilities the new agency would have and (2) helping employees adjust to the new agency's culture and support its priorities. Although none of the countries has analyzed the results of its consolidation, government officials consistently stated that the net effect of their country's consolidation has been or will likely be beneficial. Officials in most countries stated their new food safety agencies incurred consolidation start-up costs. However, in each country, government officials believe that consolidation costs have been or will likely be exceeded by the benefits. These officials and food industry and consumer stakeholders cited significant qualitative improvements in the effectiveness or efficiency of their food safety systems. These improvements include less overlap in inspections, greater clarity in responsibilities, and more consistent or timely enforcement of food safety laws and regulations. In addition to these qualitative benefits, officials from three countries, Canada, Denmark, and the Netherlands, identified areas where they believe financial savings may be achieved as a result of consolidation. For example, in the Netherlands officials said that reduced duplication in food safety inspections would likely result in decreased food safety spending and that they anticipate savings from an expected 25 percent reduction in administrative and management personnel. Although the seven countries we reviewed are much smaller than the United States, they are also high-income countries where consumers have very high expectations for food safety. Consequently, we believe that the countries' experiences in consolidating food safety systems can offer useful information to U.S. policymakers.
The F-35 Lightning II program, also known as the Joint Strike Fighter, is a joint, multinational acquisition intended to develop and field a family of next-generation strike fighter aircraft for the United States Air Force, Navy, and Marine Corps, and eight international partners. According to DOD, there will be three variants of the F-35: 1. The conventional takeoff and landing (CTOL) variant, designated the F-35A, will be a multirole, stealthy strike aircraft replacement for the Air Force’s F-16 Falcon and the A-10 Thunderbolt II aircraft, and will complement the F-22A Raptor (see fig. 1). 2. The short takeoff and vertical landing (STOVL) variant, designated the F-35B, will be a multirole, stealthy strike fighter that will replace the Marine Corps’ F/A-18C/D Hornet and AV-8B Harrier aircraft. 3. The carrier-suitable variant (CV), designated the F-35C, will provide the Navy a multirole, stealthy strike aircraft to complement the F/A-18 E/F Super Hornet. The Marine Corps will also field a limited number of F-35C CVs. Lockheed Martin is the primary aircraft contractor and Pratt & Whitney is the engine contractor. Although the acquisition costs of the F-35 program are about $400 billion, the most significant cost driver for the program is sustainment. The F-35 O&S costs are those incurred from the initial system deployment through the end of system operations, and include all costs of operating, maintaining, and supporting the fielded system. The F-35 program office develops an annual estimate for the O&S costs of maintaining and supporting the F-35 for 56 years. In its most recent estimate (2014), the program office estimates that it will cost about $891 billion to sustain the entire F-35 fleet over its life cycle. The Autonomic Logistics Information System (ALIS) is a system of systems that serves as the primary logistics tool to support F-35 operations, mission planning, and sustainment. 
ALIS helps maintainers manage tasks including aircraft health and diagnostics, supply-chain management, and necessary maintenance events. Lockheed Martin is the prime contractor for ALIS and is responsible for developing and managing the capabilities of the system, as well as developing training materials for ALIS users. According to DOD, ALIS will be co-located with F-35 aircraft both at U.S. military installations and in theater to support missions and assist with maintenance and resource allocation. ALIS consists of the overarching system, the applications housed within it, and some of the network infrastructure required to provide global integrated and autonomic support of the F-35 fleet. It comprises both hardware and software. The hardware consists of three main components: The Autonomic Logistics Operating Unit (ALOU): The ALOU is the computer server through which all F-35 data ultimately flow; it supports communications with, and between, government and contractor systems. The Central Point of Entry (CPE): The CPE is configured to provide software and data distribution for the entire F-35 fleet in the United States, enables interoperability with national (government) systems at the country level, and enables ALIS data connectivity between bases. Each international partner operating F-35 aircraft is expected to have its own CPE at other locations. The Standard Operating Unit (SOU): SOUs provide all ALIS capabilities to support flying, maintenance, and training. They also provide access to applications to operate and sustain the aircraft. As of February 2016, there was one operational ALOU and one CPE within the United States. Each F-35 operating and testing site in the United States has a varying number of SOUs depending on the site’s number of aircraft and squadrons, and there are two versions: SOU V1 and SOU V2. The main difference between the two SOUs is that SOU V2 was designed to better meet participants’ deployability requirements. 
While SOU V1 was housed in two 1,600-pound server racks, SOU V2 was designed to have its components fit into transit cases that are two-man portable, each weighing approximately 200 pounds. DOD is planning to have at least one SOU accompany each F-35 squadron. The services organize their squadrons differently, but squadron sizes generally range from 10 to 24 aircraft. The F-35 Operational Requirements Document, which originated in March 2000 and contains the performance and operational parameters for the concept of the F-35, calls for an incremental development of F-35 capabilities by aircraft software blocks and phased software releases during the system development and demonstration phase. This development is concurrent with the production and fielding of small volumes of aircraft during low-rate initial production. ALIS’s software development is anticipated to be completed after two more of its versions are released, the first of which will support the Air Force declaring initial operational capability in summer 2016. The current fielding status of ALIS is illustrated in figure 2, which also includes the dates for when the next software upgrade—ALIS Version 2.0.2—is to be introduced at each F-35 site. ALIS 3.0, which is the version expected to meet the requirements defined in the Operational Requirements Document, is expected to have completed its development and commence testing by October 2017 in line with the end of system development and demonstration. The release is to be fielded under the low-rate initial production contract in early 2018 and is intended to be fully functional by 2019. DOD intends for ALIS software capabilities to include operational planning, maintenance, supply-chain management, customer-support services, training, tech data, system security, and external interfaces. 
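As a rough illustration of the three-tier data path described above (SOU to the single national CPE to the lone ALOU), the end-to-end availability of a serial chain is the product of its nodes' availabilities, which is why a single shared node can dominate fleet-wide risk. The sketch below is illustrative only; the per-node availability figures are hypothetical, not program data:

```python
# Illustrative only: end-to-end availability of the ALIS data path
# (SOU -> CPE -> ALOU) modeled as a serial chain. The availability
# figures below are hypothetical, not actual program values.

def serial_availability(node_availabilities):
    """A serial chain is up only if every node in it is up."""
    result = 1.0
    for a in node_availabilities:
        result *= a
    return result

def parallel_availability(node_availabilities):
    """Redundant (parallel) nodes fail only if all of them fail."""
    failure = 1.0
    for a in node_availabilities:
        failure *= (1.0 - a)
    return 1.0 - failure

# Hypothetical per-node availabilities.
sou, cpe, alou = 0.99, 0.98, 0.98

single_path = serial_availability([sou, cpe, alou])

# Effect of adding a second (redundant) ALOU to the chain.
redundant_alou = parallel_availability([alou, alou])
with_redundancy = serial_availability([sou, cpe, redundant_alou])

print(f"single path:   {single_path:.4f}")
print(f"with 2nd ALOU: {with_redundancy:.4f}")
```

Because parallel nodes fail only when all of them fail, adding a redundant ALOU raises the chain's end-to-end availability; this is the logic behind the additional-server options discussed later in this report.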
As stated earlier, ALIS is a system of systems—it comprises multiple software applications that perform specific functions for F-35 maintainers, pilots, supply personnel, and data analysts. These separate applications must be integrated within the system and must interconnect with preexisting “legacy” information systems that are used by the services for other weapons system platforms. ALIS applications are being developed by the contractor incrementally, with some applications currently more functional than others. The F-35 program’s original requirements state that it must include a fully functional and effective logistics system to ensure operational readiness and availability. As of December, average aircraft availability for the year—the percentage of F-35 aircraft capable of performing missions at a given time—was around 50 percent, whereas DOD had expected it to be 60 percent. Figure 3 includes major applications within ALIS, their intended purpose, and the F-35 program office’s assessment of the functionality status of each application. As figure 3 shows, according to DOD, most of the applications within ALIS currently have functionality issues. ALIS was originally scheduled to be completed for testing in 2010, as shown in figure 4. However, that same year the program triggered a Nunn-McCurdy breach, when unit cost growth exceeded critical thresholds. As a result, the F-35 program (including ALIS) was rebaselined in 2012, establishing a new acquisition baseline, and Milestone B was recertified after the breach. The F-35 program is currently approaching the end of the system development and demonstration phase, during which ALIS has been developed, built, and tested to verify that operational requirements are being met. Concurrently, the program is in low-rate initial production. In addition, DOD and the services are in the process of declaring major milestones. 
Specifically, the Marine Corps declared initial operational capability in July 2015. The Air Force is scheduled to declare initial operational capability in August 2016, and the Navy is scheduled to do so in 2018. By 2018, after all services have declared their initial operational capability, the program, including ALIS, is expected to reach full warfighting capability. The F-35 program plans to begin full-rate production in 2019, and according to DOD officials, any additional ALIS modifications or upgrades adding new capabilities will be part of the program’s follow-on modernization. DOD generally requires programs to have established sustainment and support systems, like ALIS, for the F-35 by full-rate production. According to the F-35 Operational Requirements Document, by full-rate production, all variants must be able to deploy rapidly, sustain high mission reliability, and sustain a high sortie-generation rate. DOD is aware of risks that could affect ALIS but does not have a plan to prioritize and address them in a holistic manner to ensure that ALIS is fully functional as the F-35 program approaches key milestones—including Air Force and Navy initial operational capability declarations in 2016 and 2018, respectively, and the start of the program’s full-rate production in 2019. During our focus groups, ALIS users identified some beneficial aspects of the system. However, they also identified a variety of concerns, which may result in operational and schedule risks. The F-35 program office is aware of these issues, but is currently addressing them on a case-by-case basis. ALIS users (pilots, maintainers, administrators, and trainers) in the focus groups we held at five F-35 operational or testing sites identified some benefits of the system. For example, maintainers and administrators at three sites stated that they have seen ALIS’s capabilities improve over time as the system’s software has been upgraded. 
In addition, pilots and maintainers at three sites expressed confidence in ALIS’s future capabilities as the system continues to improve. Maintainers and trainers at three sites also found ALIS’s design—which incorporates multiple functions within one single system—helpful for efficiently executing tasks, when previously they had to work in multiple, separate systems. For example, trainers at Eglin Air Force Base stated that, with legacy aircraft, there are separate systems for tasks such as recording maintenance work and ordering parts. With the F-35, ALIS houses applications for these tasks within its system, which can be more convenient to users. In addition, maintainers at two sites stated that, following a recent software upgrade, the system processes information faster, which improves maintenance data input capabilities. Maintainers at three sites also told us that ALIS performs some tasks better than legacy systems. Specifically, maintainers at Eglin Air Force Base and Nellis Air Force Base explained that because ALIS stores information electronically, it eliminates the need for paper-based manuals commonly used on legacy aircraft. For instance, maintainers at Eglin Air Force Base noted that the technical data that they use to assist in aircraft repair is now stored electronically within ALIS and can be updated as necessary, whereas before this information was contained in multiple paper-based manuals, which were difficult to efficiently access and keep up-to-date. Most users we spoke with recognized that ALIS is a system in development and stated that its immaturity was to be expected at this stage in the program. However, during our focus-group sessions, ALIS users also identified several issues, which, if not addressed, could result in operational and schedule risks. Table 1 summarizes the risks reported by the majority of participants in our 17 focus groups. DOD is aware of these risks and, as discussed later, is addressing risks on a case-by-case basis. 
The risks identified by ALIS users during our focus group sessions are discussed in more detail below. ALIS may not be able to deploy: Pilots, maintainers, and administrators at three of the five sites we visited are concerned about ALIS’s ability to deploy and function in forward locations. For example, users are concerned about the large server size and connectivity requirements, and whether the system’s infrastructure can maintain power and withstand a high-temperature environment. The Marine Corps, which often deploys to austere locations, did not conduct deployability tests prior to declaring initial operational capability in July 2015. ALIS’s original requirements did not include specific deployability requirements, so the system’s original hardware design consisted of large, heavy racks of servers. DOD officials stated that the Marine Corps subsequently added specific requirements for a deployable system to meet its expeditionary mission needs. Although the more deployable version of ALIS was fielded in the summer of 2015, DOD has yet to complete comprehensive deployability testing. In December 2015, the Marine Corps participated in an exercise at the Strategic Expeditionary Landing Field near the Marine Corps Air Ground Combat Center in California (also known as Twentynine Palms) that included a short-range, domestic deployability test of the system. According to DOD officials, the results were positive in that the Marine Corps transported the system to Twentynine Palms from its Yuma base and set it up within 2 hours; however, this test did not include long-range, overseas, ship-based, or combat scenarios. Air Force and Navy officials stated that they plan to conduct deployability tests prior to declaring initial operational capability over the next 2 years; however, these officials expressed concerns over the ability of ALIS to function in austere environments and in split-squadron situations that would require multiple deployable ALIS servers. 
ALIS does not have redundancy in its infrastructure: ALIS users at three of the five sites we visited are concerned that a failure in the system’s current infrastructure could degrade the system and ground the fleet. Currently, ALIS information, including data from all U.S. F-35 sites, flows from the Standard Operating Units (SOU) to a single national Central Point of Entry, and then to the lone Autonomic Logistics Operating Unit (ALOU). This data flow process has no backup system for continuity of operations if either of these servers were to fail. Specifically, squadron leadership at two sites expressed concern about how the loss of electricity due to weather or other damaging situations could adversely affect fleet operations if either the Central Point of Entry or the ALOU went offline. DOD officials told us that they recognize this issue and, for short-term losses of connection, ALIS users are able to work offline. Program officials also said that they are in the early stages of trying to procure up to two additional ALOUs and possibly relocating the Central Point of Entry to another F-35 site. However, as of January 2016, DOD officials had just begun to explore this option and have not allocated any resources to support the idea. ALIS does not effectively communicate with legacy aircraft systems: Maintainers and pilots at three of the five sites we visited were concerned that ALIS has limited interoperability with legacy aircraft systems. For example, while ALIS was designed to house multiple applications within it, the Air Force, Navy, and Marine Corps will continue to use legacy systems to operate and maintain other weapon systems. The ability to share information between ALIS and these legacy systems is vital due to the way the services operate. 
In particular, Marine Corps officials noted that because the service operates with squadrons that use data from the Navy’s legacy system and ALIS, it would be beneficial for those two systems to communicate with one another. DOD officials stated that the services are responsible for developing the software interface that can take information from ALIS and translate it so that it can be communicated to the legacy systems. However, due to the lack of interoperability between ALIS and legacy systems, users are being forced to track data outside of ALIS, which, according to maintainers, is inefficient and could potentially result in data not being populated back into the system. Action Request process is inefficient and problematic: Maintainers at four of the five sites we visited told us that the current Action Request (AR) process does not allow for the effective reporting and resolution of F-35 aircraft and ALIS issues. Personnel use an application within ALIS to submit an AR about any F-35 problems, including those with ALIS itself, to the contractor for triaging and ultimate resolution. However, these maintainers explained that the process is too heavily controlled by the contractor and that users lack visibility of ARs submitted from other F-35 sites or squadrons. Consequently, ALIS users have to wait for the contractor to conclude that multiple sites are experiencing a similar issue, instead of being able to identify common issues across sites and obtain timely solutions. In addition, maintainers at three sites and administrators at one site reported that recent ALIS software upgrades have resulted in the contractor not receiving ARs, with users unaware of this problem until they followed up on the ARs’ status. DOD officials told us they are aware of the issues surrounding the AR process and are collecting information on the types of ARs submitted from all sites. 
They stated that the largest types of ARs are related to data quality and integration management, and that this has been the case for the last 12 months. ALIS has data accuracy and accessibility issues: ALIS users at all five sites we visited are concerned with data accuracy issues within the system, including missing or inaccurate data and inaccessibility of raw data within ALIS. Specifically, maintainers frequently have to resolve error messages for parts linked to electronic equipment logs that contain missing or inaccurate data when they try to fix a problem on the aircraft. Maintainers at two sites stated that recurring issues with electronic equipment logs have caused them to spend significant time resolving these issues instead of tending to other aircraft issues. Additionally, they stated that parts requiring scheduled maintenance are not being tracked or updated accurately in ALIS. Program officials stated that these are life-limited parts that must be replaced by a certain time frame to avoid safety risks to the aircraft. To mitigate this issue, maintainers are currently logging this information outside of ALIS. Maintainers at Eglin Air Force Base said that they are spending 13 hours on average every day to track this information. Finally, maintainers at three sites stated they would like the ability to access raw data in ALIS to produce service-related reports. DOD officials stated that ALIS was designed to be used across services and, as such, reporting tools are not necessarily service-specific. However, ALIS users that operate the system daily continue to have issues with accessing the data required to keep aircraft mission-capable and generating service-specific reports for their squadrons. 
Off-Board Mission Support and Training Management System applications are immature: Pilots and maintainers across all five sites we visited are concerned with the maturity and functionality of ALIS’s Off-Board Mission Support (OMS) and Training Management System (TMS) applications. OMS is a key application designed for pilots to conduct essential mission planning and debriefing. Specifically, pilots at all five sites thought that the OMS application was poorly designed, cumbersome, and not user-friendly, particularly in providing the information they need to conduct their missions. Due to OMS’s current lack of functionality, pilots at two locations stated that they are forced to track vital mission-planning information, which is expected to reside within ALIS, outside of the system. According to the Office of the Director of Test and Evaluation, OMS’s lack of functionality could have an effect on combat missions and operational tempo. TMS is designed for pilots and maintainers to track training qualifications and assign personnel to carry out specific tasks based on their qualifications. However, pilots and maintainers at four sites told us that TMS is immature and does not function as intended. Maintainers at one site explained that TMS is supposed to keep track of maintainers’ and pilots’ qualifications and, based on that information, assign proper permission levels and controls to a qualified maintainer to repair a problem on the aircraft. Instead, this is currently being tracked outside of ALIS, which is inefficient and could potentially result in this information not being populated back into the system. Security risks exist: ALIS users cited concerns related to the system’s security. 
For example, pilots at one location explained that compact discs have to be recorded to move classified information from the aircraft into the classified network, rather than the system transmitting the information automatically—a practice that they said poses security risks. In addition, the ALOU and Central Point of Entry, as discussed earlier, are potential single points of failure and could be a security risk. A 2012 DOD Inspector General’s report on ALIS also highlighted some security issues, including security accreditation and testing of hardware. Since that report, the F-35 program office has formed a team and developed a process to test, validate, and continuously monitor the security of ALIS applications and their interfaces with both military information networks and the contractor’s ALIS architecture. DOD is aware of the risks identified by ALIS users, as well as others, and is addressing some on a case-by-case basis. However, DOD officials acknowledged that the department does not have a plan that would prioritize and address key risks in a holistic manner as program milestones approach. In recent years, the F-35 program has emphasized the criticality of ALIS to the success of the F-35. In October 2015, the F-35 Program Executive Officer stated that ALIS is a crucial component of the F-35 and should be treated as its own weapon system. Additionally, the Program Executive Officer stated that the program office changed its organizational structure to provide more senior leadership oversight of ALIS. Although more focus has been given to ALIS, according to DOD officials, the F-35 Program Executive Officer has reiterated that the focus is on completing the current development and testing of ALIS within the already established time frames, and with the previously planned funding. As a result, this has created an environment of competing priorities and limited resources for the entire program in the near term, including ALIS. 
ALIS users have identified key risks to ALIS’s functionality that we highlight in this report, and program office officials acknowledge that others may exist as well. The F-35 program office has taken some actions in an attempt to address smaller ALIS functionality issues between major software upgrades, and is considering the procurement of additional ALIS servers to add redundancy to the system. However, the current approach does not prioritize issues in a way that clearly designates which issues must be addressed within the time left in the system development and demonstration phase, and which issues could be addressed later as part of follow-on modernization. GAO guidance and DOD best practices emphasize that, prior to meeting key milestones, a plan to address specific risks that may be associated with major weapon acquisitions should be developed. Specifically, GAO’s Schedule Assessment Guide states that a high-quality and reliable acquisition schedule includes the need to plan for and address major risks prior to meeting milestone dates. DOD’s System Engineering Guide for System of Systems includes risk management as a key aspect of system engineering. It helps to ensure that program costs, schedules, and performance objectives are achieved at every stage in the life cycle and helps to communicate to all stakeholders the process for uncovering, determining the scope of, and managing program uncertainties. Although key milestones—such as Air Force and Navy initial operational capability declarations and the start of full-rate production—are quickly approaching, DOD does not have a plan that prioritizes ALIS risks to ensure that the most important are expediently addressed and that ALIS is fully functional by these milestones. 
Furthermore, by continuing to address issues on a case-by-case basis, DOD risks that its solution to one issue could exacerbate another—for example, in addressing a security risk in isolation, DOD could inadvertently create further risks to data accessibility. According to F-35 program officials, a functional ALIS is key to the operational capability of the aircraft and the day-to-day ability to sustain the aircraft. Moreover, the department expects to significantly increase aircraft production within the next 5 years, so the number of aircraft that must be maintained and kept ready for flight will soon grow. By continuing to respond to issues on a case-by-case basis rather than in a holistic manner, there is no guarantee that DOD will address the highest risks, and as a result, DOD may encounter further schedule and development delays, including system upgrades, which could affect operations and potentially lead to cost increases. DOD has estimated total ALIS costs to be approximately $16.7 billion over its 56-year life cycle. However, the estimate is not fully credible since DOD has not performed uncertainty and sensitivity analyses as part of its cost-estimating process. Moreover, while DOD has updated its estimate to be reflective of some program changes, it is also not fully accurate since DOD did not use historical cost data—both actual data from ALIS and data from comparable programs—when developing its ALIS estimate. Finally, other costs such as service customizations of ALIS may require additional future resources, and manual workarounds to the system currently require additional labor resources. DOD estimates that total ALIS costs are about $16.7 billion—about $562 million to develop the system, about $1.1 billion to procure hardware and spare parts, and about $15.1 billion to sustain it in then-year dollars. 
DOD had expended approximately $505 million to develop ALIS as of December 2015, and the department estimates that continued development will cost an additional $57 million through 2017, which is when DOD expects ALIS 3.0 will be released for testing. In addition to the purchase cost for ALIS, DOD estimates that ALIS will cost about $15.1 billion to sustain over a 56-year life cycle. Program officials told us that ALIS development will be completed within the planned resources, but that the system will require follow-on modernization and that the program office is currently planning for those additional costs. Table 2 provides more detail on ALIS cost elements. We found that the program office’s estimate for ALIS sustainment costs minimally met the best practices for a credible cost estimate largely because it does not include uncertainty and sensitivity analyses (see app. II for more information on our assessment of the ALIS cost estimate). Every cost estimate contains a degree of uncertainty because of the many assumptions that must be made about the future. To mitigate this uncertainty, a variety of checks and analyses can be conducted to determine the credibility of the assumptions and the estimate as a whole. According to GAO’s Cost Estimating and Assessment Guide, cost estimates should include uncertainty analyses to determine the level of uncertainty associated with the estimate in order to be credible. A quantitative uncertainty analysis can provide a broad overall assessment of the risk in the cost estimate. In addition, credible cost estimates should include sensitivity analyses to examine how changes to individual assumptions and inputs affect the estimate as a whole. Although ALIS is a multibillion dollar system, it is not a formally designated stand-alone weapons system program; therefore, DOD is not required to perform a separate estimate of the system’s projected costs. 
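To make concrete what the uncertainty and sensitivity analyses described above involve, the sketch below runs a minimal Monte Carlo uncertainty analysis and a one-way sensitivity check on a toy sustainment-cost model. Every cost element and dollar figure in it is hypothetical and is not drawn from the program office's actual estimate:

```python
# Illustrative only: a minimal Monte Carlo uncertainty analysis and
# one-way sensitivity check for a toy sustainment-cost model. All
# figures are hypothetical and are not the F-35 program's actual data.
import random

random.seed(1)

def sustainment_cost(annual_sw_maint, hw_refresh, admin_labor, years=56):
    """Toy model: total cost = yearly cost elements summed over the life cycle."""
    return (annual_sw_maint + hw_refresh + admin_labor) * years

# Uncertainty analysis: draw each input from a range rather than using
# a single point estimate, and look at the spread of total-cost outcomes.
trials = [
    sustainment_cost(
        annual_sw_maint=random.uniform(80, 140),  # $M/yr, hypothetical
        hw_refresh=random.uniform(30, 70),        # $M/yr, hypothetical
        admin_labor=random.uniform(40, 90),       # $M/yr, hypothetical
    )
    for _ in range(10_000)
]
trials.sort()
low, median, high = trials[500], trials[5000], trials[9500]
print(f"90% range: ${low:,.0f}M to ${high:,.0f}M (median ${median:,.0f}M)")

# One-way sensitivity analysis: vary a single assumption while
# holding the others at their midpoint values.
baseline = sustainment_cost(110, 50, 65)
sw_high = sustainment_cost(140, 50, 65)
print(f"software-maintenance swing: {(sw_high - baseline) / baseline:+.1%}")
```

The uncertainty analysis reports a range of likely outcomes rather than a single number, while the sensitivity check shows how much the total moves when one assumption changes; these are the kinds of outputs the GAO guide describes as making an estimate credible.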
ALIS is one cost element within the overall F-35 program, and the program office estimates that the system will constitute less than 2 percent of the $891.1 billion total sustainment costs of the program. Cost estimators told us that, despite its critical importance in operating and maintaining the entire F-35 fleet, ALIS is not considered a major cost driver of the program because it constitutes a small portion of the total estimate. Therefore, they told us, DOD's cost-estimating guidance does not require them to perform uncertainty or sensitivity analyses for ALIS. As part of the planning process for F-35 sustainment, program officials have examined the role of F-35 information systems (which include ALIS) in relation to five major value streams—maintenance, supply chain, training, management, and sustaining engineering. While program officials said this process helped map the connections and interdependence between program elements, it did not quantify, through uncertainty or sensitivity analyses, the potential effects of ALIS on sustainment costs specifically. Additionally, DOD did not perform analyses to determine how further ALIS schedule delays or functionality issues could affect other F-35 costs. In lieu of these analyses, program officials stated that they assume that if ALIS does not perform as planned, aircraft could not be flown as frequently as intended, and this lower-than-expected utilization rate would therefore decrease sustainment costs. A 2013 DOD-commissioned study on reducing F-35 costs identified areas of potential cost savings, but the implementation plan for this study also found that ALIS presents significant risk to the program. In particular, this plan found that any functionality problems or schedule slippage with ALIS will have a significant impact on costs—with downstream additional costs due to performance and schedule delays potentially reaching up to $20-100 billion. 
The different conclusions drawn by the program office and this study suggest that DOD could better understand the effects that ALIS issues could have on overall program costs. Without performing uncertainty or sensitivity analyses, DOD will not understand how variabilities in ALIS-related assumptions could affect the estimate as a whole and the potential range of costs resulting from these variabilities. We also assessed the ALIS estimate for accuracy and found that DOD partially met the standards for an accurate cost estimate. GAO's Cost Estimating and Assessment Guide states that a cost estimate should be based on historical data—both actual costs of the program and those of comparable programs—which can be used to challenge optimistic assumptions and bring more realism to a cost estimate. While we found that DOD substantially met some best practices for an accurate cost estimate by properly adjusting for inflation and not including mathematical errors, the estimate uses contractor-provided data for material costs instead of actual ALIS costs or historical cost data from analogous programs that would make the estimate more accurate. Cost-estimating officials said that they did not base their ALIS estimates on historical cost data because they believe that there are no programs analogous to ALIS. For example, there is a logistics system for the Air Force's F-22 program—also a fifth-generation aircraft—but officials stated that it is far less complex than ALIS and does not include all of ALIS's applications and intended functions. However, multiple versions of ALIS have been fielded since 2010, and using historical data on known ALIS costs, as well as analogous data from the F-22 or other programs, would make the estimate more representative of likely sustainment costs (see app. II for more information on our assessment). 
GAO's Cost Estimating and Assessment Guide also states that an estimate should be updated regularly to reflect significant changes in the program—such as when schedules or other assumptions change—so that it always reflects current status. The program office updates its estimate annually, incorporates program changes and evolving assumptions in these updates, and documents the changes from year to year. However, the program office does not update all elements of the cost estimate. For example, technology refresh accounts for approximately 25 percent of ALIS sustainment costs, and program officials were able to tell us the assumptions used to calculate these costs; however, they were unable to tell us, or identify within the estimating model, where these data came from or when the underlying assumptions were developed. Additionally, over the course of our review, program officials highlighted some recent or upcoming program changes, such as the need for additional infrastructure, that were not included in their last estimate. The program office and services are exploring ways to decrease F-35 sustainment costs. For example, the services are taking different approaches to administering and maintaining ALIS, with the Marine Corps planning to use its own personnel and the Air Force planning to use contractor support to administer the system, troubleshoot problems, and keep the system operating smoothly with software patches and other continuous improvements. Based on these service plans, the program office estimates that ALIS administration will cost more than $7 billion over the F-35 life cycle, in addition to about $1 billion for other contractor labor needs. 
Program officials said that these amounts may change based on upcoming sustainment decisions, including a potential way to decrease administrative costs by establishing regional centers that could provide this support to a number of F-35 sites rather than having contracted administrators at each site. Other program changes have the potential to increase ALIS costs and future estimates. Although DOD has included estimated costs of ALIS technology refresh and software licensing, program officials have stated that the current estimate does not sufficiently capture the full costs of follow-on modernization that will be required when upgrades or new versions of the system’s commercial off-the-shelf software are released. Because ALIS comprises multiple applications and interfaces with service networks and legacy information systems, engineers will have to reintegrate ALIS with other systems as well as the applications within ALIS in step with continuous upgrades and software improvements. Although DOD expects ALIS 3.0 to be the fully capable version that meets program requirements, officials at the program office and from all three services stressed that ALIS will not be a static system and that improvements and upgrades will not only be expected, but required, to keep ahead of technology obsolescence and evolve with emerging threat environments. Program officials stated that there is a need to build these follow-on modernization costs into the budgeting and cost-estimating processes. Program officials also stated that they may procure up to two more ALOUs for back-up and necessary redundancy and that, in addition to addressing the current risk of the single ALOU becoming a single point of failure, these additional ALOUs may facilitate greater government operation of ALIS and increase the potential for greater competition of future sustainment contracts. 
The program office bases much of its estimate on inputs from the services, such as their expected personnel needs and how they plan to operate and sustain their F-35 squadrons. The program plans to field one SOU for each F-35 squadron and another SOU at each forward operating location. Since deploying squadrons do not normally transit to the forward location together as one unit but rather in a more staggered fashion, having one SOU at the squadron's home base and another prepositioned at the forward deployed location would avoid the potential problems of sustaining a split squadron with just one SOU. Having additional SOUs positioned at forward deployed locations would likely increase procurement costs and the downstream costs of maintaining and replacing them. While the program office has been incorporating some program changes and adjusting its cost-estimating assumptions as the F-35 program grows and evolves, it is important to continue this effort in its annual estimate updates to reflect current or planned program changes such as those described above. Additionally, as the program gains more experience operating ALIS, using historical data—especially actual program costs as they become available—as the basis for its estimates can result in a more accurate and realistic picture of ALIS costs. Unless DOD's estimate is based on an assessment of the most likely costs, the estimate may not be representative of how much it will cost to sustain ALIS and may inhibit informed decision making. There are other ALIS-related costs that are not included in the estimate but that may place additional financial or manpower burdens on the services as ALIS continues to be concurrently developed and fielded. The services are responsible for bearing the costs of engineering any service-specific ALIS customizations. According to DOD officials, ALIS engineering changes must be agreed upon by all services and partners in order to become new requirements. 
The program office then communicates these requested changes to the contractor software engineers developing ALIS. However, as ALIS users become more familiar with the system and its limitations, the services may request additional changes to the design of ALIS to meet their specific airworthiness and reporting requirements, and they would incur the additional costs of meeting these needs. Some service-specific reporting requirements are being met through time-consuming workarounds employed by ALIS users to compensate for current system limitations. According to program officials, some service-specific reporting requirements are not addressed by current ALIS functionality. For example, maintainers at several sites said that they must request maintenance data entered and housed within ALIS from the contractor because they cannot access this information independently. They then use these raw data to generate reports on aircraft status using software programs outside of ALIS because the system does not have the capability to extract and process the data in the form that the services require. Program officials told us that they expect some of these workarounds to become unnecessary after upcoming system improvements. Other ALIS functionality issues that create the need for workarounds do not have scheduled solutions, so both program and service headquarters officials are examining the use of these workarounds to determine whether they are truly necessary and will require an ALIS design change or whether service expectations should be better managed. Program officials told us that these reporting workarounds are not a problem with ALIS itself, but rather an issue of the services requiring reports that the system was not designed to provide. Users at all the sites we visited, however, told us that ALIS should have the functionality to create the reports they need. 
Program officials said that there is not an extra financial cost associated with these workarounds, since they are performed by service personnel, but rather an opportunity cost—the additional time personnel spend manually analyzing data and generating reports rather than performing their primary job duties. Neither the program office nor the services track the use of ALIS workarounds across all F-35 squadrons, but squadron leadership at Eglin Air Force Base provided examples of workarounds performed at their site, including the estimated amount of time personnel must spend to overcome ALIS shortcomings. They estimate that personnel at that location alone spend approximately 150 man-hours per week on these workarounds—nearly the equivalent of four full-time personnel dedicated to manually creating reports that, according to officials, ALIS currently cannot produce. Eglin Air Force Base personnel have created and updated 14 manual products, mainly because of ALIS report limitations and because ALIS does not track information needed by senior leaders for unit-level analysis and subsequent maintenance and logistics decisions. They added that without these manual products, analysis, scheduling, and maintenance operations would degrade and mission accomplishment could be jeopardized. The use of workarounds varied across the F-35 sites we visited, and officials said that Eglin Air Force Base's examples are not necessarily representative of other sites' workarounds. They provide a sense of scale, however, for the labor burden placed on personnel because of current system immaturity and functionality issues. Personnel at F-35 sites we visited told us that the extra time they spend on ALIS workarounds detracts from the time they have to maintain proficiency in their specialties, prevents them from coaching and training their subordinates, and decreases the amount of time they have to perform collateral duties. 
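As a check on the "nearly four full-time personnel" figure above, the conversion is simple arithmetic; note that the 40-hour work week used as the divisor is our assumption for illustration, since the report gives only the man-hour total.

```python
# Eglin's reported workaround burden, converted to full-time equivalents.
# The 40-hour work week is an assumed divisor; the report states only the
# 150 man-hours-per-week figure.
WORKAROUND_HOURS_PER_WEEK = 150
HOURS_PER_PERSON_WEEK = 40  # assumed standard work week

fte = WORKAROUND_HOURS_PER_WEEK / HOURS_PER_PERSON_WEEK
print(f"{fte:.2f} full-time equivalents")  # 3.75, i.e. nearly four personnel
```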
They added that ALIS is not yet fully autonomic and will require significant additional system improvements for it to perform to their expectations. The use of these ALIS workarounds highlights the current immature state of the system and underscores the need for DOD to prioritize and address key ALIS risks, as discussed earlier in this report. As the F-35 program increases production over the next several years, sustained attention to addressing these issues and improving estimates of ALIS costs can help decision makers better direct resources and ensure that ALIS meets the needs of its users. DOD does not have a program-wide training plan for ALIS, but it has taken initial steps to address training shortfalls. According to ALIS users at all five F-35 operational and training sites where we conducted our focus groups, training for ALIS is ineffective and lacks a standardized, common curriculum for teaching users how to operate ALIS. Basic ALIS training courses are made available to all ALIS users at the F-35 Academic Training Center at Eglin Air Force Base. The training, according to ALIS users across all five sites, consists of a series of PowerPoint slides that are geared toward illustrating the conceptual nature of the system—showing how all the applications within ALIS are supposed to work—rather than how to actually operate the system as it currently exists. Furthermore, several ALIS users described the training as ineffective because it does not explicitly teach users how to operate specific ALIS applications. For example, the slideshow-based classroom training, which is optional for ALIS users, is not customized for the different users of ALIS. Pilots, maintainers, supply personnel, data analysts, and other F-35 specialists are required to use certain ALIS applications, often in different ways based on their job requirements. 
Maintainers at two of the five sites said that the reputation of the classroom training has become so bad that many new maintainers choose to skip the courses and proceed directly to on-the-job training when they begin working on the F-35. From the outset of the original ALIS release in 2010, DOD has relied on the contractor to manage all ALIS training. ALIS training is currently heavily dependent on learning on the job, which consists of users learning how to operate ALIS from colleagues and through trial and error. According to DOD officials, this is not an uncommon practice for new systems within the department; however, those officials agree that training typically begins with a common classroom curriculum and is supplemented with on-the-job training. Almost every ALIS user in our focus groups noted that they do not learn how to operate any ALIS applications until on-the-job training begins on the flight line. Specifically, the classroom training does not afford an opportunity to practice on "sandbox" or "ghost" systems that would simulate how ALIS is used on a daily basis. Instead, the practice comes in a live operational environment, where basic ALIS functions and practices are taught by supervisors or other users within the respective squadron. According to ALIS users at all five sites, this practice has led to users learning inconsistent methods and shortcuts for maneuvering through the different ALIS applications. In most cases, these practices differ between squadrons and services; however, some maintainers also highlighted cases where users were learning different practices within the same squadron. ALIS users reported that they have created different workarounds to overcome ALIS functionality issues, but they also cited instances of not using ALIS as intended because the system is unwieldy and time-consuming. 
ALIS users said they are learning to operate the system in different ways and then perpetuating these methods, creating a situation in which ALIS may not be operated in the most effective, efficient, or up-to-date way across all F-35 sites. Furthermore, they are learning to use the system in a live operational environment, running the risk of making errors that could ultimately affect aircraft availability. Program officials acknowledged the training shortfalls identified in the focus-group sessions. In response, the program office has taken some initial steps to address these issues. Specifically, the program office, in concert with the contractor, has developed Mobile Training Teams to offer a way to train ALIS users outside of the F-35 Academic Training Center and at their specific F-35 sites. These teams are deployed to F-35 operational and testing sites to help keep ALIS users up-to-date with ALIS software version releases, which, according to ALIS users at all sites we visited, had been a significant problem with ALIS training. Specifically, according to these users, ALIS trainers and administrators are rarely up-to-date on the latest ALIS releases and functionality changes, which has led to inconsistent instruction and practices at the squadron level. Mobile Training Teams offer ALIS version-specific training at each site based on a sequencing schedule developed by the program office. In addition, the program office has rolled out an ALIS Training Evaluation to determine the current state of ALIS training across all F-35 sites. The end goal of this evaluative process is to identify underlying training deficiencies and to develop corrective courses of action to mitigate them. The process will include a series of site visits encompassing course audits, interviews, curricula inspections, and stakeholder surveys as applicable for root-cause analysis. 
As of January 2016, a team within the program office had just begun this process; therefore, officials could not provide any details beyond the effort's scope, methodology, and associated time frames. According to best practices for information-technology training, effective training of users is essential to the workforce supporting an information-technology system. The practices suggest that entities develop Strategic Learning Plans, or Overarching Training Plans, to help align training programs with priorities. Furthermore, as part of this process, the practices state that it is important that the training design and delivery process ensure both that learning occurs during the training and that the user applies the training on the job. Additional guidance on strategic training in the federal government states that training plans can aid the performance of government programs. Specifically, a training plan can present a business case for proposed training and development investments, including the identified problem or opportunity, the concept for an improved situation or condition, and linkages with strategic objectives. According to DOD officials, ALIS training has been a difficult process to manage because of the dynamic way in which ALIS has been developed and upgraded since its initial release. Because new versions of ALIS have been regularly released in a staggered manner across F-35 sites, they said it has been difficult to sufficiently train all ALIS users on the most up-to-date versions and teach consistent practices. However, with only one major version upgrade remaining prior to version 3.0 (the final major software release), issues related to constant system changes should decrease. A standardized, program-wide training plan could remove the emphasis from on-the-job training and provide a comprehensive, standardized training curriculum across the program. 
Without a program-wide training plan that ensures consistent learning occurs in the classroom and is then applied by users on the job, the program runs the risk of continuing to allow users to learn irregular or incorrect practices through a training culture driven by on-the-job training, which could affect aircraft availability and safety. The F-35 program is DOD's largest and most costly acquisition program to date. According to senior defense officials, not only does the F-35 program represent the future of tactical fighter aircraft in our military, but it is also vital to the security of our nation moving forward. With the Marine Corps already having declared the F-35 both deployable and combat-ready, the Air Force and Navy set to do the same within the next 2 years, and full-rate production of the aircraft set for 2019, including the ramp-up of sustainment activities, it is imperative that DOD address major risks associated with its central logistics system—ALIS. The program office has taken steps to identify risks associated with ALIS—including all of those identified by participants in our focus groups—and has begun, on a case-by-case basis, to address some of these risks; however, without a plan to prioritize risks and address them in a systematic and holistic manner, DOD runs the risk of having an ALIS that is not fully functional as it approaches key program milestones. Without a fully functional ALIS, DOD could face operational and schedule risks and potential cost increases to a program that is already the most expensive in DOD's history. Although DOD has estimated the costs of ALIS, additional information would increase the accuracy and credibility of the estimate. While ALIS is projected to constitute less than 2 percent of the $891.1 billion total sustainment costs of the program, the financial impact that nonfunctional aspects of ALIS may have on the overall operations and sustainment of the aircraft could be significant. 
Until DOD does more analyses to determine the impact ALIS has (and the impact a nonfunctional ALIS could have) on overall sustainment costs, DOD will not know how much the costs of ALIS and overall sustainment could fluctuate. Furthermore, while the program office has been incorporating some program changes and adjusting its cost-estimating assumptions for ALIS, it is important for the program office to improve the reliability of its estimate by using historical data. It is also important for DOD to incorporate program changes that will likely affect ALIS costs in future estimates, such as decisions to enhance ALIS infrastructure or decrease planned numbers of administrative personnel through regional support centers. Finally, although DOD has recognized that ALIS training needs improvement and has made some temporary fixes to address the current shortcomings, DOD has yet to develop a program-wide training plan that would take the focus off an almost exclusive reliance on on-the-job training and provide greater consistency among ALIS users. Considering the importance of ALIS to the operations and sustainment of the aircraft, and that DOD plans to purchase and operate nearly 500 more F-35s in the next 5 years, it will be important that ALIS users operate the system consistently. Further training inconsistencies could affect aircraft availability and safety. We recommend that the Secretary of Defense direct the F-35 Program Executive Officer to take the following four actions: To ensure that risks associated with ALIS are addressed expediently and holistically, develop a plan that would prioritize and address ALIS issues prior to the start of full-rate production for the program. To improve the reliability of its cost estimates, conduct uncertainty and sensitivity analyses consistent with cost-estimating best practices identified in GAO's Cost Estimating and Assessment Guide. 
To improve the reliability of its cost estimates, ensure that future estimates of ALIS costs use historical data as available and reflect significant program changes consistent with cost-estimating best practices identified in GAO’s Cost Estimating and Assessment Guide. To ensure that ALIS training issues are fully addressed, develop a standardized, program-wide plan for ALIS training through the life cycle of the program. In written comments on a draft of this report, DOD concurred with two recommendations and partially concurred with two recommendations. DOD’s comments are summarized below and reprinted in appendix III. DOD also provided technical comments, which we have incorporated into our report where appropriate. DOD concurred with the recommendation that the Secretary of Defense direct the F-35 Program Executive Officer to develop a plan that would prioritize and address ALIS issues, prior to the start of full-rate production for the program. DOD stated that the F-35 program office began developing an ALIS Technical Roadmap in early 2016. The department added that at its completion later in 2016, this roadmap will be the foundation of a plan to identify, document, and prioritize ALIS risks; address them holistically; and inform budget priorities, as the program transitions from development into sustainment and follow-on modernization. Additionally, to mitigate the risk of single-point failures in the infrastructure, the program office contracted to acquire backup ALIS hardware in 2015; the backup hardware will be operational by early 2017. We state in our report that the department was aware of the risks to ALIS and believe that if DOD develops a plan to identify, document, and prioritize ALIS risks, address them holistically, and inform budget priorities prior to full-rate production, this action should address our recommendation. 
We further state in our report that DOD was in the early stages of acquiring backup ALIS hardware to mitigate the risk of single-point failures in the infrastructure, but had not yet allotted funding. We believe this backup system will be critical as the program approaches full-rate production. DOD partially concurred with the recommendation that the Secretary of Defense direct the F-35 Program Executive Officer to conduct uncertainty and sensitivity analyses consistent with cost-estimating best practices identified in GAO's Cost Estimating and Assessment Guide. DOD stated that the department considers the sensitivity analyses that the F-35 program office performs to be a form of uncertainty analysis, as described in DOD's Cost Assessment and Program Evaluation Operating & Support Cost Estimating Guide; however, DOD's cost-estimating guidance does not require it to conduct a sensitivity or uncertainty analysis on ALIS since DOD does not consider ALIS a major cost driver of the F-35 program. As our report states, according to GAO's Cost Estimating and Assessment Guide, cost estimates should include uncertainty analyses to determine the level of uncertainty associated with the estimate in order to be credible. A quantitative uncertainty analysis can provide a broad overall assessment of the risk in the cost estimate. A sensitivity analysis can examine how changes to individual assumptions and inputs affect the estimate as a whole. Although the F-35 program office may conduct analyses consistent with DOD cost-estimating guidance, it has not conducted an uncertainty or sensitivity analysis specifically for ALIS. Although ALIS is projected to constitute less than 2 percent of the $891.1 billion total sustainment costs of the program, the financial impact that nonfunctional aspects of ALIS may have on the overall operations and sustainment of the aircraft could be significant. 
For example, a 2013 DOD-commissioned plan found that any functionality problems or schedule slippage with ALIS will have a significant impact on costs—with downstream additional costs due to performance and schedule delays potentially reaching up to $20-100 billion. We continue to believe that without completing uncertainty and sensitivity analyses to determine the effect ALIS has (and the impact a nonfunctional ALIS could have) on overall sustainment costs, DOD will not know how much the costs of ALIS and overall sustainment could fluctuate. DOD partially concurred with the recommendation that the Secretary of Defense direct the F-35 Program Executive Officer to ensure that future estimates of ALIS costs use historical data as available and reflect significant program changes consistent with cost-estimating best practices identified in GAO’s Cost Estimating and Assessment Guide. DOD stated that the department will ensure that the future F-35 program ALIS cost estimates continue to use the latest available historical cost data as appropriate and reflect the latest approved technical baseline when the program office incorporates these into the program of record, according to DOD’s Cost Assessment and Program Evaluation Operating and Support Cost Estimating Guide; however, we found that DOD was not using all available historical data. As our report states, according to GAO’s Cost Estimating and Assessment Guide, a cost estimate should be based on historical data—both actual costs of the program and those of comparable programs—which can be used to challenge optimistic assumptions and bring more realism to a cost estimate. While we found that DOD substantially met some best practices for an accurate cost estimate by properly adjusting for inflation and not including mathematical errors, the estimate uses contractor-provided data for material costs instead of actual ALIS costs or historical cost data from analogous programs that would make the estimate more accurate. 
We continue to believe that it is important for the program to improve the reliability of its ALIS estimate by using historical data to the greatest extent possible. It is also important for DOD to incorporate program changes that will likely affect ALIS costs in future estimates, such as decisions to enhance ALIS infrastructure or decrease planned numbers of administrative personnel for regional support centers. DOD concurred with the recommendation that the Secretary of Defense direct the F-35 Program Executive Officer to develop a standardized, program-wide plan for ALIS training through the life cycle of the program. DOD stated that, to address immediate issues, the F-35 program office deployed Mobile Training Teams to assist ALIS users at their home base locations, and to address longer-term issues, the program office began a comprehensive evaluation of ALIS training in 2015. According to DOD, the completion of this evaluation in 2016 will inform development of a plan to address long-term ALIS training issues. We agree and state in our report that the F-35 program has taken some positive steps to address short-term training shortfalls by deploying Mobile Training Teams as a way to train ALIS users outside of the F-35 Academic Training Center. We also report that the program has recently begun a comprehensive evaluation of ALIS training to determine the current state of ALIS training across all F-35 sites. If the F-35 program leverages this comprehensive evaluation, when it is completed, to develop a program-wide plan for ALIS training through the life cycle of the program, this action should address our recommendation. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in appendix IV. To determine the extent to which the Department of Defense (DOD) has a plan to ensure the Autonomic Logistics Information System (ALIS) is fully functional as the key F-35 program milestones approach, we reviewed documentation of program plans with relevant sustainment elements, including the F-35 Global Sustainment Plan, the Weapon System Planning Document, the F-35 Autonomic Logistics Global Sustainment Concept of Operations, and the F-35 Operational Requirements Document. We also selected and conducted site visits at a nongeneralizable sample of five F-35 operational and training sites: Eglin Air Force Base, Florida; Luke Air Force Base, Arizona; Edwards Air Force Base, California; Nellis Air Force Base, Nevada; and Marine Corps Air Base Yuma, Arizona. We selected these sites in consultation with service officials to ensure we obtained perspectives across all three services and at both operational and testing sites. During these visits, we convened 17 nongeneralizable focus-group sessions with a range of ALIS users from all three services to obtain information on the operability and deployability of ALIS, and how any ALIS issues may pose risks for F-35 operations and sustainment. Specifically, we convened groups of maintainers, pilots, system administrators, and trainers. We also held focus groups with contractor personnel responsible for training and administering ALIS at these sites. There were approximately 120 participants in total across these focus groups. Table 3 includes a breakdown of the focus groups we held at the various locations. 
We worked with our methodologist to develop a focus-group script, used at all five site visits, that included questions across four main categories: Training, ALIS Positives, ALIS Negatives, and Risks. For consistency, our methodologist facilitated all focus-group sessions. To analyze the focus-group responses, we conducted content analyses, developing categories and sub-categories and coding comments from each focus group to these categories. After each comment had been coded by an analyst, another analyst independently reviewed each code and either agreed or disagreed with the coding decision. Where there was disagreement in the coding decision, the two analysts discussed it and came to a resolution. Based on the content analysis, we described the overarching benefits that ALIS users identified and any risks the system poses to the F-35 program as it approaches key milestones. After obtaining information on DOD’s current approach to addressing the functionality issues that ALIS users identified, we evaluated that information for consistency with best practices from GAO’s Schedule Assessment Guide and DOD’s Systems Engineering Guide for Systems of Systems, which provide guidance and best practices on developing, prior to meeting key milestones, a plan to address specific risks that may be associated with major weapon acquisitions. We also interviewed key DOD and contractor officials to collect information about building and testing ALIS, the capabilities of ALIS, metrics collected on ALIS’s development and performance, software upgrades to ALIS, and how the F-35 program office is addressing ALIS functionality issues. 
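The double-coding and reconciliation procedure described above can be sketched in a few lines. This is a simplified illustration only: the category names come from the focus-group script, but the comments and codes below are hypothetical, and GAO's actual content analysis was performed by analysts, not software.

```python
from collections import Counter

# Categories from the focus-group script described above.
CATEGORIES = ["Training", "ALIS Positives", "ALIS Negatives", "Risks"]

def reconcile(first_pass, second_pass):
    """Compare two analysts' independent codes for the same comments and
    separate the indices where they agree from those needing discussion."""
    agreed, disputed = [], []
    for index, (code_a, code_b) in enumerate(zip(first_pass, second_pass)):
        (agreed if code_a == code_b else disputed).append(index)
    return agreed, disputed

# Hypothetical codes assigned to five comments from one focus group.
coder1 = ["Training", "Risks", "ALIS Positives", "ALIS Negatives", "Training"]
coder2 = ["Training", "Risks", "ALIS Negatives", "ALIS Negatives", "Training"]
assert all(code in CATEGORIES for code in coder1 + coder2)

agreed, disputed = reconcile(coder1, coder2)
# Disputed comments (here, index 2) are discussed by the two analysts and
# recoded to a single agreed category before results are tallied.
tally = Counter(coder1[i] for i in agreed)
```

The independent second pass catches coding drift; only the disputed items need joint review, which keeps the reconciliation workload small.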
To determine the extent to which DOD has credibly and accurately estimated ALIS costs, we evaluated the reliability of DOD’s estimate of ALIS costs contained in the F-35 program office’s 2014 estimate of F-35 operating and support (O&S) costs, the most up-to-date estimate completed at the time of our review. The program office completed the 2014 cost estimate in the spring of 2015 and plans to release its 2015 updated estimate in the spring of 2016. The Office of the Director for Cost Assessment and Program Evaluation (CAPE) also performed an F-35 O&S cost estimate in 2013, but has not updated it. Both CAPE and Joint Program Office officials told us they used the same cost inputs, methodology, and ground rules and assumptions when estimating ALIS costs in their respective estimates. We used characteristics contained in GAO’s Cost Estimating and Assessment Guide to assess the reliability of DOD’s estimate of ALIS costs. According to the guide, there are four general characteristics of sound cost estimating: being well-documented, comprehensive, credible, and accurate. For the purposes of this review, we conducted a limited assessment and used two of the four general characteristics of sound cost estimating included in this guide: being credible and accurate. We chose these characteristics because ALIS costs represent only one element of the total F-35 cost estimate (less than 2 percent of projected F-35 sustainment costs), and therefore we determined that it would not be appropriate to assess whether the estimate was comprehensive or well-documented. To determine whether the credible and accurate characteristics were met, we reviewed documentation used to generate the program office’s estimate, including data sources, assumptions, and cost models, and we interviewed cost-estimating officials from the program office and CAPE. 
We also interviewed other officials from the program office and service headquarters to discuss the cost effects of ALIS schedule delays and development issues. Results of our assessment of the estimate’s credibility and accuracy, along with descriptions of these characteristics and their associated best practices, are detailed in appendix II of this report. We found that the data were not fully reliable, which we discuss in further detail in the report and in appendix II. Finally, to determine the extent to which DOD has developed a plan to manage ALIS training for users, we reviewed key documentation related to ALIS and F-35 training, and used information from our focus-group sessions with various types of ALIS users from all three services to obtain information on the current state of ALIS training. We also interviewed key DOD and contractor officials. We evaluated all of the information we received using GAO-developed and industry best practices for information-technology training, and DOD’s Policies and Procedures for Acquisition of Information Technology. To address all of our objectives, we collected and analyzed information and interviewed officials from the following DOD offices: the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Office of the Director for Cost Assessment and Program Evaluation (CAPE); the Office of the Director, Operational Test and Evaluation (DOT&E); the Department of the Air Force; the Department of the Navy; Headquarters Marine Corps; and the F-35 Joint Program Office. We also collected and analyzed information and interviewed officials from Lockheed Martin in Fort Worth, Texas, and Orlando, Florida. We conducted this performance audit from April 2015 to April 2016 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We completed an assessment of the ALIS costs in the F-35 program office’s overall F-35 O&S cost estimate on the basis of two characteristics (being credible and accurate) and their associated best practices derived from the GAO Cost Estimating and Assessment Guide. After reviewing documentation that the program office submitted for its 2014 F-35 ALIS O&S cost estimate, conducting interviews with program office cost-estimating officials, and reviewing relevant sources, we determined that these cost estimates are not fully reliable. While we found that the program office estimate for ALIS is partially accurate, the estimate is minimally credible. These evaluations are shown in table 4. We determined the overall assessment rating by assigning each individual best practice rating a number: Not Met = 1, Minimally Met = 2, Partially Met = 3, Substantially Met = 4, and Met = 5. Then, we took the average of the individual best practice assessment ratings to determine the overall rating for each of the two characteristics. The resulting average maps to the overall assessment as follows: Not Met = 1.0 to 1.4, Minimally Met = 1.5 to 2.4, Partially Met = 2.5 to 3.4, Substantially Met = 3.5 to 4.4, and Met = 4.5 to 5.0. A cost estimate is considered reliable if the overall assessment ratings for each of the two characteristics are substantially or fully met. If any of the characteristics are not met, minimally met, or partially met, then the cost estimate does not fully reflect the characteristics of a high-quality estimate and cannot be considered reliable. 
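The rating scale, averaging rule, and overall-assessment bands described above can be expressed as a short calculation. The scale and bands are taken directly from the text; the example best-practice ratings are hypothetical and are not GAO's actual assessment data.

```python
# Numeric scale for individual best-practice ratings, as described above.
RATING_SCALE = {
    "Not Met": 1, "Minimally Met": 2, "Partially Met": 3,
    "Substantially Met": 4, "Met": 5,
}

# Bands that map the average score back to an overall rating:
# each pair is (upper bound of the band, overall rating label).
OVERALL_BANDS = [
    (1.4, "Not Met"), (2.4, "Minimally Met"), (3.4, "Partially Met"),
    (4.4, "Substantially Met"), (5.0, "Met"),
]

def overall_assessment(practice_ratings):
    """Average the individual ratings, then map the average to its band."""
    scores = [RATING_SCALE[rating] for rating in practice_ratings]
    average = sum(scores) / len(scores)
    for upper_bound, label in OVERALL_BANDS:
        if average <= upper_bound:
            return average, label

# Hypothetical example: three best-practice ratings for one characteristic.
average, label = overall_assessment(
    ["Partially Met", "Substantially Met", "Minimally Met"])
# (3 + 4 + 2) / 3 = 3.0, which falls in the 2.5-3.4 "Partially Met" band.

# An estimate is reliable only if every characteristic averages at least
# 3.5, i.e., is rated substantially met or met.
reliable = average >= 3.5
```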
In addition to the contact named above, the following staff members made key contributions to this report: Alissa Czyz, Assistant Director; Steven Banovac; Jeffrey Hubbard; Jason Lee; Jennie Leotta; Terry Richardson; Amie Lesser; Alyssa Weir; and Delia Zee. F-35 Joint Strike Fighter: Continued Oversight Needed as Program Plans to Begin Development of New Capabilities. GAO-16-390. Washington, D.C.: April 14, 2016. F-35 Joint Strike Fighter: Assessment Needed to Address Affordability Challenges. GAO-15-364. Washington, D.C.: April 14, 2015. F-35 Sustainment: Need for Affordable Strategy, Greater Attention to Risks, and Improved Cost Estimates. GAO-14-778. Washington, D.C.: September 23, 2014. F-35 Joint Strike Fighter: Slower Than Expected Progress in Software Testing May Limit Initial Warfighting Capabilities. GAO-14-468T. Washington, D.C.: March 26, 2014. F-35 Joint Strike Fighter: Problems Completing Software Testing May Hinder Delivery of Expected Warfighting Capabilities. GAO-14-322. Washington, D.C.: March 24, 2014. F-35 Joint Strike Fighter: Restructuring Has Improved the Program, but Affordability Challenges and Other Risks Remain. GAO-13-690T. Washington, D.C.: June 19, 2013. F-35 Joint Strike Fighter: Program Has Improved in Some Areas, but Affordability Challenges and Other Risks Remain. GAO-13-500T. Washington, D.C.: April 17, 2013. F-35 Joint Strike Fighter: Current Outlook Is Improved, but Long-Term Affordability Is a Major Concern. GAO-13-309. Washington, D.C.: March 11, 2013. Fighter Aircraft: Better Cost Estimates Needed for Extending the Service Life of Selected F-16s and F/A-18s. GAO-13-51. Washington, D.C.: November 15, 2012. Joint Strike Fighter: DOD Actions Needed to Further Enhance Restructuring and Address Affordability Risks. GAO-12-437. Washington, D.C.: June 14, 2012. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-12-400SP. Washington, D.C.: March 29, 2012. 
Joint Strike Fighter: Restructuring Added Resources and Reduced Risk, but Concurrency Is Still a Major Concern. GAO-12-525T. Washington, D.C.: March 20, 2012. Joint Strike Fighter: Implications of Program Restructuring and Other Recent Developments on Key Aspects of DOD’s Prior Alternate Engine Analyses. GAO-11-903R. Washington, D.C.: September 14, 2011. Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Is Still Lagging. GAO-11-677T. Washington, D.C.: May 19, 2011. Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011. Joint Strike Fighter: Restructuring Should Improve Outcomes, but Progress Is Still Lagging Overall. GAO-11-450T. Washington, D.C.: March 15, 2011. Tactical Aircraft: Air Force Fighter Force Structure Reports Generally Addressed Congressional Mandates, but Reflected Dated Plans and Guidance, and Limited Analyses. GAO-11-323R. Washington, D.C.: February 24, 2011. Defense Management: DOD Needs to Monitor and Assess Corrective Actions Resulting from Its Corrosion Study of the F-35 Joint Strike Fighter. GAO-11-171R. Washington, D.C.: December 16, 2010. Joint Strike Fighter: Assessment of DOD’s Funding Projection for the F136 Alternate Engine. GAO-10-1020R. Washington, D.C.: September 15, 2010. Tactical Aircraft: DOD’s Ability to Meet Future Requirements Is Uncertain, with Key Analyses Needed to Inform Upcoming Investment Decisions. GAO-10-789. Washington, D.C.: July 29, 2010. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010. Joint Strike Fighter: Significant Challenges and Decisions Ahead. GAO-10-478T. Washington, D.C.: March 24, 2010. Joint Strike Fighter: Additional Costs and Delays Risk Not Meeting Warfighter Requirements on Time. GAO-10-382. Washington, D.C.: March 19, 2010. Joint Strike Fighter: Significant Challenges Remain as DOD Restructures Program. GAO-10-520T. 
Washington, D.C.: March 11, 2010. Joint Strike Fighter: Strong Risk Management Essential as Program Enters Most Challenging Phase. GAO-09-711T. Washington, D.C.: May 20, 2009. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-09-326SP. Washington, D.C.: March 30, 2009. Joint Strike Fighter: Accelerating Procurement before Completing Development Increases the Government’s Financial Risk. GAO-09-303. Washington, D.C.: March 12, 2009. Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment. GAO-08-782T. Washington, D.C.: June 3, 2008. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008. Joint Strike Fighter: Impact of Recent Decisions on Program Risks. GAO-08-569T. Washington, D.C.: March 11, 2008. Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks. GAO-08-388. Washington, D.C.: March 11, 2008. Tactical Aircraft: DOD Needs a Joint and Integrated Investment Strategy. GAO-07-415. Washington, D.C.: April 2, 2007. Defense Acquisitions: Analysis of Costs for the Joint Strike Fighter Engine Program. GAO-07-656T. Washington, D.C.: March 22, 2007. Joint Strike Fighter: Progress Made and Challenges Remain. GAO-07-360. Washington, D.C.: March 15, 2007. Tactical Aircraft: DOD’s Cancellation of the Joint Strike Fighter Alternate Engine Program Was Not Based on a Comprehensive Analysis. GAO-06-717R. Washington, D.C.: May 22, 2006. Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD’s Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006. Defense Acquisitions: Actions Needed to Get Better Results on Weapons Systems Investments. GAO-06-585T. Washington, D.C.: April 5, 2006. Tactical Aircraft: Recapitalization Goals Are Not Supported by Knowledge-Based F-22A and JSF Business Cases. GAO-06-487T. Washington, D.C.: March 16, 2006. 
Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006. Joint Strike Fighter: Management of the Technology Transfer Process. GAO-06-364. Washington, D.C.: March 14, 2006. Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization. GAO-05-519T. Washington, D.C.: April 6, 2005. Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. Washington, D.C.: March 15, 2005.
The F-35 is the most ambitious and expensive weapon system in DOD's history, with sustainment costs comprising the vast majority of DOD's $1.3 trillion cost estimate. Central to F-35 sustainment is ALIS, a complex system supporting operations, mission planning, supply-chain management, maintenance, and other processes. The F-35 program is approaching several key milestones: the Air Force and Navy are to declare the ability to operate and deploy the F-35 in 2016 and 2018, respectively, and full-rate production of the aircraft is to begin in 2019. However, ALIS has experienced developmental issues and schedule delays that have put aircraft availability and flying missions at risk. The National Defense Authorization Act for Fiscal Year 2016 included a provision that GAO review the F-35's ALIS. This report assesses, among other things, the extent to which DOD has (1) a plan to ensure that ALIS is fully functional as key program milestones approach and (2) credibly and accurately estimated ALIS costs. GAO reviewed F-35 program documentation, interviewed officials, and conducted focus groups with ALIS users. The Department of Defense (DOD) is aware of risks that could affect the F-35's Autonomic Logistics Information System (ALIS), but does not have a plan to ensure that ALIS is fully functional as key program milestones approach. ALIS users, including pilots and maintainers, in GAO's focus groups identified benefits of the system, such as the incorporation of multiple functions into a single system. However, users also identified several issues that could result in operational and schedule risks. These include the following: ALIS may not be deployable: ALIS requires server connectivity and the necessary infrastructure to provide power to the system. The Marine Corps, which often deploys to austere locations, declared in July 2015 its ability to operate and deploy the F-35 without conducting deployability tests of ALIS. 
A newer version of ALIS was put into operation in the summer of 2015, but DOD has not yet completed comprehensive deployability tests. ALIS does not have redundant infrastructure: ALIS's current design results in all F-35 data produced across the U.S. fleet being routed to a Central Point of Entry and then to ALIS's main operating unit, with no backup system or redundancy. If either of these fails, it could take the entire F-35 fleet offline. DOD is taking some steps to address these and other risks, such as resolving smaller ALIS functionality issues between major software upgrades and considering the procurement of additional ALIS infrastructure, but the department is attending to issues on a case-by-case basis. DOD does not have a plan that prioritizes ALIS risks to ensure that the most important ones are expediently addressed and that DOD has a fully functional ALIS as program milestones draw close. By continuing to respond to issues on a case-by-case basis rather than in a holistic manner, DOD has no guarantee that it will address the highest risks by the start of full-rate production in 2019, and as a result, DOD may encounter further schedule and development delays, which could affect operations and potentially lead to cost increases. DOD has estimated total ALIS costs to be about $16.7 billion over the F-35's 56-year life cycle, but performing additional analyses and including historical cost data would increase the credibility and accuracy of DOD's estimate. GAO's cost estimating best practices state that cost estimates should include uncertainty analyses to determine the level of uncertainty associated with the estimate in order to be credible. In addition, credible cost estimates should include sensitivity analyses to examine how changes to individual assumptions and inputs affect the estimate as a whole. 
DOD's guidance does not require the department to perform these analyses for ALIS, and DOD officials stated that they have not done so in part because ALIS constitutes less than 2 percent of the F-35's estimated total sustainment costs. Program officials said that if ALIS is not fully functional, the F-35 could not be operated as frequently as intended, and a DOD-commissioned plan found that schedule slippage and functionality problems with ALIS could lead to $20 billion to $100 billion in additional costs. Without uncertainty and sensitivity analyses, it is unclear how much ALIS could affect overall sustainment costs. GAO also found that using historical cost data would make DOD's cost estimate more accurate. GAO is making four recommendations, including that DOD develop a plan to address ALIS risks, and conduct certain analyses and include historical data to improve its ALIS cost estimate. DOD concurred with developing a plan and partially concurred with the cost estimating recommendations, stating that it follows its own guidance. GAO continues to believe the recommendations are valid, as discussed in the report.
In 1941, Congress enacted the Berry Amendment, which required that certain items procured for defense purposes be grown or produced in the United States. Specialty metals were added to the Berry Amendment in the early 1970s. The term “specialty metals” is defined to mean any of the following: (1) steel with a maximum alloy content exceeding one or more of the following limits: manganese, 1.65 percent; silicon, 0.60 percent; or copper, 0.60 percent; or containing more than 0.25 percent of any of the following elements: aluminum, chromium, cobalt, columbium, molybdenum, nickel, titanium, tungsten, or vanadium; (2) metal alloys consisting of nickel, iron-nickel, and cobalt base alloys containing a total of other alloying metals (except iron) in excess of 10 percent; (3) titanium and titanium alloys; and (4) zirconium and zirconium base alloys. Specialty metals and their potential applications in DOD weapon systems include: steel alloys such as those used for ship hulls; metal alloys consisting of nickel and iron-nickel; certain cobalt base alloys such as samarium-cobalt alloy magnets used in radars; titanium and titanium alloys used in aircraft engine parts; and zirconium and zirconium base alloys used in gas turbine engines. In 2006, Congress passed the Fiscal Year 2007 NDAA, which established specialty metals restrictions separate from the Berry Amendment. The provision generally requires DOD and its contractors to procure specialty metals produced or melted in the United States unless an exception applies that permits the specialty metals to be obtained from foreign countries. Table 1 shows the federal regulations and DOD guidance that govern the elements of planning, compliance, and domestic source restrictions when procuring specialty metals for major weapon systems. DOD also has other exceptions available under the specialty metals restrictions, but they were beyond the scope of our review. As mandated, our review focused on national security waivers. 
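The steel portion of the statutory definition above amounts to a set of numeric thresholds, which can be expressed as a simple check. This is a sketch for illustration only: the thresholds come from the definition quoted in the text, while the example compositions are hypothetical.

```python
# Alloy-content limits (in percent) from the statutory definition of
# specialty metal steel quoted above.
STEEL_LIMITS = {"manganese": 1.65, "silicon": 0.60, "copper": 0.60}
# Steel also qualifies if it contains more than 0.25 percent of any of these.
TRACE_ELEMENTS = {
    "aluminum", "chromium", "cobalt", "columbium", "molybdenum",
    "nickel", "titanium", "tungsten", "vanadium",
}

def is_specialty_steel(composition):
    """Return True if a steel's maximum alloy content (element -> percent)
    meets the specialty metals definition for steel."""
    if any(composition.get(element, 0) > limit
           for element, limit in STEEL_LIMITS.items()):
        return True
    return any(composition.get(element, 0) > 0.25 for element in TRACE_ELEMENTS)

# Hypothetical compositions: 2.0% manganese exceeds the 1.65% limit, while
# 1.0% manganese with only 0.1% nickel stays under every threshold.
high_manganese = is_specialty_steel({"manganese": 2.0})                 # True
ordinary_steel = is_specialty_steel({"manganese": 1.0, "nickel": 0.1})  # False
```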
Appendix II provides a list of the allowable exceptions to the specialty metals restrictions. Some of these exceptions include: National Security Waivers: When the use of noncompliant specialty metals in a weapon system is identified after it has been fabricated, the Secretary of Defense may determine in writing that acceptance of such an end item is necessary to the national security interests of the United States. The Secretary is allowed by statute to delegate this authority to the Under Secretary of Defense for Acquisition, Technology and Logistics (USD AT&L). DOD generally must notify Congress in advance of making these written determinations, except in the case of an urgent national security requirement. Domestic Non-Availability: A domestic non-availability exception may apply if USD AT&L or the Secretary of the military service determines that compliant specialty metal of satisfactory quality, sufficient quantity, and in the required form cannot be procured when needed at a fair and reasonable price. Qualifying Countries: This exception waives the requirement to procure specialty metals produced in the United States if the acquisition relates to certain agreements with foreign governments, known as “qualifying countries.” Under the qualifying country exception, manufacturers in these countries have greater flexibility when procuring specialty metals for DOD procurements than U.S. manufacturers. Specifically, they can procure specialty metals from any source, including non-qualifying countries, while a component manufacturer in the United States must procure specialty metals from a source in the United States or a qualifying country. To implement the specialty metals restrictions, DOD established a clause in the DFARS. Generally DOD must include this clause in weapon system solicitations and contracts to require that contractors deliver items incorporating specialty metals in compliance with statute and regulation. 
Contractors are required to insert this clause in their subcontracts that include items containing specialty metals, to the extent necessary to ensure compliance; this practice is known as flow down. In fiscal year 2009, changes were made to the specialty metals restrictions statute by the Fiscal Year 2008 NDAA, which eliminated an exception that had been used to procure electronic components containing high-performance magnets made with noncompliant metals from non-domestic sources. It also established a national security waiver exception, permitting the Secretary of Defense to accept delivery of an end item containing non-domestic specialty metals when the Secretary certifies that it is in the interests of national security. The Department of Defense (DOD) monitors the availability of specialty metals and conducts periodic quality assurance reviews, but plays a limited role in planning for the procurement of specialty metals and ensuring compliance with specialty metals restrictions for the five programs we reviewed. Officials representing the weapon system programs that we reviewed typically rely on their prime contractors to plan for the procurement of specialty metals and to ensure compliance with these restrictions. We reviewed selected prime contracts for these programs and found they contained clauses requiring prime contractors to procure specialty metals in compliance with domestic source restrictions, ensure that delivered items meet contract technical requirements as part of quality assurance, and maintain processes for meeting future material needs. In turn, these prime contractors told us that they flow down the contract requirements, including those pertaining to specialty metals, to their suppliers, and require them to follow industry standards for quality management. Further, prime contractors for these programs told us they use a risk-based approach to oversee subcontractors, including those at lower tiers. 
DOD quality assurance staff also conduct periodic quality assurance reviews, primarily at the prime contractor level. DOD plays a limited role in planning for specialty metals needs for the five programs we reviewed, and does so primarily through reports and assessments of availability of supply. Specifically, the Defense Logistics Agency conducts periodic assessments of strategic and critical materials for DOD, which help inform the department’s decisions to undertake risk mitigations such as stockpiling materials subject to potential shortfalls. In addition, the U.S. Geological Survey provides information to DOD on a variety of mineral commodities, which DOD uses in its analysis of materials availability; it also produces publicly available data on production and trends for some specialty metals and alloys used in specialty metals, including titanium. Specifically, the U.S. Geological Survey reported the price of titanium ingot increased by 250 percent between 2003 and 2006, before dropping to its average price range in the following 4 years. Due in part to the limits of worldwide production capacity, some specialty metals require a long lead time to produce, and specific grades of titanium procured for weapon system production may also require qualification and testing periods. Growing tensions in countries that the United States depends on for these metals, such as Russia, could potentially lead to additional availability risks. For example, a specialty metals industry official told us that samarium cobalt magnets are uniquely developed for DOD weapon systems and must undergo lengthy qualification and testing periods before their use in a weapon system. Further, in DOD’s most recent annual assessment of industrial capabilities, it stated that the industrial base upon which DOD relies has steadily become more global and diverse, and DOD does not control the supply chain that supports production. The Family of Medium Tactical Vehicles program’s contract requires the prime contractor to follow DOD parts management guidance (DOD MIL-STD-3018, Standard Practice for Parts Management, October 27, 2011, and SD-19, Parts Management Guide, December 2013), and the prime contractor told us that this planning includes specialty metals. While neither the material management contract requirement for the programs in our review nor the Family of Medium Tactical Vehicles program’s parts management contract requirement specifically discusses specialty metals, they both require contractors to do advance planning for the materials needed in future production, which would include specialty metals or parts containing them. The six prime contractors for the programs we reviewed reported that they conduct various activities to ensure future specialty metals parts for production will be available, including forecasts of specialty metals needed for future production and initiating advance purchasing agreements with specialty metals producers. Four of the prime contractors indicated they have had no difficulties in obtaining specialty metals needed for production from domestic sources or qualifying countries, but two reported encountering availability issues for some specialty metals, including titanium pipe, stainless steel, and some engine parts metals. Table 2 summarizes the six prime contractors’ planning activities for procuring specialty metals and the extent to which they have incurred availability issues. DOD officials for the five weapon systems programs we selected for review reported that they contractually require their prime contractors to comply with the specialty metals restrictions. Five of the six prime contractors, in turn, reported that they rely on their subcontractors for compliance by including the specialty metals restrictions in their subcontracts and purchase orders to the extent necessary to ensure compliance of the end products delivered to the government. 
The other prime contractor, responsible for the development of the KC-46 Tanker, said it directly procures specialty metals for this military aircraft, rather than relying on subcontractors, largely based on the existing design for its commercially available aircraft. In addition, we found that the prime contracts for four of the five DOD weapon system programs we selected for review require the prime contractor to adopt and use industry standards for quality management. For the remaining program, the prime contractor told us that, although it is not specifically stated, it interprets its contract to require the adoption and use of industry standards for quality management. These prime contractors indicated they also require their subcontractors and suppliers to use these industry standards. These standards apply to the procurement of all parts, including specialty metals, and include: (1) evaluating potential subcontractors for inclusion on the contractor’s approved suppliers list; (2) reviewing required independent certifications or subcontractor certificates of conformance for items delivered under contract; (3) testing subcontractor parts and processes to determine if they meet contractual specifications; and (4) rating subcontractors on a routine basis using performance metrics such as product quality and on-time deliveries. Five of the six prime contractors that we spoke with reported that these industry standards for quality management were included in their contracts. For example, the prime contractor for the Family of Medium Tactical Vehicles reported that it performs systematic quality reviews of its subcontractors every 6 months to 2 years. As part of its review, it requires suppliers to provide certificates of conformance for parts procured and reviews material certifications from all first-tier suppliers and some second-tier suppliers to ensure industry quality standards are met. 
On the other hand, the prime contractor for the KC-46 Tanker reported that it directly handles procurement of specialty metals, rather than relying on subcontractors to ensure compliance, although it also requires its suppliers to adhere to industry quality standards. In addition, each of the six prime contractors in our review reported that they use a risk-based approach—based on factors such as product complexity and the subcontractor's past performance—to determine the extent to which they conduct subcontractor oversight. DOD also oversees prime contractors' purchasing processes and quality assurance activities, which can include reviews of whether prime contractors are complying with the specialty metals restrictions. These oversight activities are carried out by staff from DCMA—for the Army and Air Force—and from the U.S. Navy Supervisors of Shipbuilding, Conversion, and Repair (Navy) organizations. The FAR requires DOD contracting officers to determine whether contractor purchasing system reviews are needed, and if so, to conduct a review with the objective of evaluating the efficiency and effectiveness with which the contractor spends government funds and complies with government policy when subcontracting. The DFARS also requires contracting officers to evaluate whether the contractor's purchasing system is capable of ensuring that all applicable contractor purchase orders and subcontracts contain all terms, conditions, and DFARS clauses—including the flow down of the specialty metals restriction—and any other clauses needed to carry out the requirements of the prime contract. This review provides a basis for granting, withholding, or withdrawing approval of the contractor's purchasing system. After an initial review, contracting officers are to determine, at least every 3 years, whether subsequent reviews are required.
According to DCMA guidance on supplier risk management, DCMA is to identify risk levels for DOD suppliers, and to use these ratings in developing government contract quality surveillance plans and allocating resources to perform them. This guidance also provides that government oversight of sub-tier suppliers may occur if warranted. DCMA receives its direction and expectations for contract oversight from the DOD program office, which can include access to subcontractor invoices at any tier, if the program deems this necessary. In addition, the FAR provides that, for major system acquisitions, the contracting officer may designate certain high-risk or critical subsystems or components for special surveillance. DOD and the prime contractor can also arrange for access in the event that noncompliance issues are identified to help ensure corrective actions are taken. Moreover, the FAR allows contracting officers to accept a contractor's certification that the delivered items meet requirements rather than performing an inspection of these items. However, if any delivered items are noncompliant, the FAR generally provides that any defect in items provided under a fixed-price contract is to be replaced, corrected, or repaired by the contractor, potentially at its own expense. We reviewed the six prime contracts for the programs in our review and each contract contained clauses related to the correction of deficiencies in items delivered by the contractor. The specialty metals restrictions allow the Secretary of Defense or certain delegated officials to grant national security waivers, among other things, for the use of noncompliant specialty metals. We found that, since 2009, DOD has granted six national security waivers permitting the use of noncompliant specialty metals on five different weapon system programs, including the Joint Strike Fighter.
During our review and, in part, in response to our discussions with USD AT&L officials, DOD developed written guidance for program offices to follow when requesting national security waivers. However, DOD lacks a mechanism to share the information contained in these waivers with key stakeholders within the department and its supplier base. Without this information, DOD and its contractors and suppliers could be limited in their awareness of, and actions to mitigate, similar supply chain issues. The specialty metals restrictions provide authority to the Secretary of Defense or certain delegated officials, including USD AT&L, to waive compliance with the specialty metals restrictions. To do so, the Secretary or delegated official must determine, in writing, that acceptance of noncompliant specialty metals is necessary to the national security interests of the United States. Waivers in the interest of national security can only be granted after noncompliance issues are identified, and only allow the department to accept the delivery of noncompliant end items that have already been procured, rather than in advance of their procurement. Since 2009, DOD granted six national security waivers for the use of noncompliant specialty metals on five different weapon system programs—five of which were granted after contractor disclosure that noncompliant specialty metals were, in some cases, contained in the end items delivered to DOD. The remaining waiver (for the Standard Missile-3 Block II-A program) was granted as a result of an international agreement that did not address specialty metal restrictions. Table 3 shows the six national security waivers approved for five weapon system programs since 2009, when the DFARS was revised to remove a previous exemption for high-performing magnets, as provided by the National Defense Authorization Act for Fiscal Year 2008.
Specifically, in 2009, DOD eliminated an exception in the DFARS that allowed electronic components containing noncompliant high-performance (samarium-cobalt) magnets to be procured from non-domestic sources. However, the companies we spoke with that were involved in requesting the national security waivers indicated they had not updated their purchasing processes to reflect this DFARS change, resulting in noncompliance with specialty metals restrictions. As a result, five of the six national security waivers were granted due to noncompliant samarium-cobalt magnets procured by three companies. For each of the five programs that procured noncompliant samarium-cobalt magnets, their national security waivers stated that these magnets met the necessary performance capability requirements for the weapon systems for which they were procured. The time frame from the contractor's discovery of noncompliant specialty metals to DOD's approval of the waiver ranged from 2 months, for the Joint Strike Fighter parts for positioning external doors and nose and main landing gear, to 10 months, for the Ground/Air Task Oriented Radar program. According to the specialty metal statute, weapon system contractors and subcontractors that are noncompliant with the specialty metals restrictions and receive a national security waiver must develop and implement an effective plan to ensure future compliance, also known as a corrective action plan. The prime contractor for the Joint Strike Fighter program told us that it, as well as its subcontractors, has submitted corrective action plans. In addition, the prime contractors for the F-16 Block 52 aircraft and the B-1 Bomber Reliability and Maintainability Improvement Program have also submitted corrective action plans. There is no corrective action plan for the Ground/Air Task Oriented Radar program because the prime contractor corrected the noncompliance before the program accepted delivery of the radar systems.
DOD also did not require a corrective action plan for the Standard Missile-3 Block II-A, because the noncompliant components were provided under the terms of a bilateral agreement and were not procured by a U.S. prime contractor. The waiver stated that the specialty metals restrictions still apply because the specialty metals components are to be integrated with U.S.-acquired missile components. In addition, in our review of national security waivers approved by DOD, we found that these waivers contained the elements required by the specialty metals restrictions. Specifically, these included (1) written determinations stating that accepting delivery of an end item containing noncompliant materials is necessary to U.S. national security interests; (2) approval by the USD AT&L or a higher-ranking official; and (3) a statement indicating the quantity of end items and the time period to which the waiver applies. However, we found that DOD's process for granting national security waivers for specialty metals had some weaknesses. For example, a DOD program office that was in the final steps of submitting a national security waiver told us they had difficulty in determining what documentation to include with their waiver request. We also found that the DFARS guidance at the time that the existing waivers were submitted did not specify the types of documents required from the requesting program office to support a waiver request. In June 2014, USD AT&L developed written guidance requiring waiver requests from program offices to include how and when the noncompliance was discovered; a complete description of all of the items or systems containing noncompliant specialty metals; the manufacturer and country of origin of the noncompliant material, if known; and cost and schedule estimates to replace the noncompliant parts if a national security waiver is not granted.
This DOD guidance also requires the disclosure of whether the specialty metals DFARS clause was flowed down to subcontractors and whether safety and operational implications exist. In addition, to prevent misinterpretation, DOD recently proposed amending the specialty metals DFARS clause to clarify the current requirement that the specialty metals restrictions be flowed down in subcontracts. The comment period for this proposed change to the DFARS ended in August 2014, and if finalized, the clause would apply to DOD contractors. Further, DOD has been directed by a House report accompanying H.R. 4435, the Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015, to report on the department's use of national security waivers, including documentation for all of the waivers issued since 2008 and information regarding the procedures used to issue these waivers, by December 1, 2014. The report directs the Secretary of Defense to report on DOD's procedures to determine whether (1) to issue a national security waiver; (2) a supply deficiency is best addressed through the national security waiver or through the availability exception; (3) noncompliance by contractors and subcontractors is “knowing or willful”; and (4) further action by DOD is necessary to prevent the recurrence of the supply chain issue that led to noncompliance and the subsequent issuance of a national security waiver. The report also directs the Secretary of Defense to report on the procedures used by DOD to monitor contractor compliance. According to a senior USD AT&L official, DOD has begun work to fulfill this reporting requirement.
In our review of the six national security waivers granted by DOD, we found that these waivers include details, such as the company's plan for procuring specialty metals that comply with the restrictions, the production units that will be compliant in the future, and estimated time frames to re-qualify suppliers and retest equipment. DOD may also require the prime contractor to pay associated costs for noncompliance with the specialty metals restrictions related to delivered items. When this occurs, the program office's contracting officer determines the amount that is appropriate to be paid based on the nature and scope of the noncompliance. Further, for the B-1 Bomber Reliability and Maintainability Improvement Program and the F-16 Block 52 Program, the waivers specify that all costs associated with noncompliance are unallowable, including any directly associated costs incurred by their contractors. Specifically, the waivers approved by USD AT&L required the contractors for these two programs to provide consideration to the Air Force for costs that may include the design, testing, and installation of compliant components to remediate the noncompliant items and any additional costs incurred to obtain compliance for the aircraft radar systems affected. For the remaining waivers, USD AT&L's decision on whether to request contractor consideration is pending. The specialty metals restrictions require that DOD make a determination of whether the noncompliance by the contractor or subcontractor was knowing or willful. To date, DOD has ordered investigations of four of the six national security waivers to determine whether the noncompliance was knowing or willful, and the results of these investigations are pending. For the other two waivers, DOD did not conduct an investigation, but nonetheless determined that the noncompliance was not knowing or willful.
According to officials at USD AT&L, it is within the Under Secretary's discretion to request an investigation to assist in making the determination, if the facts surrounding the noncompliance warrant it. Further, DOD can consider suspending or debarring a contractor or subcontractor whose noncompliance has been determined to be knowing or willful, until the issues that led to noncompliance have been effectively addressed. DOD has not taken this action for any of the contractors in our review. Table 4 summarizes the status of the programs' corrective action plans for their national security waivers and DOD's determination of whether the noncompliance with the specialty metals restrictions was knowing or willful. The specialty metals restrictions further require DOD to provide advance congressional notification, in the form of a written determination to the defense committees, before executing a national security waiver. However, DOD did not notify Congress for one of the six waivers and provided little advance notice for another program. Specifically, we found that DOD did not notify Congress for the Standard Missile-3 Block II-A program waiver, and for the B-1 Bomber Reliability and Maintainability Improvement Program, DOD notified Congress on the same day the waiver was signed. According to USD AT&L officials, the delays in notifying Congress of the approved national security waivers, in these instances, were due to the lack of a central focal point for the waivers. As a result, USD AT&L has recently centralized the processing of all waiver requests within DOD. Standards for Internal Control in the Federal Government call for information and communications to be recorded and communicated to others, such as stakeholders who need it, to help the agency achieve its goals.
However, we found that DOD lacks a mechanism to share information with key stakeholders, within the department and its supplier base, on national security waivers granted for noncompliance with the specialty metals restrictions. Sharing information on specialty metal waivers with these key stakeholders could heighten awareness of the risks and consequences among other programs and their suppliers who are not directly involved with the waiver, and prompt them to look for similar issues in their own programs. Moreover, other program offices that work with the same supplier base could benefit from this information. For example, in the case of the Joint Strike Fighter program's noncompliant samarium-cobalt magnets, DCMA created a notification report in February 2013. According to a deputy director within DCMA's Industrial Analysis Center, reports such as this were possibly disseminated throughout DCMA, and he stated DCMA staff may have shared these reports with the program offices to which they were assigned. In November 2013, DOD discontinued these reports. Continued dissemination of this type of information among the DOD weapon system programs and their supplier base could heighten awareness, potentially averting future noncompliance with the specialty metals restriction. Further, GAO's prior work in October 2008 concluded that DOD often becomes aware of supplier-base problems through informal channels, and greater visibility of supply chain issues or vulnerabilities could contribute to more formal mechanisms for addressing supply chain risks. We recommended that DOD create and disseminate written requirements for reporting potential concerns about supplier-base gaps.
DOD addressed this recommendation, and in its most recent instruction on Defense Industrial Base Assessments, dated July 2014, it states that if an industrial capability is identified as endangered, DOD components must, among other things, validate whether the industrial capability is relevant to satisfying a national security requirement. This instruction also states that to facilitate efficient and effective sharing, a repository of reports, information, and data will be established by DOD and will contain a searchable index of reports. Moreover, DOD's 2014 Chief Freedom of Information Act Officer's Report to the Department of Justice states that the department continues to implement open government principles as defined by an Office of Management and Budget memorandum, which encourages greater transparency, participation, and collaboration by publishing non-sensitive government information online for public review. Establishing a mechanism to disseminate non-sensitive information—including the names of programs that received waivers, sources of the noncompliant specialty metals, and corrective actions—to key stakeholders, such as DOD weapon system program offices and their defense suppliers, could help raise awareness of, and improve compliance with, the specialty metals restrictions. DOD relies on its prime contractors and subcontractors to plan for and ensure that specialty metals procured for weapon systems meet the requirements of the specialty metal statutory provisions, and contractor self-disclosure is the primary way that DOD becomes aware of noncompliance with these provisions. Some recent breaches of domestic source restrictions have led contractors to notify the government of noncompliance issues, and since these instances occurred, DOD has granted national security waivers for the affected programs. DOD has recently defined procedures for requesting national security waivers for weapon systems programs.
However, DOD’s lack of a mechanism to disseminate information to its key stakeholders within the department’s weapon system program offices and their supplier-base on national security waivers granted makes it difficult for others not directly involved with the waiver request to have awareness of the risks as well as the consequences. Further, greater awareness of supplier-base problems and broader dissemination of information could assist DOD in better discovering vulnerabilities, such as systemic supply chain risks that affect national security objectives. To provide greater awareness of and compliance with the specialty metal restrictions among DOD weapon system programs and their defense supplier-base, we recommend that the Secretary of Defense establish a mechanism for sharing and distributing non-sensitive information about national security waivers throughout the department and the defense supplier-base. DOD provided us with written comments on a draft of this report, which are reprinted in appendix III. In its comments, DOD concurred with our recommendation to provide greater awareness of and compliance with the specialty metal restrictions, stating that it will post non-sensitive specialty metals national security waiver information on the Defense Procurement and Acquisition Policy website. This publicly available information would include the number of national security waivers approved by DOD since 2009; the names of DOD weapon system programs receiving them; guidance on how to request a national security waiver, as well as guidance for applying specialty metals’ restrictions to high performance samarium-cobalt magnets. We believe that this is a positive step towards enhancing the public transparency of future national security waivers for non-compliant specialty metals. DOD is considering resuming the issuance of notification reports when a specialty metals’ noncompliance has occurred. 
We believe that disseminating this information across DOD could help officials working on other DOD programs assess whether similar non-compliance issues may affect them. We also received technical comments from the Department of the Interior and the DOD prime contractors included in our review, which we incorporated into this report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Interior, and other interested parties. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report focuses on planning, compliance, and national security waivers for specialty metals used in Department of Defense (DOD) weapon systems. Specifically, we assessed (1) how DOD meets its needs for specialty metals parts and ensures compliance with the specialty metals restrictions, and (2) DOD’s process for providing national security waivers for specialty metal procurements and the extent to which it disseminates waiver information throughout the department. To assess how DOD officials meet their needs for specialty metals used in DOD weapon systems and ensure compliance with specialty metals restrictions, we examined laws and regulations regarding specialty metals domestic source restrictions, including the Defense Federal Acquisition Regulation Supplement (DFARS) and DOD guidance relating to planning for and compliance with specialty metals and related requirements of the Federal Acquisition Regulation (FAR). We reviewed DOD guidance on planning for weapon systems procurements and manufacturing; guidance on purchasing system reviews and quality assurance from the Defense Contract Management Agency and U.S. 
Navy Supervisors of Shipbuilding, Conversion and Repair; Defense Logistics Agency's reports on strategic and critical materials that identified materials with projected availability risks; and U.S. Geological Survey's price information for titanium to identify historical fluctuations in pricing and availability for specialty metals. We also interviewed officials from DOD, including the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD AT&L), the Defense Contract Management Agency (DCMA), and selected weapon system programs, as well as their prime contractors, two selected subcontractors, and specialty metals company representatives. We ascertained their roles in determining the needs for specialty metals parts and monitoring compliance with specialty metals restrictions and related requirements of the FAR and the DFARS. We also interviewed and obtained written responses to questions on specialty metals planning from six prime contractors responsible for five major DOD weapon systems programs. The programs we selected were the F-35 Joint Strike Fighter (Joint Strike Fighter), KC-46 Tanker, DDG-51 Destroyer, Virginia Class Submarine, and Family of Medium Tactical Vehicles; we sought to identify how these programs plan for specialty metals needs and ensure contract requirements are met. We selected a non-generalizable sample consisting of these five DOD weapon system programs, based on their greatest total acquisition cost estimates as of November 2013. Our selection process also ensured that at least one program was selected from among each of the military services and from the following four acquisition lifecycle phases: (1) between development start and production start (e.g., the U.S. Air Force's KC-46 Tanker); (2) between production start and initial capability (e.g., DOD's Joint Strike Fighter); (3) in production and passed initial capability (e.g., the U.S.
Navy’s DDG-51 Destroyer and Virginia Class Submarine); and (4) nearing the end of production (e.g., the U.S. Army’s Family of Medium Tactical Vehicles). We interviewed and obtained written responses to questions and related contractor documents from these prime contractors regarding specialty metals availability, planning activities on these programs, and the methods they use to ensure quality assurance and oversee subcontractors and suppliers. Our findings from these five programs cannot be generalized to all programs, but they provide useful insights into how DOD officials and contractors work to address requirements associated with specialty metal restrictions. In reviewing these programs, we also obtained answers to written questions regarding specialty metals planning and quality assurance from DOD quality assurance staff overseeing these programs, including the Defense Contract Management Agency (DCMA) and the Navy. We reviewed the results of DOD’s purchasing system reviews for the six prime contractors in our review to determine whether specialty metals requirements were flowed down to subcontractors consistent with DFARS. We also reviewed written responses to questions regarding how these agencies oversee the prime contractors to help ensure compliance with their government contracts, including specialty metals domestic source restrictions. Further, we reviewed the contracts to determine what requirements related to specialty metals planning and quality assurance were included, as well as the exceptions to domestic source restrictions applicable to some of these programs, such as the KC-46 Tanker’s commercial derivative military article exception. We obtained updated information regarding these methods from the prime contractors included in our review. We reviewed and analyzed the six national security waivers that the USD AT&L approved from fiscal years 2009 through 2014. 
These six national security waivers included the B-1 Bomber Reliability and Maintainability Improvement Program; F-16 Block 52 Aircraft; Ground/Air Task Oriented Radar; Joint Strike Fighter radar and target assemblies; Joint Strike Fighter parts for positioning external doors and nose and main landing gear; and Standard Missile-3 Block II-A All Up Rounds. We also assessed whether these waivers contained required elements, including whether Congress was provided advance notification of the national security waiver determinations before they were executed, as required by the statute and regulation. In addition, we obtained documents and interviewed officials from DOD's Defense Procurement and Acquisition Policy office to review their process for granting national security waivers, including how they made the determinations to grant the six national security waivers and the extent to which DOD disseminates this information throughout the department. We assessed the extent to which DOD disseminates information on national security waivers it has granted consistent with criteria in Standards for Internal Control in the Federal Government. We reviewed prime contractor and subcontractor corrective action plans that were submitted to DCMA regarding noncompliant specialty metals and also examined the congressional notification letters relating to national security waivers that DOD provided to congressional defense committees. We met with officials at the C-130 program office at Wright-Patterson Air Force Base, in Dayton, Ohio, to discuss their possible plans for, and experience in, requesting a national security waiver. We did not assess whether other exceptions to the specialty metals restrictions were available under the facts and circumstances present for the waivers we reviewed. We conducted this performance audit from December 2013 to October 2014 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objectives. The specialty metals domestic source restrictions (10 U.S.C. § 2533b), summarized below, provide exceptions for:

- a waiver for national security interests;
- compliant specialty metals that are not available in satisfactory quality and sufficient quantity, in the required form, and cannot be procured when needed;
- acquisitions made outside of the United States in support of combat operations;
- the use of other than competitive procedures, in accordance with the Competition in Contracting Act (10 U.S.C. § 2304(c)(2)), for circumstances of unusual and compelling urgency of need;
- compliance with agreements with foreign governments;
- commissaries, exchanges, and other nonappropriated fund instrumentalities;
- small purchases (below the simplified acquisition threshold);
- acquisition of some commercial items;
- acquisition of certain commercial-off-the-shelf items;
- acquisition of components if there is less than 2 percent of noncompliant metal (called the “de minimis” exception);
- acquisition of certain commercially derivative defense articles; and
- acquisition of certain noncompliant materials if the Secretary of Defense certifies in writing that acceptance of such materials is required for reasons of national security, including certain conditions and requirements.

In addition, the specialty metals clause (DFARS 252.225-7009) requires that specialty metals procured for Department of Defense articles must be melted or produced in the United States, its outlying areas, or a qualifying country. There are 23 qualifying countries, as shown in the table below. Marie A. Mak, (202) 512-4841, or makm@gao.gov.
In addition to the contact named above, Lisa Gardner, Assistant Director; Keith Hudson; Jean McSween; Sean Seales; Robert Swierczek; Marie Ahearn; Kenneth Patton; and Hai Tran made key contributions to this report.
Specialty metals—such as titanium, certain steel alloys, and samarium-cobalt alloy magnets—are essential to DOD weapon systems due to their unique properties, such as being highly durable. Federal statute requires specialty metals used in weapon systems to be procured from domestic sources or qualifying countries. However, the law allows DOD to waive this requirement in the interest of national security. GAO was mandated by a House report accompanying a bill for the National Defense Authorization Act (NDAA) for Fiscal Year 2014 to review DOD's compliance with specialty metals requirements. This report assesses (1) how DOD meets its needs for specialty metals parts and ensures compliance with restrictions, and (2) DOD's process for providing national security waivers for specialty metal procurements and the extent to which it disseminates waiver information throughout the department. GAO reviewed contracts, laws, regulations and DOD guidance, and analyzed a non-generalizable sample of five weapon systems as case studies based on their total 2013 acquisition costs, among other things. GAO also reviewed national security waivers DOD granted since 2009 and interviewed DOD and contractor officials. The Department of Defense (DOD) typically relies on its prime contractors to plan for the procurement of specialty metals and ensure compliance with specialty metals' restrictions for the five weapon systems programs that GAO reviewed. For these programs, GAO found that DOD plays a limited role—primarily monitoring the availability of specialty metals and conducting periodic reviews of prime contractor quality assurance processes. GAO also reviewed contracts for these five programs and found they contained clauses that require prime contractors to procure specialty metals in compliance with domestic source restrictions, ensure that delivered items meet contract requirements as part of quality assurance, and maintain processes for future material needs. 
In turn, these prime contractors told GAO that they pass down the contract requirements—including those pertaining to specialty metals—to their subcontractors and defense suppliers, and require them to follow industry standards for quality management. These standards include, among other things, testing subcontractor processes to determine if they meet contractual specifications; reviewing required supplier certifications for items delivered under the contract to confirm compliance with all identified requirements; and rating subcontractors using performance metrics. Prime contractors for these programs also told GAO they use a risk-based approach to oversee subcontractors, including those suppliers at lower tiers. DOD recently improved its national security waiver process, but its dissemination of information contained in those waivers is limited. Since 2009—when specialty metals restrictions were changed and the exception for national security was added—DOD has granted six national security waivers to five different weapon system programs known to have procured noncompliant specialty metals. Five of the six waivers were for samarium-cobalt magnets, which were noncompliant largely due to a change in a previously allowed exception for these magnets. During its review, GAO identified weaknesses in DOD's waiver process, such as not having defined procedures for requesting waivers. In June 2014, DOD developed written guidance for program offices to follow when requesting these waivers. However, GAO also found that DOD does not have a mechanism to share information on national security waivers granted for noncompliant specialty metals. Standards for Internal Control in the Federal Government call for information to be recorded and communicated to management and others, including external parties who need it, such as program offices and suppliers, to help the agency achieve its goals. 
Disseminating non-sensitive information—including the names of programs that received waivers, sources of the noncompliant specialty metals, and corrective actions—to key stakeholders, such as DOD weapon system program offices and their defense suppliers, could help raise awareness of and compliance with the specialty metals restrictions. Moreover, greater awareness of supplier-base problems and broader dissemination of national security waiver information could assist DOD in better discovering potential vulnerabilities, such as systemic supply chain risks that could impact national security objectives. GAO recommends that DOD disseminate non-sensitive information within the department and its supplier base on the waivers it has granted for specialty metals. DOD concurred with the recommendation and plans to publish non-sensitive information.
From 1991 through 2002, smoking rates among youth fluctuated and reached their highest points around 1997. The estimated rate of current smoking among youth (defined as smoking one or more cigarettes during the previous 30 days) varied according to grade level (see fig. 1). For example, the rate among 8th grade students peaked at about 21 percent in 1996 before declining to about 11 percent in 2002. For 10th grade students, the smoking rate peaked at 30 percent in 1996 before declining to 18 percent in 2002. Similarly, smoking among 12th grade students peaked at about 37 percent in 1997 before declining to about 27 percent in 2002. HHS serves as the lead federal department for addressing the nation’s public health issues, including tobacco use. HHS is responsible for informing the public of the dangers of tobacco use and coordinating federal efforts to address tobacco use issues. Within HHS, CDC’s Office on Smoking and Health has been delegated the lead for all policy and programmatic issues related to the prevention and reduction of tobacco use and has primary responsibility within the federal government for tobacco use prevention efforts. Also within HHS, the Surgeon General serves as the nation’s spokesperson on matters of public health and reports on issues such as the health effects of tobacco use. Other HHS agencies, such as SAMHSA, NIH, and HRSA, support efforts to prevent and reduce tobacco use. Education, DOD, and DOJ also support programs and activities that aim to address tobacco use among youth. Several studies have highlighted the importance of addressing tobacco use among youth. In 1994, the Surgeon General released a report that focused on the use of tobacco among youth. The report highlighted several factors that increase the likelihood that youth will begin using tobacco. These factors include engaging in other unhealthy behaviors, such as substance abuse and violence; peer pressure to smoke; and cigarette advertising and promotion. 
In addition, the Surgeon General, CDC, NIH, and the Institute of Medicine have reported on approaches that can help prevent youth from starting to use tobacco and help existing users quit. For instance, they have reported on the demonstrated benefits of interventions such as implementing counter-marketing campaigns, using school-based educational programs in combination with providing youth with alternatives to the illicit use of tobacco, deglamorizing tobacco use, and restricting minors’ access to tobacco. According to the Surgeon General’s 1994 report, strategies for preventing and reducing tobacco use among youth should be multifaceted and involve collaborations among those that can influence the behavior and attitudes of youth, such as family members and educators. HHS led federal, state, and local agencies and nongovernmental organizations in developing a 10-year national plan, the Healthy People 2010 initiative, which includes goals for addressing tobacco use. The Healthy People 2010 initiative has identified tobacco use as one of 10 leading health indicators for the nation. Healthy People 2010 objectives related to tobacco use among youth include, among others, reducing the percentage of adolescents who smoked cigarettes in the past month and increasing the percentage of adolescents who try to quit smoking. Two HHS agencies, CDC and SAMHSA, administer programs that focus only on tobacco use. CDC’s NTCP targets youth within a broader mission of preventing and reducing tobacco use among the general population. SAMHSA oversees implementation of the Synar Amendment, which requires states to enact and enforce tobacco control laws prohibiting the sale of tobacco products to minors. Other programs and activities administered by HHS, DOD, DOJ, Education, and the Office of National Drug Control Policy (ONDCP) address tobacco use as part of a broader focus on unhealthy behaviors, such as substance abuse and violence. (See app. 
II for examples of federal programs that can address tobacco prevention and reduction among youth.) We identified two federal programs that focus only on tobacco use. The first, CDC’s NTCP, focuses on preventing and reducing tobacco use among the general population, but it also explicitly targets tobacco use among youth. NTCP provides funds through cooperative agreements to all states. In fiscal year 2002, NTCP provided about $58 million to states to address the program’s four goals: (1) prevent youth from starting to smoke, (2) help youth and adults quit smoking, (3) minimize the public’s exposure to secondhand smoke, and (4) identify and mitigate the factors that make some populations more likely to use tobacco than others. NTCP cooperative agreements specify the terms under which federal funds are provided to the states. Under NTCP, CDC encourages states to use multiple types of interventions in their efforts to prevent and reduce tobacco use. CDC has developed guidance intended to assist states in designing, implementing, and evaluating their individual tobacco control programs. 
For instance, CDC recommends that states establish comprehensive tobacco control programs that include certain components, such as

- community-based programs to reduce tobacco use that include a wide range of prevention activities, such as engaging youth in developing and implementing tobacco control interventions; conducting educational programs for young people, parents, school personnel, and others; and restricting access to tobacco products;
- school programs to implement school health policies that consist of tobacco-free policies, evidence-based curricula, teacher training, parental involvement, cessation services, and links between school and other community efforts and state media and educational campaigns;
- marketing campaigns to counter protobacco influences and increase prohealth messages and influences, including paid television, radio, billboard, and print media campaigns;
- cessation services to help people quit smoking;
- enforcement of tobacco control policies by restricting minors’ access to tobacco and restricting smoking in public places; and
- statewide efforts to provide localities with technical assistance on how to evaluate tobacco programs, promote media advocacy, implement smoke-free policies, and reduce minors’ access to tobacco.

CDC officials told us that CDC also provides training and technical assistance to states in designing, implementing, and evaluating their tobacco control programs. For example, in fiscal year 2000, CDC conducted three regional workshops for state health departments and education agencies aimed at helping such agencies develop coordinated plans to prevent youth from starting to use tobacco. According to CDC, representatives from 33 states participated in these workshops. The second federal program that focuses only on tobacco and aims to prevent tobacco use among youth is SAMHSA’s program to oversee state implementation of legislation commonly referred to as the Synar Amendment. 
This program is the only one we identified that focuses solely on tobacco use among youth. The Synar Amendment and its implementing regulation require states to enact and enforce laws that prohibit the sale of tobacco products to minors, conduct random inspections of tobacco retail or distribution outlets and estimate the percentage of retailers that illegally sell tobacco to minors, and report the results of their efforts to the Secretary of HHS. States are also required to report enforcement actions taken against those who violate state laws in order to receive certain federal grants. By the end of fiscal year 2003, states may have no more than 20 percent of retail tobacco outlets in violation of state laws that prohibit the sale of tobacco products to minors. To oversee states’ efforts to accomplish this, SAMHSA and the states negotiated interim annual target rates that states should meet. States may use a portion of their Substance Abuse Prevention and Treatment block grant to help fund the design and implementation of their inspection programs. For fiscal year 2002, the states reported that they planned to expend more than $5.4 million in block grant funds on Synar-related activities. Other federal programs aim to address tobacco use among youth as part of a broader focus on unhealthy behaviors. For example, CDC’s Coordinated School Health program provides grants to states to implement school health programs to prevent a range of unhealthy behaviors or conditions, such as drug, alcohol, and tobacco use; physical inactivity; poor nutrition; and obesity. In fiscal year 2002, CDC awarded grants to 22 states, with each state receiving approximately $400,000. CDC helps state education and health departments identify and implement health education curricula to provide youth with information and the decision-making, communication, and peer-resistance skills needed to avoid unhealthy behaviors. 
In addition, CDC provides guidance to state and local health education agencies on tobacco prevention programs in schools that covers policies, programs, and a tobacco-free environment. CDC periodically surveys the states, school districts, and schools on the health curricula they offer and on school health policies relating to tobacco prevention and reduction efforts. According to CDC, the information obtained through these survey efforts is used to assess trends in school health education programs. Education’s Safe and Drug-Free Schools and Communities program aims to prevent violence and drug, alcohol, and tobacco use in the nation’s schools. Under this program in fiscal year 2002, Education awarded more than $472 million in grants to state education departments and governors’ offices. Similarly, the Safe Schools/Healthy Students program, which is funded by Education, HHS, and DOJ, provides local education agencies with grants that support a variety of services designed to promote healthy childhood development and prevent substance abuse (which can include the use of tobacco) and violence. These services target preschoolers, school-aged children, and adolescents. The Safe Schools/Healthy Students program’s activities totaled about $172 million for fiscal year 2002. DOJ and DOD support drug prevention programs that also aim to prevent tobacco use among youth. For example, the Drug-Free Communities Support program, which is administered by ONDCP and DOJ, is designed to support the efforts of community coalitions that aim to prevent and reduce young people’s use of drugs, alcohol, and tobacco. These coalitions consist of youth, parents, health care professionals, educators, law enforcement officials, and other community partners. In fiscal year 2002, DOJ awarded about $46 million to community coalitions located in 50 states. 
Approximately $7 million was given in new awards to 70 community coalitions, and $39 million was given in renewed funding to 462 existing community coalitions. Another program, the Drug Education for Youth program (DEFY), which is sponsored by DOD and DOJ, targets youth aged 9 to 12 to improve awareness of the harmful effects of alcohol and other drugs, including tobacco. According to agency officials, the program aims to promote positive self-images and lifestyles. In fiscal year 2002, DOD funding totaled over $1 million for 55 local DEFY programs. DOJ provided approximately $850,000 in funding to implement 111 local DEFY programs. In addition to supporting programs that aim to address tobacco use among youth, HHS agencies conduct research on tobacco use and its health effects. NIH’s National Cancer Institute (NCI) has identified tobacco use among youth as one of its research priorities. In fiscal year 2002, NCI funded more than 40 grants, totaling almost $30 million, for research on ways to understand, prevent, reduce, and treat tobacco use among youth. Similarly, NIH’s National Institute on Drug Abuse (NIDA) supports research on effective tobacco use prevention and reduction interventions for youth. For example, NIDA established and funds a teen tobacco addiction treatment research center to examine methods of eliminating dependence on nicotine and assess the effectiveness of these strategies. The center is assessing the safe use and effectiveness of nicotine patches and gum for adolescents. According to NIDA, in fiscal year 2002, funding for its research projects that focused on substance abuse, including tobacco use among youth, totaled about $124 million. In fiscal year 1999, NCI and NIDA jointly established seven Transdisciplinary Tobacco Use Research Centers (TTURCs) at academic institutions in an effort to identify effective ways to prevent and reduce tobacco use. 
According to HHS officials, additional information on ways to reduce tobacco use among youth is needed because of the limited knowledge available about cessation interventions that work best for young people. The 5-year TTURCs research effort is designed to study new ways of preventing tobacco use and nicotine addiction. According to HHS officials, in fiscal year 2002, NCI and NIDA provided over $15 million to TTURCs, which included funding for research on youth and adolescent tobacco use and nicotine addiction at four of the seven centers. These four centers are conducting studies on adolescent smoking. According to NCI, one study found that students with high academic performance, perceived academic competence, and involvement in school-related clubs and sports teams were less likely to smoke. CDC also supports research on health promotion and disease prevention, including research on tobacco use among youth, through its network of 28 research centers that are affiliated with schools of public health, medicine, or osteopathy located throughout the country. According to CDC officials, these research centers focus on identifying effective prevention strategies that can be applied at the community level. One center is examining factors that can influence youth and young adults to start using tobacco, and two other centers are conducting research that examines youth cessation programs, according to CDC. HRSA is working with certain federally supported community health centers on a multiyear initiative to address health disparities among youth. HRSA officials said that the effort would involve developing interventions to address the needs of high-risk medical subpopulations, such as young people with asthma or cardiovascular conditions for whom tobacco use can pose especially high risks. In addition to research, HHS and other federal departments conduct a variety of tobacco-focused activities that aim to prevent and reduce tobacco use among youth. 
For example, officials from HHS, Education, and other federal departments, along with experts from national organizations and professional associations, developed guidance to help schools identify and implement strategies for preventing tobacco use among youth. Among other things, the guidelines recommend that schools develop and enforce a school policy on tobacco use, provide tobacco-use prevention education from kindergarten through 12th grade, provide instruction about the short- and long-term consequences of tobacco use, and provide training for teachers. Similarly, in 1997, SAMHSA issued guidance that describes strategies that communities can use to prevent and reduce tobacco use among youth. In other activities, HHS agencies develop and promote educational materials to prevent and reduce a range of unhealthy behaviors among adolescents, including tobacco use. For example, Girl Power!, a national public education campaign, is designed to prevent 9- to 13-year-old girls from using tobacco, alcohol, and illegal drugs and includes a Web site that offers articles, games, and quizzes that teach girls about the dangers of tobacco use. Similarly, CDC’s Tobacco Information and Prevention Source Web site offers a variety of educational materials for youth, such as tips on how to quit using tobacco and information on the health consequences of using tobacco. CDC also disseminates information for parents, such as a kit that offers advice on ways to increase parental involvement in their children’s lives and incorporate tobacco prevention messages into daily activities. In addition, DOD sponsors Web sites that include information on preventing and reducing tobacco use among youth and supports various youth activities that address unhealthy behaviors, including tobacco use. For example, one project identified was Smart Moves, which aims to prevent tobacco, alcohol, and drug use by bolstering youths’ self-esteem and their resistance to unhealthy behaviors. 
HHS agencies also support activities that use various media, such as print, radio, television, and videotapes, to counteract the impact of tobacco product marketing. For example, CDC supports a variety of entertainment-related outreach activities that enlist celebrities as spokespersons to deliver antismoking messages and to increase prohealth messages in entertainment programming. CDC also supports the Media Campaign Resource Center, a clearinghouse offering antitobacco media products developed for television, radio, print, and outdoor advertising. In addition, CDC and SAMHSA developed Media Sharp, a media literacy guide for educators and community leaders who work with middle school- and high school-age youth to dissuade youth from using tobacco. To monitor federal programs that aim to prevent and reduce tobacco use among youth, federal departments and agencies collect information on how their programs are being implemented by grantees and the effectiveness of grantees’ efforts in meeting national program goals. Federal departments and agencies obtain this information from various sources, such as grantee applications for federal funding, progress reports, site visits, and program evaluations. According to federal officials, the information is used to assist grantees in managing and evaluating their programs. To monitor the NTCP, CDC collects information on the design, implementation, and effectiveness of state tobacco control programs. CDC obtains this information through various sources, such as states’ applications for NTCP funding, state progress reports, periodic site visits, surveys, and program evaluations conducted by various states. For instance, the applications that states submit when applying for NTCP funding must include strategic plans that provide information on the design and implementation of their tobacco control programs. The plans must also include information on how states will achieve NTCP’s goals. 
According to CDC officials, other important sources of information are the biannual reports that the agency requires states to submit on the progress of their tobacco control programs. These reports provide CDC with additional information, such as enforcement strategies used to prevent the sale of tobacco products to minors, information campaigns to increase the public’s awareness of the health consequences of using tobacco, and efforts to promote tobacco-free schools and positive role models for youth. CDC also obtains information on state tobacco control programs through other sources. For example, CDC officials said that NTCP project officers, who are responsible for monitoring state tobacco control programs, visit each of their assigned states approximately every 12 to 18 months. CDC officials said that through these visits they obtain more in-depth information about the design and implementation of the states’ programs, and they gain a better understanding of the challenges that states may face in achieving NTCP’s goals. In addition, these officials said that they monitor the effects of state tobacco control programs through periodic national and state youth tobacco surveys. Through these surveys, CDC obtains information on changes in tobacco use among youth and their knowledge, attitudes, and behaviors towards tobacco use. CDC officials said that they work with the states to design the state surveys and to help states interpret and use the survey data. CDC officials also said that they have obtained useful information from evaluations that several states completed on the effectiveness of their tobacco control programs. According to CDC officials, the information they obtain has been used in various ways. 
For example, in developing its best practice guidance for comprehensive tobacco control programs, CDC used information from analyses of tobacco control programs in California and Massachusetts and CDC officials’ experience in providing technical assistance in other states. CDC officials also said that the agency has provided a variety of training and technical assistance to help states, among other things, adopt evidence-based interventions for preventing tobacco use. In addition, CDC developed guidance in 2001 on how states could evaluate their individual tobacco control programs. The guidance includes information on approaches for designing evaluations; measuring outcomes of specific program components; and analyzing, interpreting, and using evaluation results to improve operations and enhance the impact of tobacco control programs. In fiscal year 2003, CDC took action to collect additional information on the design, implementation, and effectiveness of state tobacco control programs. For instance, CDC now requires that states submit additional information in their biannual reports. CDC officials said that the expanded NTCP data collection effort should enable CDC to obtain a more comprehensive picture of state tobacco control programs and the extent to which program activities are consistent with NTCP’s goals. CDC officials said that they anticipate that these changes, along with the redesign of the NTCP information system, will facilitate more comprehensive comparisons within and across states and regions on progress towards reducing tobacco use. The changes should also enable CDC to better identify state-specific or systemic issues, according to these officials. In fiscal year 2003, CDC began requiring that each state dedicate staff to evaluate the state’s tobacco control program. Each state was required to submit detailed information with its NTCP funding application that described how it intended to evaluate the program’s effectiveness. 
The application had to include information on the specific performance indicators the state intends to use and its methodologies for collecting and analyzing data, projected time lines for completing evaluation efforts, and plans for using evaluation results to improve its program. CDC officials told us that they recognize that conducting program evaluations can present financial and methodological challenges for state tobacco control programs, but that CDC had instituted this requirement because evidence on the impact of individual state programs has been generally limited. These officials noted that while evaluations have been completed by eight states, the results of these evaluations and other studies provide only a limited picture of the impact of all states’ programs in achieving NTCP’s goals. To monitor state compliance with the requirements of the Synar Amendment and its implementing regulation, SAMHSA collects data on the design and implementation of state compliance efforts. The regulation requires that each state report to SAMHSA information on the state’s efforts to inspect retail tobacco outlets, including the state’s sampling methodology, inspection protocol, and inspection results. SAMHSA reviews the information to determine whether states have complied with requirements for enforcing state laws and conducting random inspections of retail tobacco outlets. In reviewing these data, SAMHSA determines whether a state’s estimated retailer violation rate meets negotiated annual targets and shows progress toward the 20 percent goal. Based on the latest data available at the time of our review, 49 states met their negotiated retailer violation rate targets for 2002. Federal agencies with programs that address tobacco use, along with other unhealthy behaviors among youth, obtain information on grantees’ efforts to design and implement their programs. They obtain this information by various means, such as periodic reports and visits to grantee sites. 
For example, DOJ requires community antidrug coalitions that participate in the Drug-Free Communities Support program to submit annual progress reports on their programs. As part of this reporting requirement, coalitions must report on certain measures of youth behavior, such as the age youth first started to use tobacco, the frequency of tobacco use in the past 30 days, and youths’ perceptions of tobacco-related risks. According to DOJ officials, the information obtained from reports and site visits is used to provide grantees with training and technical assistance. DOJ is also overseeing a 5-year evaluation of the effectiveness of this federal grant program. The evaluation, which is scheduled for completion in 2004, is designed to take into consideration both the similarities and differences among the coalitions and their communities and aims to assess the effectiveness of the coalitions’ efforts to reduce the use of tobacco, alcohol, and illicit drugs among youth. Similarly, to monitor their programs, DOJ and DOD contracted for evaluations of the effectiveness of some DEFY components. For instance, one study examined the effectiveness of the summer camp component in 1997 at 18 DOJ DEFY camps and 28 military DEFY camps. The study included the use of pre- and postcamp questionnaires to assess youths’ attitudes towards smoking cigarettes and to determine how often they smoked. HHS and other federal departments coordinate their efforts to prevent, treat, and reduce tobacco use among youth by participating on various committees and work groups and by collaborating on various programs, research projects, and activities. Although HHS has the lead responsibility for coordinating these efforts, some HHS officials stated that coordination among HHS agencies presents challenges. 
HHS leads efforts among its agencies and others to develop strategies for addressing tobacco use among youth in support of the Healthy People initiative, which includes objectives to reduce tobacco use among youth. As part of this initiative, representatives from various federal departments and nongovernmental organizations participate in work groups that focus on tobacco use objectives. For example, the Healthy People 2010 Tobacco Use Work Group, chaired by CDC, includes representatives from other HHS agencies as well as the Environmental Protection Agency (EPA), the Federal Trade Commission, and nonfederal organizations. The work group meets periodically to discuss strategies and challenges in addressing issues related to tobacco use. HHS also plays a leadership role in the Youth Tobacco Cessation Collaborative. Established in 1998, the collaborative brings together CDC, NCI, the National Institute of Child Health and Human Development (NICHD), NIDA, the National Heart, Lung, and Blood Institute (NHLBI), and several nonfederal organizations to help ensure young tobacco users’ access to cessation interventions. In 2000, the collaborative published an action plan to facilitate planning and priority-setting on the need for tobacco cessation services for youth. In addition, three members of the collaborative—CDC, NCI, and the Robert Wood Johnson Foundation—are working together on the Helping Young Smokers Quit initiative, a 4-year project that aims to identify, characterize, and evaluate the effectiveness of various youth cessation programs. Other work groups focus on broader adolescent health issues that include tobacco use among youth. For example, both the Healthy People 2010 Adolescent Health Work Group, cochaired by CDC and HRSA, and the related National Initiative to Improve Adolescent Health by 2010 aim to foster greater involvement by various professions to improve the overall health of adolescents, in part by reducing their use of tobacco. 
According to HRSA officials, members of the national initiative are trying to educate health care and other professionals on the importance of screening for tobacco use and other unhealthy behaviors during routine health care visits, providing counseling on the benefits of quitting tobacco use, and providing referrals for youth, their parents, and other family members to tobacco cessation services. As part of the national initiative, CDC, HRSA, and the American Academy of Pediatrics are collaborating on the development of a prevention guide to help pediatricians address unhealthy behaviors among youth, including tobacco use. In 1984, the Congress passed legislation requiring, among other things, that HHS establish an interagency committee to coordinate the department’s research, educational programs, and other smoking and health efforts with similar efforts of other federal departments and nonfederal organizations. As a result, in 1985, HHS established the Interagency Committee on Smoking and Health. According to CDC officials, the committee brings together representatives of federal agencies and nonfederal organizations involved in tobacco use issues and serves as a forum for committee members and the public to share information and discuss a variety of tobacco-related issues and efforts. Committee meetings that have specifically focused on tobacco use among youth have covered such topics as the health effects of smoking on young people, the sale of cigarettes to minors, and strategies for preventing tobacco use. Federal departments also collaborate on efforts to prevent and reduce tobacco use among youth by jointly administering programs, conducting research, and supporting education and outreach activities. For example, Education, DOJ, and HHS jointly administer the Safe Schools/Healthy Students program. 
Through interagency agreements, Education handles grants management activities, HHS provides technical advice and financial assistance, and DOJ oversees program evaluation efforts. Similarly, for the Drug-Free Communities Support program, ONDCP directs the program and through an interagency agreement transfers funds to DOJ to cover grant awards, grants management, and evaluation activities. Both ONDCP and DOJ provide technical assistance to program grantees. HHS agencies also coordinate on efforts to jointly support research on tobacco use prevention and cessation. For example, in addition to the NCI- and NIDA-supported TTURCs initiative, NCI led the creation of an NIH-wide Tobacco and Nicotine Research Interest Group in January 2003. According to NCI officials, the group was established to leverage expertise and resources across NIH for tobacco research. In addition to NCI, representatives from other NIH institutes, such as NICHD, NIDA, NHLBI, and the National Institute of Dental and Craniofacial Research (NIDCR), have participated in the group. Representatives from CDC are also participating in the group’s meetings. Furthermore, HHS agencies, Education, ONDCP, and nonfederal organizations collaborate on education and outreach activities aimed at discouraging youth from starting to use tobacco and encouraging existing users to quit. For example, CDC and Education collaborated on the development and dissemination of a guide for parents on how to address their children’s health needs, including preventing and reducing tobacco use. Table 1 highlights various education and outreach activities aimed at preventing and reducing tobacco use among youth that HHS and other federal departments and agencies work on together. HHS officials said that coordinating on tobacco-related issues within HHS presents challenges. 
They pointed out that, although multiple HHS agencies have programs and other efforts to address the prevention and reduction of tobacco use, the missions and funding priorities of the agencies differ. For example, CDC officials told us that they had initiated discussions in fiscal year 2003 with HRSA to collaborate on offering tobacco prevention and cessation services to underserved populations that obtain health care through HRSA’s network of community health centers. However, this effort has been delayed largely due to HRSA’s competing funding priorities and limited resources. In another instance, NCI officials noted that NIDA and NIDCR decided to fund a proposal to translate research findings on alcohol, tobacco, and other drug prevention and treatment to clinical dental practice settings. However, according to an NCI official, NCI did not learn about the proposal in time to consider it for fiscal year 2003 funding. We provided a draft of this report to HHS, DOD, DOJ, and Education for comment. DOD concurred with the report as written and DOJ did not have comments. HHS and Education provided technical comments that we incorporated as appropriate. In written comments, HHS stated that the report was very informative and provided a thorough overview of nicotine and tobacco activities related to youth, but did not include programs within CMS that are a substantial element of HHS tobacco prevention. Specifically, HHS stated that under Medicaid, states are required to cover certain smoking cessation services for children and adolescents. Including joint federal-state programs that finance health insurance, such as Medicaid and the State Children’s Health Insurance Program, was beyond the scope of our review. HHS also noted that the report did not include information about the challenges other federal agencies experienced in coordinating tobacco-related issues. We discussed coordination of tobacco-related issues with officials from DOD, DOJ, and Education. 
However, these officials did not cite any challenges they had experienced with coordinating their tobacco-related efforts. As agreed with your office, unless you release its contents earlier, we plan no further distribution of this report until 30 days after the issue date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Secretary of Defense, the Attorney General, the Secretary of Education, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7101. An additional contact and staff acknowledgments are provided in appendix III. To do our work, we obtained and reviewed program documents, strategic and performance plans, pertinent program reports and special studies, surveillance and other data, and federal Web sites from the Department of Health and Human Services (HHS) including the Office of the Secretary, the Office of the Assistant Secretary for Planning and Evaluation, Office of Public Health and Science, Agency for Healthcare Research and Quality, Centers for Disease Control and Prevention (CDC), Centers for Medicare & Medicaid Services, Health Resources and Services Administration (HRSA), Indian Health Service, National Institutes of Health (NIH), and Substance Abuse and Mental Health Services Administration (SAMHSA); the Departments of Defense (DOD), Justice (DOJ), and Education; the Environmental Protection Agency; the Federal Trade Commission; and the Office of National Drug Control Policy (ONDCP). We also reviewed the relevant literature and documents prepared by federal interagency committees and work groups that focused on the prevention and reduction of tobacco use among youth and adults. 
To identify federal programs that aim to prevent and reduce tobacco use among youth (defined as children and adolescents under age 18), we reviewed the Catalog of Federal Domestic Assistance, a database of federal grant programs. We also reviewed pertinent documents and federal Web sites. After identifying federal programs, we interviewed and collected information from federal program officials to confirm that these programs supported efforts to prevent and reduce tobacco use among youth. As a result, we focused on four federal departments: HHS and its component agencies—CDC, SAMHSA, NIH, and HRSA; Education; DOJ; and DOD. We then obtained more detailed information on the programs they fund. We interviewed officials in HHS, DOD, DOJ, and Education and obtained information on program characteristics, including the purpose, target audience, and program and financial requirements. We also obtained information on research and activities that involve federal departments and agencies, such as education and outreach efforts intended to prevent the initiation of tobacco use among youth and help youth quit tobacco use. In conducting this work, we also reviewed strategic and annual performance plans, along with budgetary and other pertinent documents, including national action plans and tobacco use prevention and cessation guidance. Where available, we obtained fiscal year 2002 funding information on the federal programs and research that we identified. However, we were unable to determine the extent of spending by federal agencies on efforts to prevent and reduce tobacco use among youth because, in many instances, funding information covers more than the prevention and reduction of tobacco use among youth. The programs, research, and activities that we discuss in this report do not represent an exhaustive list of all federal efforts to prevent and reduce tobacco use among youth, but highlight a range of such efforts. 
To determine how federal departments and agencies monitor programs that aim to prevent and reduce tobacco use among youth and the types of monitoring information that departments and agencies collect, we obtained and reviewed descriptive information on federal departments and agencies’ monitoring efforts. Specifically, we reviewed strategic plans, annual performance plans and reports, performance monitoring reports, program evaluation guidance, and copies of federal and state program evaluation reports. We also interviewed program officials to obtain a more detailed understanding of their monitoring efforts. To determine how federal departments and agencies coordinate their efforts to address youth tobacco use, we focused our attention on identifying the key coordination mechanisms and the results of such coordination. Specifically, we reviewed strategic and annual performance plans and reports, interagency agreements, memorandums of understanding, minutes of interagency meetings, and other pertinent documents. We also interviewed federal program officials and obtained information from these officials describing the characteristics of various federal efforts, including information on purpose, federal agencies involved, and the target audience. We also obtained their perspectives on any factors presenting coordination challenges related to addressing youth tobacco use. We conducted our work from January 2003 through October 2003 in accordance with generally accepted government auditing standards. Our findings are limited to the select examples identified and thus do not necessarily reflect the full scope of federal programs and other activities related to preventing and reducing tobacco use among youth. We did not assess the effectiveness of federal programs, monitoring efforts, or coordination activities. Table 2 lists selected federal grant programs that may be used to address tobacco use among youth. The list includes programs from four departments. 
In addition to the person named above, contributors to this report were Alice London, Donna Bulvin, Krister Friday, and Lawrence Solomon.
Tobacco use is the leading cause of preventable death in the United States. The Centers for Disease Control and Prevention (CDC) reported that, on average, over 440,000 deaths and $76 billion in medical expenditures were attributable to cigarette smoking each year from 1995 through 1999. Reducing tobacco-related deaths and the incidence of disease, along with the associated costs, represents a significant public health challenge for the federal government. Most adults who use tobacco started using it between the ages of 10 and 18. According to a Surgeon General's report, if children and adolescents can be prevented from using tobacco products before they become adults, they are likely to remain tobacco-free for the rest of their lives. GAO was asked to provide information on federal efforts to prevent and reduce youth smoking. Specifically, this report describes (1) federal programs, research, and activities that aim to prevent and reduce tobacco use among youth, (2) the efforts of federal departments and agencies to monitor their programs, and (3) the coordination among federal departments and agencies in efforts to prevent and reduce tobacco use among youth. Some federal programs, research, and activities that aim to address tobacco use among youth focus only on tobacco, while others aim to address tobacco use as part of broader efforts to address unhealthy behaviors such as substance abuse and violence. Two federal programs within the Department of Health and Human Services (HHS) focus only on tobacco use. CDC's National Tobacco Control Program (NTCP) focuses on preventing and reducing tobacco use among the general population and explicitly targets youth. The Substance Abuse and Mental Health Services Administration's program to oversee implementation of a provision of federal law, commonly referred to as the Synar Amendment, focuses only on tobacco use among youth. 
The Synar Amendment requires states to enact and enforce laws prohibiting the sale of tobacco products to minors. In addition to these tobacco-focused programs, HHS, and the Departments of Defense (DOD), Justice (DOJ), and Education sponsor programs that include tobacco use as part of broader efforts to address unhealthy behaviors among youth, such as substance abuse and violence. For example, Education's Safe and Drug-Free Schools and Communities program is designed to prevent substance abuse and violence. HHS agencies, such as the National Institutes of Health, conduct research on tobacco use and nicotine addiction among youth and its health effects on youth. HHS agencies and other federal departments also support activities to prevent and reduce tobacco use among youth, such as education and outreach efforts. HHS and its component agencies coordinate tobacco-related efforts with other federal, state, and local government agencies and nongovernmental entities. Federal departments and agencies collect a variety of information to monitor how programs that aim to address tobacco use among youth are being implemented by grantees and the effectiveness of grantee efforts in meeting program goals. The information is collected through various means, including grant applications, progress reports, periodic site visits, and program evaluations. For example, to monitor NTCP, CDC requires states to submit biannual reports on the implementation of state NTCP-supported tobacco control programs. The information that federal departments and agencies collect on these programs is also used to provide training and technical assistance to grantees on topics such as conducting program evaluation. In commenting on a draft of this report, HHS stated that the report was very informative but it did not include programs like Medicaid that are a substantial element of HHS tobacco prevention efforts. 
Including programs that finance health insurance such as Medicaid, however, was beyond the scope of our review. Also, HHS noted that we did not include information about the challenges other federal agencies face in coordinating tobacco-related issues, but DOD, DOJ, and Education did not describe such challenges. DOD and DOJ had no comments on the report, and HHS and Education provided technical comments that we incorporated as appropriate.
IRS’ mission is to provide taxpayers with top-quality service by helping them to understand and meet their tax responsibilities and by applying the tax law with integrity and fairness. In fiscal year 2000, IRS collected over $2 trillion in tax revenue, issued about $194 billion in tax refunds, and had net taxes receivable at year-end of $22 billion. Although most of the revenue was collected by intermediaries such as financial depository institutions and transferred directly to the Department of the Treasury’s general fund, IRS offices and lockbox banks collected $435 billion in fiscal year 2000. IRS has 10 campuses nationwide that have collection, refund, and enforcement responsibilities. IRS also has other field offices to assist taxpayers and perform collection and enforcement activities. Ten commercial lockbox banks also receive and process taxpayer receipts, then forward the data to IRS for input and processing. In response to congressional concerns as embodied in the Internal Revenue Service Restructuring and Reform Act of 1998, IRS instituted a reorganization that has significantly affected the roles and responsibilities of its offices. Fiscal year 2000 marked the first time IRS was able to produce combined financial statements covering its tax custodial and administrative activities that were fairly stated in all material respects. This achievement required extraordinary human effort and extensive reliance on compensating processes to work around IRS’ serious system and control weaknesses to derive reliable year-end balances for its financial statements. However, this approach neither fixes the fundamental weaknesses nor produces the reliable, useful, and timely financial and performance information IRS needs for ongoing decision-making consistent with the CFO Act of 1990. 
The objectives of this report are to (1) provide a status of previously reported internal control and compliance issues and related recommendations and (2) present new issues identified during our audit of IRS’ fiscal year 2000 financial statements along with new recommendations. Appendix I provides further details on our scope and methodology. We performed our work from April 2000 through February 2001 in accordance with U.S. generally accepted government auditing standards. During fiscal year 2000, IRS continued to have serious internal control deficiencies that affected its reporting and management of unpaid assessments. IRS’ lack of an appropriate general ledger system prevented it from properly and routinely classifying unpaid assessments without substantial use of specialized computer programs and manual intervention. Additionally, significant delays and errors in recording taxpayer payments and other information adversely affected the accuracy of taxpayer accounts and thus IRS’ ability to ensure taxpayers were not unduly harmed or burdened. Also, the lack of valid and timely cost-benefit data hindered IRS’ ability to make or justify resource allocation decisions that directly affect the management of unpaid assessments and, thus, the collection of federal revenue. Collectively, these issues are indications of serious internal control deficiencies and constitute a material weakness in unpaid assessments. Additionally, the continued existence of these issues could result in lost revenue to the government, erode taxpayer confidence in the equity of the tax system, and adversely affect future compliance. Table 1 summarizes the issues we identified related to unpaid assessments, their effects, and IRS’ actions to address these issues. These issues were also identified in prior years’ audits, for which recommendations have already been made. Consequently, we are not making any new recommendations related to unpaid assessments. 
Appendix II lists these previous recommendations and IRS’ actions to address them. In an integrated financial management system, the general ledger is supported by subsidiary ledgers, which contain detailed records of transactions and automatically update the appropriate general ledger account balances as transactions occur. Throughout the year, detailed records in the subsidiary ledger would then support key account balances in the general ledger. However, throughout fiscal year 2000, IRS continued to lack an effective subsidiary ledger system that could accumulate and track the status of unpaid assessments on an ongoing basis. This deficiency continued to necessitate the use of an extensive workaround process in order for IRS to derive the balances in the three categories of unpaid assessments as defined by federal accounting standards—taxes receivable, compliance assessments, and write-offs—for year-end financial reporting. This workaround process is costly, labor-intensive, and time-consuming. It involves the use of a specialized computer program to extract all unpaid assessment data from IRS’ master files—its only detailed database of taxpayer information—and classify them for financial reporting. However, the master files do not contain all the details necessary to properly and fully classify unpaid assessment accounts. Therefore, the workaround process also includes the need to select statistical samples of IRS’ unpaid assessments and manually review the sampled accounts to (1) determine their proper classification and (2) estimate collectibility for those assessments properly classified as taxes receivable. As in past years, this statistical sampling has resulted in the need to materially adjust the amounts generated by the computer extraction program—by tens of billions of dollars—to produce reliable amounts for taxes receivable and other unpaid assessments. 
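The three-way classification that drives this workaround can be illustrated with a minimal sketch. The record layout, field names, and decision logic below are simplified assumptions for illustration, not IRS's actual master file structure; they reflect only the broad distinction among the three categories described above (taxes receivable, compliance assessments, and write-offs).

```python
from dataclasses import dataclass

# Hypothetical, simplified record; actual master file accounts carry far
# more detail, which is why manual case review is needed in practice.
@dataclass
class UnpaidAssessment:
    amount: float
    agreed_or_court_ruled: bool       # taxpayer agreement or court ruling exists
    has_collection_potential: bool    # any future collection potential

def classify(a: UnpaidAssessment) -> str:
    """Assign one of the three financial-reporting categories."""
    if not a.agreed_or_court_ruled:
        return "compliance assessment"  # not yet agreed or adjudicated
    if not a.has_collection_potential:
        return "write-off"              # owed, but no collection potential
    return "taxes receivable"

cases = [
    UnpaidAssessment(10_000.0, True, True),
    UnpaidAssessment(5_000.0, True, False),
    UnpaidAssessment(2_000.0, False, True),
]
print([classify(c) for c in cases])
# → ['taxes receivable', 'write-off', 'compliance assessment']
```

Because the master files lack reliable values for fields like these, IRS must sample accounts and fill them in manually before the year-end totals can be computed.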
In fiscal year 2000, of a total of 474 unpaid assessment sample items selected for detail testing that IRS’ computer extraction program originally classified as taxes receivable, 158 items were misclassified: they were actually write-offs or partial write-offs or compliance assessments, or were deemed not to be unpaid assessments at all. Based on our work, we estimate that 12.6 percent of unpaid assessments originally classified by IRS’ computer extraction program as taxes receivable were misclassified. Figure 1 below illustrates the problem by showing the level of adjustments needed to the amounts generated by the computer extraction program in fiscal year 2000 to arrive at reliable amounts for each category of unpaid assessments. Although the workaround processes allowed IRS to report reliable year-end information for unpaid assessments in its financial statements, these balances are reliable only for a single point in time and are not available until several months after the end of the fiscal year. Additionally, the magnitude of the adjustments needed demonstrates that the data provided by IRS’ automated system for unpaid assessments, unless supplemented by these workaround procedures and adjustments, are unreliable for financial reporting. They cannot be used to track the overall status of IRS’ unpaid assessments and cannot be relied upon by IRS management and Congress to make policy and budgetary decisions. Maintaining accurate taxpayer accounts is important for properly managing activity and ensuring the fair and equitable treatment of taxpayers, and it is necessary for reliable financial reporting. However, significant delays and errors in updating taxpayers’ accounts further exacerbated problems related to the reliability of unpaid assessment balances in general and the reliability and management of individual taxpayer accounts in particular throughout the fiscal year. 
Errors and delays in recording activity also continued to lead to instances in which tax liens were not promptly released or not released at all during the period covered by our audit. These conditions continued to result in instances of taxpayer burden and lost opportunities to collect outstanding taxes owed. As in previous years, we found delays and errors in recording payments for unpaid payroll taxes where separate accounts are established and assessments recorded for a related tax liability. IRS' systems cannot automatically link to each other the multiple assessments made for the one tax liability. Consequently, IRS’ systems are unable to automatically reduce the balance in the related account (or accounts) if the business or an officer pays some or all of the outstanding taxes. To compensate, IRS established procedures to manually link the related accounts. However, we still found many instances in which payments were not posted to accounts that had been linked. The statistical sample of 474 unpaid tax assessment cases reviewed included 68 unpaid payroll cases involving multiple assessments. Of these 68, we found that 29 cases contained payments that IRS either had not recorded, or had failed to record in a timely manner, to all related accounts. Some of these amounts were paid by taxpayers in the late 1980s. Based on the results of our work, we estimate that 42 percent of the population of unpaid payroll tax accounts involving multiple assessments as of September 30, 2000, had this characteristic. Moreover, of these 29 cases, 28 (96 percent) had a manual code cross-referencing them to related accounts, yet the payments were still not recorded in all of the related accounts. We also found other delays and errors. For example, we found that IRS’ failure to enter or reverse status or freeze codes into the taxpayers’ accounts resulted in improper refunds being issued—in two cases, more than $4,000 each—to taxpayers who had other outstanding liabilities. 
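The cross-referencing problem described above can be sketched in a few lines. The class and field names here are invented for illustration; the point is only that one payroll tax liability can generate assessments in several accounts (the business and its officers), and a payment to any one of them should reduce the balances of all the linked accounts.

```python
# Illustrative sketch of linked payroll assessments; names are hypothetical.
class LinkedAssessments:
    """One underlying tax liability mirrored across related accounts."""

    def __init__(self, liability: float, account_ids: list) -> None:
        self.liability = liability
        # Each related account initially carries the full outstanding amount.
        self.balances = {acct: liability for acct in account_ids}

    def post_payment(self, amount: float) -> None:
        # Correct posting: every related account reflects the reduced
        # liability, since all represent the same underlying debt.
        self.liability = max(0.0, self.liability - amount)
        for acct in self.balances:
            self.balances[acct] = self.liability

case = LinkedAssessments(50_000.0, ["business", "officer_1", "officer_2"])
case.post_payment(20_000.0)
print(case.balances)
# → {'business': 30000.0, 'officer_1': 30000.0, 'officer_2': 30000.0}
```

The failures GAO found correspond to the payment reducing only one account's balance while the cross-referenced accounts continued to show the full amount, making those accounts appear collectible when they were not.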
In another case, IRS recorded an estate payment of $68 million to the wrong taxpayer account. Though the taxpayer’s estate was owed a refund of almost $7 million, this error was not corrected until almost 2 years later, and thus the refund was not issued for nearly 2 years after it was owed. Delays and errors in recording activity in taxpayer accounts complicate IRS’ efforts to derive a reliable balance for taxes receivable and other unpaid assessments in its financial statements. The accuracy of taxpayer accounts affects the determination of both the appropriate classification of these accounts under federal accounting standards and the basis for estimating collectibility for those accounts determined to represent taxes receivable. For example, to determine whether an unpaid payroll tax liability related to a defunct business should be classified as a write-off, IRS must first determine that no outstanding penalty assessments against officers of the business exist or have any future collection potential. If the accounts representing penalty assessments against officers continue to show outstanding balances solely because payments have not been appropriately recorded in these accounts, IRS could erroneously conclude that the unpaid tax owed by the business still has some collection potential from the officers and thus erroneously classify the account as a tax receivable. Delays in updating taxpayer accounts and taking appropriate actions also led to instances in which IRS did not release federal tax liens applied against the property of taxpayers within 30 days after the taxpayers had satisfactorily discharged their tax liabilities as required by Section 6325 of the Internal Revenue Code. In fiscal year 2000, we found that IRS continued to experience significant delays in releasing some tax liens. 
Specifically, in 3 of the 38 tax lien cases we reviewed in fiscal year 2000, we found that it took IRS more than 100 days, and in one case 583 days, to release the liens against the taxpayers’ properties after the taxpayers had satisfied their outstanding tax liabilities. Based on our work, we estimate that during fiscal year 2000, for 11 percent of resolved unpaid assessment cases that had tax liens, IRS did not release the liens within the 30-day requirement. The failure to promptly release liens could cause undue hardship and burden to taxpayers who may want to sell property or apply for commercial credit. As with any large agency, IRS is confronted by the ongoing management challenge of allocating its limited resources among competing priorities. However, IRS does not have the management data necessary to prepare reliable cost-benefit analyses to make more informed decisions about how best to allocate its resources. Consequently, IRS is hindered in its ability to determine whether it is devoting the appropriate level of resources to identifying and pursuing collection of unpaid taxes relative to the costs and potential benefits involved. During fiscal year 2000, we continued to find that IRS was closing delinquent tax cases without working them—that is, without making collection contact with taxpayers through either telephone calls or field visits. This type of case closure is referred to as “shelving.” The process of shelving cases began in mid 1999 in response to an increasing inventory workload and IRS’ assessment that resource constraints and decisions regarding where to deploy these resources would not permit it to actively pursue the cases. According to IRS records, as of September 30, 2000, 1.8 million cases totaling $8.6 billion—compared to 648,000 cases totaling $2.4 billion at September 30, 1999—were shelved because IRS judged that resource constraints would not allow it to actively pursue collection on these cases. 
We also continued to find unpaid assessment cases that had collection potential but were not being actively worked by IRS. We found at least six cases in our testing of unpaid assessments constituting taxes receivable for which information in the case files indicated some collection potential, but for which IRS had taken no collection action. In two of these cases, IRS was not actively pursuing collection from taxpayers who owed $23,000 and $88,000, respectively, in outstanding taxes and who each had annual incomes in subsequent years of at least $110,000. How IRS derives its balance for taxes receivable in its financial statements is affected by actions taken by IRS to collect outstanding taxes. In estimating collectibility for those accounts in its statistical samples that are appropriately classified as taxes receivable, IRS reviews case file information and considers whether the agency is pursuing collection through such means as levies, seizures, offers-in-compromise, or installment agreements. To the extent these files contain no evidence of such efforts, IRS must assess collectibility for the account at zero. This ultimately affects the balance of both net taxes receivable and the related allowance for doubtful accounts reported in its financial statements. IRS’ failure to pursue delinquent taxpayers with at least some ability to pay is part of a broader and continued decline in IRS’ enforcement activities and disposition of delinquent tax cases. For example, according to IRS records, between fiscal years 1998 and 2000, enforcement activities such as levy notifications experienced a substantial decline, from more than 9 percent to less than 1 percent of unpaid assessment accounts. During the same period, the dispositions of delinquent accounts and investigations as a percentage of total outstanding cases decreased from 6.1 to 3.5 percent, a reduction of more than 42 percent. 
According to IRS records, collections on delinquent taxpayer accounts also decreased by 28 percent during this period, from $5.3 billion in fiscal year 1998 to $3.8 billion in fiscal year 2000. While there is a point at which it ceases to be cost-effective to pursue collection, we believe that these decisions should be based on reliable cost-benefit data. Without valid cost-benefit analyses, IRS is hindered in its ability to make sound comparisons among competing priorities and to most effectively allocate resources among these priorities. One element that is critical to such a cost-benefit analysis is a measure of taxpayers’ voluntary compliance with the nation’s tax laws. However, as we have previously reported, IRS lacks such a measure. Consequently, it does not know the impact of the recent declines in enforcement activities and delinquency collections on taxpayer compliance. Congress and tax practitioners have expressed concerns that declines in pursuing potential unpaid taxes and in enforcing and collecting on delinquent accounts may increase incentives for taxpayers either to not report or to underreport their tax obligations. The lack of reliable cost-benefit information with which to make informed decisions could result in billions of dollars in outstanding amounts going uncollected and could lead to further erosion in taxpayers’ confidence in the equity of the tax system and adversely affect future compliance. During fiscal year 2000, IRS disbursed over 101 million tax refunds totaling about $194 billion. However, because of long-standing weaknesses in IRS’ controls over refund disbursements and other management challenges, the federal government continued to be exposed to material losses through the issuance of improper refunds, particularly with respect to EITC claims. 
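The percentage declines cited above follow directly from the reported figures; a short calculation confirms them (dollar figures in billions, disposition figures in percent of total outstanding cases, all taken from the text).

```python
# Decline in collections on delinquent accounts, FY 1998 to FY 2000.
collections_fy1998, collections_fy2000 = 5.3, 3.8  # billions of dollars
decline = (collections_fy1998 - collections_fy2000) / collections_fy1998
print(f"{decline:.0%}")  # → 28%

# Decline in dispositions of delinquent accounts and investigations.
dispositions_fy1998, dispositions_fy2000 = 6.1, 3.5  # percent of total cases
reduction = (dispositions_fy1998 - dispositions_fy2000) / dispositions_fy1998
print(f"{reduction:.1%}")  # → 42.6%
```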
Time constraints, high volume, reliance on information supplied by taxpayers, and the timing of filing of information returns by third parties create inherent limitations in IRS’ options for addressing the problem of improper refunds. Consequently, in fiscal year 2000 IRS continued to (1) issue improper refunds associated with invalid EITC claims and (2) rely extensively on post-refund (detective) controls that were not fully effective in identifying and limiting the losses associated with improper refunds. This, in turn, continued to expose the government to financial losses, possibly in the billions of dollars, through the disbursement of improper refunds. Table 2 summarizes issues we found relating to refund processing controls, their effects, and IRS’ actions to address these issues. These issues were also identified in prior years’ audits, for which recommendations have already been made. Consequently, we are not making any new recommendations related to refund processing controls. Appendix II lists the previous recommendations and IRS’ actions to address them. The options available to IRS in its efforts to ensure that only valid refunds are disbursed are currently limited. For example, while it processes hundreds of millions of tax returns each filing season, IRS must also issue refunds within certain time constraints or be subject to interest charges. At the same time, IRS must contend with the fact that third-party information, such as Forms 1099, is not required to be filed prior to the start of the tax filing season. Comparison of such information with tax return data is problematic because IRS does not have time to prepare the third-party data for matching prior to the receipt of individual tax returns. Nonetheless, IRS does have some preventive controls that, if effectively implemented, could help to reduce the level of risk associated with issuing improper refunds related to EITC claims. 
For example, IRS’ Examination Branch is responsible for performing examinations on tax returns with potentially invalid EITC refund claims to determine the validity of the claim. However, it has not performed a cost-benefit analysis to determine whether it is focusing the appropriate level of resources on this effort. Without this, IRS is unable to determine the extent to which refunds associated with invalid EITC claims could have been prevented or minimized had IRS devoted more resources to its examination efforts. The Electronic Fraud Detection System (EFDS) is an automated screening tool IRS’ Criminal Investigation Division (CI) uses to identify EITC refund claims with the highest potential to be fraudulent or invalid. CI uses EFDS to score each EITC claim, using a set of screening criteria. CI retains those cases that indicate a high potential for fraud for follow-up and forwards all other cases that score above a certain level to the Examination Branch. For each of its 10 campuses, the Examination Branch determines a set number of cases that it perceives as the workload each campus’s resources can handle. It then refers cases to each campus for examination up to that campus’s established workload amount. During fiscal year 2000, the Examination Branch reduced the number of cases referred by CI for examination by choosing a higher minimum score level for each case and reviewing other factors such as how recently the taxpayer was last examined. Additionally, it discontinued referring cases associated with a particular campus once it reached the workload level it established for that campus. Consequently, the number of EITC refund claims IRS examined was predetermined by available resources rather than by an analysis of the optimum score level for selecting cases, one that weighs the expected yield at each score level against the associated resource cost.
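The selection process described above, a minimum score threshold combined with a per-campus workload cap, can be sketched in a few lines. This is an illustrative reconstruction, not IRS’ actual EFDS logic; the scores, threshold, and capacity below are invented values.

```python
# Illustrative sketch of score-threshold, capacity-capped case selection.
# Scores, the minimum score, and campus capacity are hypothetical values,
# not actual EFDS screening criteria.
def select_for_exam(scored_claims, min_score, campus_capacity):
    """Refer the highest-scoring claims above the threshold, up to capacity."""
    eligible = [c for c in scored_claims if c["score"] >= min_score]
    eligible.sort(key=lambda c: c["score"], reverse=True)
    return eligible[:campus_capacity]

claims = [{"id": i, "score": s} for i, s in enumerate([55, 90, 72, 88, 61, 95])]
referred = select_for_exam(claims, min_score=70, campus_capacity=3)
print([c["score"] for c in referred])  # -> [95, 90, 88]
```

Raising the minimum score or lowering the capacity shrinks the referred set, which is the effect the report describes for fiscal year 2000: the number of claims examined is driven by available capacity rather than by expected yield.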
The government could be losing billions of dollars through improper refunds associated with invalid EITC claims. For example, in a study of tax year 1997 returns, IRS estimated that of approximately $30.3 billion in EITC claims received, about $9.3 billion (30.6 percent) were invalid claims. IRS did not know the exact amount of the related improper refunds, but based on IRS’ fiscal year 1998 refund rate of about 78 percent of EITC claims, we estimate the amount of improper EITC refunds to be about $7.3 billion. In the same study, IRS estimated that it would not be able to recover 84 percent of the total invalid EITC claims. Applying this rate to the refunds portion only, we estimate that $6.1 billion of the improper refunds could be unrecoverable. With such a potential for invalid refunds, IRS must better ensure that it is devoting the appropriate level of resources to examining these claims. IRS’ primary detective controls are its automated matching programs, which match tax returns against third-party data. Identified discrepancies may indicate underreported tax liabilities and possible improper refunds, to the extent that the underreporting resulted in refunds being disbursed. IRS has separate automated matching programs for individual and employer tax returns, which it runs several months after the returns are filed. However, IRS did not perform follow-up examinations on millions of identified tax returns estimated to have billions of dollars of underreported tax liabilities. As a result, to the extent these taxpayers had received improper refunds by underreporting their taxes, IRS did not pursue recovery of these refunds. Table 3 presents IRS’ workload for the matching program for individual returns referred to as the Automated Underreporter Program (AUR).
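The dollar figures in the estimate above follow from straightforward multiplication of the study’s percentages; a quick check, rounding to one decimal at each step as the report’s rounded figures imply (all amounts in billions of dollars):

```python
# Figures from the tax year 1997 EITC study cited above (billions of dollars).
total_claims = 30.3        # EITC claims received
invalid_rate = 0.306       # 30.6 percent estimated invalid
refund_rate = 0.78         # fiscal year 1998 refund rate on EITC claims
unrecoverable_rate = 0.84  # share IRS estimated it could not recover

invalid_claims = round(total_claims * invalid_rate, 1)           # ~9.3
improper_refunds = round(invalid_claims * refund_rate, 1)        # ~7.3
unrecoverable = round(improper_refunds * unrecoverable_rate, 1)  # ~6.1

print(invalid_claims, improper_refunds, unrecoverable)
```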
As shown in this table, in tax years 1996 through 1998, IRS did not investigate over 30 million AUR cases with about $30 billion in estimated underreported taxes which may have also resulted in the issuance of improper refunds. Because IRS did not investigate these cases, the exact amount of underreported taxes due and any resulting improper refunds disbursed are unknown. IRS’ decision to forgo follow-up examinations and collection efforts on potentially underreported tax liabilities, improper refunds, and invalid EITC claims was based on perceived resource constraints. However, as discussed later in this report, IRS’ financial management systems do not currently provide reliable information for cost-benefit analyses. Consequently, IRS management cannot determine whether the cost associated with the level of resources it expends on various refund control projects is commensurate with the benefits that could be realized from such efforts. Additionally, IRS cannot determine whether it is effectively directing its resources to the areas with the most potential benefit. As a result, billions of dollars of improper refunds could be disbursed as a result of invalid EITC claims and underreported tax liabilities and could remain uncollected. This in turn could erode taxpayer confidence in the equity of the tax system and reduce compliance with the tax laws. IRS’ controls over cash, checks, and related hard-copy taxpayer data it receives from taxpayers continue to be inadequate. While IRS has made some improvements, further action and policy changes are needed to further mitigate risks. Without adequate controls, IRS cannot ensure proper safeguarding of assets and taxpayer data. Table 4 summarizes the issues we identified in this area, their effects, and IRS’ actions to address these issues. Most of these issues were also identified in prior years’ audits, for which recommendations have already been made. 
Appendix II lists these previous recommendations and IRS’ actions to address them. As part of its procedures to determine suitability of an applicant for employment, IRS requires permanent and temporary applicants to undergo a fingerprint prescreening check. During a fingerprint check, an applicant’s fingerprints are processed through the FBI’s national database to identify those with arrest records. However, further review of the disposition of the case is necessary to determine if the applicant was convicted of the crime. We previously reported on several weaknesses related to this fingerprinting process. Although IRS significantly improved the turnaround time for obtaining the fingerprint results, other weaknesses persisted. IRS issued new policies to address these weaknesses. However, we found that the new policies were not consistently applied throughout IRS during fiscal year 2000. Additionally, Treasury Inspector General for Tax Administration (TIGTA) auditors found that IRS’ current fingerprinting process was ineffective in screening out juvenile applicants with questionable backgrounds. In response to a recommendation we had made previously, IRS issued, on April 3, 2000, a policy that prohibited the hiring and placement of an applicant at any IRS location until the applicant’s fingerprint checks had been received and case disposition evaluated. This policy applied to permanent and temporary employees. However, we found that IRS offices did not consistently comply with this new hiring policy. Out of the approximately 19,600 employees hired during fiscal year 2000, about 4,900 (25 percent) were hired and began working prior to IRS’ receipt and evaluation of their fingerprint checks. As IRS did most of its hiring from October through April in preparation for the peak tax-filing period, the new policy was not in place in time to affect many of these new hires.
Nonetheless, there were about 2,700 persons hired after the April 2000 policy was issued, of which 145 (5 percent) were hired and began working with taxpayer receipts and sensitive taxpayer data without IRS first receiving the results of their fingerprint checks. The following table shows, on a monthly basis, the number of persons who were hired and reported for duty without IRS having first received the results of their fingerprint checks out of the total number hired after the issuance of the April 3, 2000, hiring policy. Although the table shows a downward trend in the number of violations after the April 2000 policy, we cannot determine to what extent this is due to compliance with the new policy or due to IRS’ not hiring as many staff in May through September. To compound this problem, IRS staff also did not comply with its April and June 1999 policies which require the fingerprinting of all filing season applicants at the earliest possible time in the job application process. According to IRS’ personnel database, about 2,200 employees out of approximately 19,600 (11 percent) hired during fiscal year 2000 were not fingerprinted until they first reported for duty or several days—and in some instances months—later. The delays in initiating the fingerprinting process delayed IRS management’s receipt of the fingerprint results. This, combined with the pressing need for more resources to meet the increased workload during the tax-filing period, was a contributing cause for new employees entering on duty before the results of fingerprints were received. Consequently, as a result of noncompliance with IRS’ hiring policies, IRS managers could have unknowingly allowed employees with unsuitable backgrounds to handle cash, checks, and sensitive taxpayer information, thus increasing their risk of theft and misuse. 
In fact, from April through September, of the 145 persons who entered on duty before IRS received their fingerprint checks, 22 (15 percent) were subsequently found to have had potentially unsuitable backgrounds, such as drug use and assault. Additionally, a TIGTA audit completed in May 2000 found that IRS’ fingerprint prescreening procedures were ineffective for juvenile applicants, i.e., those under 18 years of age, due to legal restrictions on the release of juvenile records. IRS campuses often hire high school students to fill short-term positions to process income taxes. IRS’ policy to complete fingerprint prescreening checks applies to all new hires, even short-term temporary employees. As of April 2000, TIGTA found that at the two campuses it reviewed, 192 juveniles were hired to work in the receipts processing areas and all of them had fingerprint checks completed. However, 18 U.S.C. 5038 states that information about a juvenile’s record may not be released when the request for information is related to an application for employment. It further states that responses to such inquiries shall not be different from responses made about persons who have never been arrested. Therefore, the case disposition from any juvenile arrest could not be released or otherwise known. According to TIGTA, because juveniles’ records are sealed, it was not certain whether local authorities, which provide information for the FBI’s national database, forward juvenile arrest records to the FBI. Even if the fingerprint check identified a juvenile arrest record, current laws prevent investigators from determining whether the juvenile was convicted or acquitted. TIGTA recommended that IRS develop a process to more effectively screen out juvenile applicants with questionable backgrounds for receipt processing positions. IRS agreed to look into this matter.
IRS’ lack of a process to screen out juvenile applicants with questionable backgrounds could result in IRS’ unknowingly hiring persons with unsuitable backgrounds to process receipts and sensitive taxpayer data, thus increasing the risk of theft. In fact, TIGTA special agents have already investigated juvenile employees for theft of receipts. Given these risks, we agree with TIGTA’s recommendation for IRS to develop procedures to more effectively screen out juvenile employees with questionable backgrounds. We found that the scope of background checks required of lockbox bank employees was inconsistent with IRS’ hiring policy and was less than that required of IRS employees. The Treasury’s Financial Management Service (FMS) contracts with 10 commercial banks to process taxpayers’ payments and tax data for IRS. Lockbox banks are staffed with both permanent bank employees and temporary employees. As previously discussed, the new IRS hiring policy prohibits the hiring and placement of an IRS applicant, for a permanent or temporary position, at any IRS location until the applicant’s fingerprint checks have been received and evaluated. Despite the fact that lockbox employees also handle taxpayer receipts and data, IRS’ new hiring policy does not apply to them. At two lockbox banks we visited, we found that 63 permanent employees were hired and began working in fiscal year 2000 prior to the banks’ receipt of their fingerprint checks. We also found that fingerprint checks were not required at all for temporary lockbox employees. Neither the IRS guidelines for lockbox operations nor the FMS contracts with lockbox banks required fingerprint checks for temporary employees of the lockbox. The lockbox guidelines required only a police check for all temporary employees. However, a police check, which is a records check for arrests and legal proceedings, is limited to the jurisdictions that the employee states he or she resided in within the past 7 years.
In contrast, the FBI fingerprint checks required of IRS applicants do not depend on the individual to accurately state where he or she lives because the FBI obtains information for its national database from law enforcement agencies. We also found that the length of time it took for the lockbox banks to get the results of fingerprint checks varied widely. For example, officials at one of the lockbox banks we visited informed us they received the results of the fingerprint checks in 8 business days while the officials at a second lockbox bank stated they received the results in 3 to 6 months. As a result of the above weaknesses in lockbox hiring practices, taxpayers and the government were unnecessarily exposed to potential financial losses and fraud that could have occurred if lockbox employees with unsuitable backgrounds were unknowingly hired to process sensitive taxpayer information and receipts. We previously reported on various security weaknesses related to courier services. IRS uses couriers to transport deposits of taxpayer receipts to financial institutions. On March 14, 2000, IRS issued a revision to its minimum courier service requirements for IRS campuses to address the security weaknesses we previously reported. As a result of this new policy, we noted additional improvements over courier security that helped reduce the vulnerability of taxpayer receipts and taxpayer data recorded on checks from theft, loss, or misuse. For example, the revised courier standards limit courier access on campus premises and require campus personnel to deliver the deposits to a designated point of transfer. At two campuses we visited, we observed that the campus personnel complied with this policy. However, some weaknesses still remain. For example, the courier standards require two courier service employees to pick up and deliver deposits in order to increase security and help ensure that such deposits are never left unattended while in the courier service’s custody. 
At one of the campuses we visited, only one courier showed up to pick up the deposits. According to IRS campus officials, this was because IRS did not directly contract with the courier service. Instead, the contract was between the depository institution and the courier service. Therefore, IRS had less control over the security requirements included in the courier contract. Regardless of who contracts directly with the courier service, IRS has a fiduciary responsibility to the taxpayers and the government to safeguard taxpayer receipts with which it was entrusted. An IRS official stated that IRS plans to issue new guidance that will require all IRS campus courier service contracts to include IRS’ minimum courier security standards, regardless of who contracts for the courier services. Recognizing its responsibility to protect taxpayer information and receipts, IRS has clearly made a concerted effort to address courier security weaknesses by adopting a more stringent requirement on courier security standards. However, unless IRS consistently implements this policy, taxpayers and the government will still be unnecessarily exposed to financial losses. We also found that lockbox banks were not required to have the same level of courier security as IRS campuses. The lockbox courier service requirements are listed in the Lockbox Processing Guidelines. Based on our comparison of the January 2000 lockbox processing guidelines to IRS’ courier requirements in effect during our review, significant requirements from IRS’ courier guidelines were absent from the lockbox processing guidelines. For example, the lockbox guidelines did not require use of two insured couriers nor did they require all courier service employees to pass a limited background investigation. During our site visits at two lockbox banks, we noted that a single courier was used at both locations. 
IRS officials stated that the fiscal year 2002 lockbox contracts would contain courier standards for lockbox banks consistent with requirements at IRS campuses. However, until these standards are required and implemented, taxpayer receipts and data are unnecessarily exposed to theft and fraud, such as identity theft schemes, while in the custody of the lockbox courier services. Despite some improvements, we continued to find other internal control weaknesses over the safeguarding and accounting of manual payments and taxpayer data. Appendix II lists the improvements IRS made in this area during fiscal year 2000. However, during our fiscal year 2000 visits to various IRS locations and lockbox banks, we found that other previously reported weaknesses, such as the issues outlined in table 4, persisted. For example, we continued to find weaknesses regarding access to receipt processing areas. IRS security guidelines designate the receipts processing area as a restricted area to be accessed only by authorized personnel. As such, this area should be physically secured from the rest of the processing units of the IRS campus. Nonauthorized persons entering the receipt processing area must sign in with a door monitor, wear a special badge, and be escorted. Cleaning personnel are only to be allowed access to this area during operating hours when they can be observed. However, at one campus, a GAO auditor was allowed access through the rear entrance of the receipt processing area by an employee who did not know the auditor, and the auditor had unescorted access once inside. At four field offices, we found similar access problems where entrances to walk-in payment processing areas were left open or were inadequate to prevent nonemployees from entering. 
In the same TIGTA review discussed earlier, TIGTA found that physical barriers for receipt processing areas at two other campuses were not adequate for various reasons, such as (1) receipt processing areas with walls or partitions that were inadequate to secure the areas and not supplemented by intrusion detecting devices, (2) doors that were left open after hours, and (3) door locks that did not meet minimum security standards. At the same campuses, they also found that cleaning personnel were allowed unescorted access to receipt processing areas during nonoperating hours. At one of these campuses, the security guards did not respond to motion sensor alarms set off by a TIGTA auditor before regular duty hours because, according to the guards, they assumed that the alarms were set off by the janitor who was generally in that area at that time. We have previously reported, and continued to find, receipts in receipt processing areas vulnerable to theft or loss because accountability for them was not always established as soon as they were received and because the receipts were stored in easily accessible containers. As such, physical access controls to these areas are particularly important to reduce the risk of theft of taxpayer receipts and data. The weaknesses cited above unnecessarily expose taxpayer receipts and accompanying data to theft by unauthorized persons. During fiscal year 2000, IRS made progress in improving the reliability of its property and equipment (P&E) inventory records. IRS began implementing a new process for managing and maintaining records for its automated data processing (ADP) P&E, assigned a senior-level official responsibility for management and control of ADP P&E, and conducted an officewide inventory of all P&E. IRS also continued to develop and implement interim procedures to compensate for fundamental deficiencies in its financial accounting system. 
Specifically, it developed manual procedures to extract the costs of P&E acquisitions from its accounting records. Although these efforts allowed IRS to report in its fiscal year 2000 financial statements a P&E balance that was fairly stated, these compensating procedures were labor intensive and required extensive contractor support to arrive at a reliable P&E balance months after fiscal year-end. Additionally, these procedures did not address long-standing, fundamental weaknesses in IRS’ property and financial systems. As a result, we continued to find problems with (1) the accuracy and reliability of IRS’ P&E inventory records and (2) IRS’ ability to record P&E transactions in its financial system as transactions occur. Until these problems are addressed, IRS will continue to rely on costly and labor-intensive compensating procedures to arrive at a P&E balance that is only reliable for its year-end financial statements. More importantly, the procedures IRS employed during fiscal year 2000 did not provide management with reliable, useful, and timely P&E information throughout the year for day-to-day decision-making, thus hindering IRS’ ability to properly manage $1.3 billion in assets. Table 6 summarizes the issues relating to P&E, along with their effects and IRS’ actions to address these issues. For many years, IRS’ P&E records were not adequate for maintaining accountability over its property. IRS has acknowledged the deficiencies in its property management controls since 1983. In the long term, IRS plans to acquire and implement a new P&E inventory management system to address the deficiencies in its current P&E inventory systems. In the interim, IRS has taken steps to improve the reliability of its P&E inventory records during fiscal year 2000. However, these interim measures have not been fully implemented, and we continued to find errors in IRS’ P&E inventory records during our fiscal year 2000 financial audit.
IRS maintains two P&E inventory systems, one to manage ADP P&E and another to track non-ADP P&E. These systems provide data, such as a description of the item, its location, and current status (e.g., disposed versus in service), that assist property managers and officials in managing property. In an effort to address its long-standing inability to maintain complete and accurate records in the ADP inventory system, IRS issued interim Single Point Inventory Function (SPIF) operating guidelines and procedures in June 2000. SPIF centralized responsibility for managing ADP property and maintaining ADP inventory records into a single dedicated unit at each IRS location, thus establishing clear accountability for the receipt, management, and disposal of ADP assets. Although this was a significant step, we found during our visits to IRS campuses and field offices in September 2000 that SPIF teams had not been fully staffed and SPIF procedures had not been fully implemented at all IRS facilities. Thus, as in prior years, we found that IRS’ procedures for recording P&E acquisitions, disposals, and transfers still did not ensure that transactions were promptly recorded. Specifically, we found that 35 of 220 P&E items we selected from IRS records at 22 sites could not be located at the time of our review. These items were eventually accounted for: IRS later reported that 23 of the items had been disposed of months earlier (including one disposed of in 1998) but had failed to update the records, 8 items were subsequently located, and 4 items were erroneous records of software. Nonetheless, based on our work, we estimate that 16 percent of the items in IRS’ P&E inventory records at September 30, 2000, were erroneously included as IRS assets. The GAO Standards for Internal Control in the Federal Government requires that qualified and continuous supervision be provided to ensure that internal control objectives are achieved.
It is particularly important for IRS to have strong management oversight to help compensate for the limitations of its current P&E systems. IRS partially addressed the issue of management oversight in November 1999 by providing its Chief Information Officer (CIO) the authority over and overall responsibility for ownership, management, and control of all ADP property. In addition, SPIF procedures assigned ADP property managers responsibility for reviewing the accuracy and completeness of P&E information. Specifically, the IRS policy states that ADP property managers are responsible for maintaining a management and quality review program to ensure the timeliness, completeness, and accuracy of the inventory records and to conduct annual property management evaluations at selected sites. These types of managerial review serve as a key internal control in ensuring the accuracy, completeness, and timely recording of inventory data that will subsequently be used to prepare reports for management decision-making. However, based on the errors we found during our testing of the P&E inventory records, these managerial reviews did not appear to be effective. Consequently, the information in IRS’ P&E inventory tracking systems was unreliable and fell short of meeting management reporting needs. As in prior years, IRS was unable to record P&E assets and corresponding liabilities in its accounting system as the transactions occurred due to inadequate accounting procedures and systems design flaws. Consequently, IRS hired a contractor who implemented extensive and time-consuming manual procedures to derive a reliable P&E balance for IRS’ financial statements. IRS did not have policies and procedures in accordance with federal accounting standards to identify and record in its general ledger accounts its P&E assets and corresponding liabilities as the transactions occurred.
For example, federal accounting standards require agencies to record a capital lease asset and its corresponding liability at the inception of the lease agreement. However, neither IRS’ inventory system nor its accounting system was designed to capture key information on capital leases to enable it to report the asset or the corresponding capital lease liability as the transactions occurred. IRS expensed all property purchases during the year, including major acquisitions such as capital leases, leasehold improvements, and major systems. A contractor then analyzed and extracted from IRS’ automated expense records purchases of P&E, leasehold improvements, major systems, and capital leases based on codes within IRS’ accounting system to derive the fiscal year-end amounts that should have been capitalized as P&E. IRS then recorded adjusting entries to transfer these P&E acquisitions to the appropriate general ledger account. This process was time-consuming and did not always result in accurate information, as we found during our review of fiscal year 2000 nonpayroll expenses and P&E transactions. For example: Of 156 statistically sampled nonpayroll expenses we reviewed, 3 transactions totaling $1.7 million that should have been recorded as P&E had not been properly extracted by the contractors from the population of fiscal year 2000 expenses and transferred to the P&E general ledger account. Based on our work, we estimate that the most likely understatement of the P&E balance as a result of P&E transactions being incorrectly recorded as expenses was $50 million, with an upper error limit of $127 million. Of 60 statistically sampled P&E transactions we reviewed, 8 transactions totaling $879,000 were inappropriately identified by the contractors as fiscal year 2000 P&E acquisitions. Two of the 8 transactions were fiscal year 1999 transactions, and the remaining 6 items were non-P&E items that should have remained as expenses.
Based on our work, we estimate that the most likely overstatement of the P&E balance as a result of transactions incorrectly recorded as fiscal year 2000 P&E was $61 million, with an upper error limit of $106 million. Additionally, IRS uses financial accounting codes that classify expenses by type to extract P&E, leasehold improvements, major systems, and capital leases from its automated records of expenses. These Sub-Object Class (SOC) codes appear on all basic accounting documents and provide detailed cost data on the types of expenses that are significant to IRS’ operations. However, IRS recorded both capitalizable and noncapitalizable P&E transactions under the same SOC codes. This complicated the process of extracting capitalizable P&E transactions based on SOC codes because additional analysis was required to determine whether the transactions represented an expense or a capitalizable P&E purchase. For example, in fiscal year 2000, the contractor determined that more than $43 million in software license fees, which should have been expensed, were charged to an SOC code defined as capitalized software. IRS’ costly, time-consuming process for determining a year-end P&E balance was necessary because IRS’ procurement system, inventory tracking systems, and the general ledger are not integrated. In an integrated financial management system, the general ledger is supported by subsidiary ledgers, which contain detailed records of transactions and automatically update the appropriate general ledger balances as transactions occur. Therefore, on an ongoing basis, detailed records in the subsidiary ledgers should support the P&E balances in the general ledger. However, during fiscal year 2000 IRS did not have subsidiary ledgers for its P&E. Instead, the two inventory tracking systems served as subsidiary records for P&E.
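In the integrated design just described, verifying that a general ledger balance is supported by detail records reduces to summing the subsidiary records and comparing the total to the general ledger control account. A minimal sketch with invented asset IDs and amounts, purely for illustration:

```python
# Minimal sketch: in an integrated system, detailed subsidiary records
# should sum to the general ledger control balance for P&E.
# Asset IDs and dollar amounts here are illustrative, not IRS data.
subsidiary_records = {
    "laptop-001": 2_400,
    "server-114": 55_000,
    "copier-309": 8_600,
}
general_ledger_pe_balance = 66_000

difference = general_ledger_pe_balance - sum(subsidiary_records.values())
print(difference)  # 0 means the balance is fully supported by detail records
```

A nonzero difference flags exactly the reconciliation gap the report describes: a general ledger balance that cannot be traced to assets recorded in the inventory tracking systems.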
However, property acquisitions and dispositions recorded in the inventory tracking systems did not automatically update appropriate P&E balances in the general ledger system because the two systems were not integrated. Additionally, unlike true subsidiary ledgers, the inventory tracking systems did not record the cost of assets that tie to the general ledger balances at a summary level. Consequently, P&E balances recorded in general ledger P&E accounts could not be easily reconciled to IRS’ subsidiary records to verify that such balances were supported by actual assets recorded in the inventory tracking systems. IRS plans to install an integrated financial system by late 2004 to address the design flaws of its current systems. In the meantime, due to the systems and control weaknesses discussed above, IRS management continues to rely on a contractor and a labor-intensive procedure to derive a reliable P&E balance for its financial statements. Because this procedure only provides a reliable balance for the fiscal year-end date, IRS does not have reliable P&E data on an ongoing basis to make operational decisions related to the purchase, disposition, and use of its P&E. Moreover, errors in IRS’ inventory tracking systems continue to compromise IRS management’s ability to safeguard $1.3 billion of government assets. To address weaknesses in the timely recording of P&E transactions while an integrated P&E financial system is being developed, we recommend that IRS implement policies and procedures to record capitalizable acquisition costs for property and equipment, capital leases, leasehold improvements, and major systems in the appropriate P&E general ledger accounts as transactions occur. 
To ensure that SOC codes facilitate compilation of capitalizable P&E transactions in the proper general ledger asset accounts and, if applicable, lease liability accounts, we recommend that IRS revise the definitions of SOC codes pertaining to P&E or establish new codes so that individual SOC codes cannot be used for both capitalizable purchases (assets) and noncapitalizable purchases (expenses). For example, the SOC code used to record capitalizable software costs should not be used to record noncapitalizable software license fees. In fiscal year 2000, IRS made substantial progress in addressing previously identified budgetary control weaknesses. IRS (1) reduced the number of employees with authority to override automated spending controls; (2) decreased the number, dollar amount, and duration of items held in suspense; and (3) implemented procedures to deobligate funds no longer required for a specific purpose. Despite this progress, IRS’ internal controls were inadequate for providing reasonable assurance that the $8.3 billion in fiscal year 2000 budgetary authority was routinely accounted for, reported, and controlled. Specifically, we found that IRS (1) incurred costs prior to establishing an obligation, (2) inappropriately recorded unrelated activities as adjustments to obligations, and (3) failed to reduce undelivered orders when goods and services were received. As a result, IRS was unable to ensure the reliability of key budgetary information it needs on an ongoing basis to effectively manage its operations and ensure that its resources do not exceed budgetary authority. While these conditions in isolation may not rise to the level of material weakness, collectively they are indications of serious deficiencies in internal controls over appropriated funds. Table 7 summarizes the issues we identified related to obligations and undelivered orders, along with their effects and IRS’ actions to address these issues. 
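The extraction problem created by dual-purpose SOC codes can be sketched as follows. The code value, descriptions, and amounts are hypothetical (only the $43 million license-fee figure echoes the report's example); the sketch shows why a shared code forces transaction-level analysis that distinct codes would make unnecessary.

```python
# Hypothetical transactions booked under one SOC code that is used for
# both capitalizable software and noncapitalizable license fees.
transactions = [
    {"soc": "31XX", "desc": "custom software build",
     "amount": 2_000_000, "capitalizable": True},
    {"soc": "31XX", "desc": "annual software license fees",
     "amount": 43_000_000, "capitalizable": False},
]

# The SOC code alone cannot separate assets from expenses, so each
# transaction must be examined individually -- the extra analysis that
# splitting the codes would eliminate.
capital = sum(t["amount"] for t in transactions if t["capitalizable"])
expense = sum(t["amount"] for t in transactions if not t["capitalizable"])
print(f"Capitalize: ${capital:,}; expense: ${expense:,}")
```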
In the federal budgeting process, agencies’ operations are funded by appropriations. Appropriations typically provide agencies with budgetary authority, i.e., the legal right to obligate—and ultimately to spend—funds for specific purposes, within a specific period of time, up to a specific amount. An obligation is a definite commitment by an agency of the government, which creates a legal liability to another party. To prevent obligations in excess of available funding, OMB Circular A-34 gives instructions to federal agencies as to when an obligation of funds should be recorded in the agency’s financial system. For example, an obligation for reimbursable travel expenses incident to employee relocation should be recorded when a travel order is approved; an obligation for a contract should be recorded in the month that the contract is let; and an obligation for an order for goods or services is to be recorded at the time the order is placed. In addition, GAO’s Standards for Internal Control in the Federal Government requires that transactions be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. However, during our fiscal year 2000 audit, we found that IRS did not always record obligations in its accounting system prior to incurring costs. For example: IRS received software maintenance services for the period May 1, 2000, through April 30, 2001, totaling $415,000. However, IRS did not generate a purchase order to record the obligation of funds until July 28, 2000—almost 3 months after the services were received. An IRS site accepted delivery of services for which funds were not available at that site. In this instance, a contracting officer at an IRS site ordered services totaling more than $15,000 for transporting and installing systems furniture in June 1999. However, the obligation was not recorded before the cost was incurred. 
When the voucher was submitted in November 1999, IRS discovered that the amount exceeded what was available to the site at the time the order was placed. Although IRS was able to make up for the deficiency by transferring fiscal year 1999 funds from another site, had funds not been available at that time, IRS would have run the risk of spending more than it was authorized to spend. As a result of not recording obligations in a timely manner, IRS cannot routinely rely on its financial records to provide reliable information on the status of its budgetary resources for day-to-day decision-making. Until the obligation of funds is recorded, the balance in obligations incurred is understated. This could lead IRS management to believe that the agency has more funding than is actually available. Consequently, IRS management and personnel might enter into additional obligations in excess of the budgetary authority made available by Congress. During fiscal year 2000, IRS recorded certain activities as adjustments to prior years’ obligations that were not valid adjustments to those obligations. In fiscal year 2000, $167 million of the $277 million (over 60 percent) recorded in IRS’ accounting system as adjustments to prior years’ obligations did not represent valid upward or downward adjustments. IRS subsequently adjusted its records to correct these erroneous transactions. However, these errors adversely affected IRS’ ability to routinely report accurate and reliable information on total budgetary resources and obligations. GAO’s Standards for Internal Control in the Federal Government requires that transactions and other significant events be properly classified to maintain their relevance and value to management in controlling operations and making decisions. Furthermore, transactions and events are to be completely and accurately recorded and classified in the summary records from which reports and financial statements are prepared. 
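The timing control at issue in the examples above, that an obligation must be recorded before the related cost is incurred, can be checked mechanically once both dates are captured. A sketch with hypothetical records (the first mirrors the software maintenance example):

```python
from datetime import date

# Hypothetical records pairing the date a cost was incurred (goods or
# services received) with the date the obligation was recorded.
records = [
    {"item": "software maintenance", "cost_incurred": date(2000, 5, 1),
     "obligation_recorded": date(2000, 7, 28)},
    {"item": "office supplies", "cost_incurred": date(2000, 6, 15),
     "obligation_recorded": date(2000, 6, 10)},
]

# Flag any record where the cost was incurred before the obligation was
# recorded -- the condition the audit found.
exceptions = [r["item"] for r in records
              if r["cost_incurred"] < r["obligation_recorded"]]
print(exceptions)  # the software maintenance record is flagged
```

A periodic review built on a check like this is the kind of monitoring the recommendation on prompt establishment of obligations contemplates.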
Adjustments to prior years’ obligations are recorded when the obligation amount that was previously recorded is affected by a subsequent event, such as a change in the price or quantity of goods or services. For example, if an undelivered order for a good was established for $1,000, but the good was delivered in a later year at $1,250, then an upward adjustment of $250 to obligations would be recorded. An upward adjustment would increase obligations incurred and reduce the unobligated balance. Similarly, if an undelivered order was established for $1,000 but the good was delivered in a later year for $750, then a downward adjustment to prior years’ obligations of $250 would be recorded. However, we found that IRS overstated both the upward and downward adjustment accounts during fiscal year 2000. Many activities that were recorded as adjustments to the prior years’ obligations were not actual upward or downward adjustments but were related to changes in accounting codes, travel, and adjustments for doubtful accounts. Of the $277 million in adjustments IRS recorded in its accounting system in fiscal year 2000, $82 million in upward adjustments and $85 million in downward adjustments were not valid adjustments to the prior years’ obligated balance. These errors were attributed to IRS’ accounting system, which, according to IRS personnel, recorded all adjustments that affect a prior year’s appropriation, including those that did not affect the obligated amount, as upward or downward adjustments to prior years’ obligations. Through adjusting entries totaling $167 million, IRS was able to correct these errors in time to prevent the financial statements from being misstated. 
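The adjustment arithmetic in the report's $1,000 examples can be written out directly:

```python
def prior_year_adjustment(obligated, delivered):
    """Classify the change to a prior year's obligation when a good is
    delivered at a price different from the amount originally recorded."""
    if delivered > obligated:
        # Increases obligations incurred and reduces the unobligated balance
        return ("upward", delivered - obligated)
    if delivered < obligated:
        # A recovery of part of the prior year's obligation
        return ("downward", obligated - delivered)
    return ("none", 0)

# The report's examples: a $1,000 undelivered order delivered at $1,250
# yields a $250 upward adjustment; delivered at $750, a $250 downward one.
print(prior_year_adjustment(1_000, 1_250))
print(prior_year_adjustment(1_000, 750))
```

Only price or quantity events like these qualify; IRS' system instead recorded any activity touching a prior year's appropriation as such an adjustment, which is what produced the misstatements described above.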
However, upward adjustments to prior years’ obligations are also reported as “obligations incurred” on the SF133 Report on Budget Execution and Budgetary Resources that federal agencies submit to OMB quarterly, while downward adjustments to prior years’ obligations are reported on the SF133 as “recoveries of prior year obligations.” Because the upward and downward adjustment accounts were misstated during the year, data IRS reported to OMB on its budgetary activities may not be reliable. Specifically, the September 2000 SF133 IRS submitted to OMB misstated both the obligations incurred and recoveries of prior years’ obligations line items. IRS records an undelivered order when it orders a good or service for use in its operations. It then reduces the undelivered order balance and records an expense when the good or service is received. However, we found instances in which IRS did not reduce the balance in undelivered orders when the goods and services were received. As a result, the balances of undelivered orders and accrued expenses were misstated. We tested statistical samples of 83 and 78 transactions from fiscal year 2000 beginning and ending balances of undelivered orders, respectively. For both samples, we found instances in which IRS received goods and services during one fiscal year but did not reduce the undelivered orders balance reflected in its accounting system until the following fiscal year. This was caused, in part, by IRS personnel incorrectly recording in the accounting system the dates that the goods and services were received. This resulted in IRS’ overstating the beginning and the ending fiscal year 2000 undelivered order balances and understating accrued expenses. For example: In fiscal year 2000, IRS recorded an obligation and a corresponding undelivered order for computer equipment totaling $7.9 million. As of September 30, 2000, IRS had received equipment totaling $3.4 million. 
However, its records as of September 30, 2000, showed that the entire undelivered order amount was still outstanding, i.e., $3.4 million had not yet been removed from the undelivered order balance. Telephone support services for the month of September 1999 were entered into the receipt and acceptance system as being received on October 5, 1999. Consequently, the beginning fiscal year 2000 balance in undelivered orders was overstated. IRS failed to remove more than $4.1 million from the ending undelivered order balance for lockbox services received from July through September 2000. Consequently, the fiscal year 2000 ending undelivered order balance was overstated. The errors in the beginning undelivered orders balance totaled $2.9 million, while errors in the ending undelivered orders balance totaled $12.4 million. Based on our work, we estimate (1) the most likely overstatement of the fiscal year 2000 beginning undelivered orders balance as a result of these errors was $65 million, with an upper error limit of $111 million and (2) the most likely overstatement of the ending undelivered orders balance and corresponding understatement of accrued expenses was $47 million, with an upper error limit of $87 million. Because of the deficiencies in controls over the accurate recording of undelivered orders, IRS’ balances in undelivered orders and accrued expenses were misstated during fiscal year 2000. These deficiencies continued to affect IRS’ ability to report reliable, timely, and routine information critical for making sound day-to-day decisions and effectively managing its operations. To ensure effective management of available funding and accurate reporting of obligations, we recommend that IRS perform periodic reviews to monitor and ensure that obligations are promptly established in the accounting system. 
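The undelivered orders mechanics, and the computer equipment error described above, can be modeled in a few lines (a simplified illustration, not IRS' actual system):

```python
class UndeliveredOrder:
    """Simplified model: an obligation starts as an undelivered order;
    each receipt moves an amount from undelivered orders to accrued
    expenses."""

    def __init__(self, obligated):
        self.undelivered = obligated
        self.accrued_expense = 0.0

    def receive(self, amount):
        self.undelivered -= amount
        self.accrued_expense += amount

# The report's example: $7.9 million obligated for computer equipment,
# $3.4 million of equipment received by September 30, 2000.
order = UndeliveredOrder(7.9e6)
order.receive(3.4e6)

# Correct year-end balances: $4.5 million undelivered, $3.4 million
# accrued. IRS' records instead still carried the full $7.9 million as
# undelivered, overstating undelivered orders and understating accrued
# expenses by $3.4 million.
print(order.undelivered, order.accrued_expense)
```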
Such reviews would assist IRS in maintaining accurate and complete records of its obligations, and in reducing the risk of obligations exceeding available funding. To ensure that reported budget data are reliable on a routine basis, we recommend that IRS incorporate into its systems modernization blueprint the capability to differentiate prior-year adjustments between activities that are valid upward and downward adjustments to obligations and activities that are not valid adjustments to obligations. Such actions would help ensure that activities that are not valid adjustments to obligations are not recorded as adjustments to obligations. In fiscal year 2000, IRS revised the format of its statement of net cost and significantly expanded and enhanced the related disclosures in its financial statements to address an issue we had raised in our prior audit regarding the commingling of certain program costs in its financial statements. The resulting presentation appropriately classified the cost of IRS’ programs. However, in fiscal year 2000, as in prior years, IRS was unable to generate reliable financial information on a day-to-day basis to support decision-making. IRS lacked a financial management system that complies with the requirements of the Federal Financial Management Improvement Act of 1996 (FFMIA). In addition, IRS did not record transactions in a timely manner and perform routine reconciliations necessary to ensure the reliability of general ledger data. Finally, IRS lacked an effective system that can report on the full costs of its activities and on cost-based performance measures consistent with the Government Performance and Results Act (GPRA) of 1993. These weaknesses affected IRS’ ability to (1) routinely prepare reliable periodic financial reports, (2) generate routine and reliable cost-based information, (3) accurately determine the amount of revenue collected for specific tax types, and (4) certify excise taxes distributed to trust funds. 
Collectively, these issues are indications of serious internal control deficiencies and constitute a material weakness in controls over financial reporting. As noted earlier, in fiscal year 2000, due to monumental efforts and extensive workaround processes, IRS was able to produce for the first time combined financial statements that were fairly stated in all material respects. However, the information reported in the financial statements was reliable only for a single point in time. Financial data not subjected to these compensating procedures may not be reliable and cannot be used to effectively manage IRS’ day-to-day operations. Ultimately, Congress, IRS management, and the public do not routinely have timely and accurate information to evaluate IRS’ performance and make informed management and policy decisions. Table 8 summarizes the issues we identified in this area, together with their effects and IRS’ actions to address these issues. As the table above indicates, IRS does not have an adequate financial management system. As a result, IRS is hindered in its ability to produce reliable financial statements and to generate timely and accurate information needed to make management and operational decisions. An adequate financial management system is one that can provide complete, reliable, consistent, timely, and useful financial management information. Such a system comprises, among other elements, an integrated general ledger system using common data elements and transaction processing that is supported by transactional details and a system of internal controls to ensure that reliable data are obtained, maintained, and disclosed in reports. Such a system is also capable of capturing and reporting reliable performance information. However, IRS’ financial management system is made up of two independent general ledgers—custodial and administrative—that are not integrated with each other or with their supporting records for material balances. 
Specifically, IRS’ custodial general ledger does not have adequate audit trails for federal taxes receivable, federal tax revenue, or federal tax refunds, while its administrative general ledger lacks audit trails for P&E and program costs. For example, as discussed earlier in this report, the lack of clear traceability between the general ledger and underlying financial transactions required IRS to use extensive ad hoc procedures and statistical methods to derive reliable balances for taxes receivable and other unpaid assessments. In addition, as discussed further below, because of weaknesses in internal controls, IRS could not demonstrate that reported performance indicators were reliable. Consequently, neither of IRS’ two general ledgers complies with the requirements of the U.S. Government Standard General Ledger (SGL) at the transaction level or with the requirements of FFMIA, and neither can be used to support the preparation of financial statements without material financial reporting adjustments. One important requirement of an effective financial management system is that it can be relied upon to support the timely production of auditable financial statements. At IRS, this is not the case. Although IRS was able to produce financial statements that were fairly stated in all material respects for fiscal year 2000, these statements required monumental human efforts that extended well after the September 30, 2000, fiscal year-end. In addition, information produced by IRS’ financial management system required billions of dollars in adjustments to derive reliable financial statement balances. As mentioned earlier, substantial adjustments totaling billions of dollars had to be made to reliably report the balance for taxes receivable. These adjustments, as well as the balance in net taxes receivable, were not available until well after the fiscal year had ended. 
Similarly, fiscal year 2000 administrative activities totaling over $3.7 billion were either recorded in the wrong general ledger accounts or were not yet recorded in IRS’ general ledger as of September 30, 2000. For example, as of fiscal year-end, accrued payroll and depreciation expenses totaling $480 million had yet to be recorded in IRS’ general ledger, while P&E acquisitions that should have been capitalized were recorded as expenses. These activities had to be analyzed and recorded or reclassified, a time-consuming process that took several months to complete. Though IRS achieved an important milestone in receiving an unqualified opinion on its fiscal year 2000 financial statements, the approach used to achieve this goal did not address the underlying purpose of sound financial management as envisioned by the CFO Act—to produce reliable, useful, and timely financial and performance information on a routine basis for day-to-day decision-making. Furthermore, until lasting improvements are achieved, IRS will have to continue to rely on extensive efforts to produce reliable financial statements. During fiscal year 2000, IRS did not timely record transactions and perform the necessary reconciliations to ensure that the data contained in its general ledger systems were up-to-date and accurate. Consequently, IRS did not have reliable, timely, and routine financial information to effectively manage its operations. GAO’s Standards for Internal Control in the Federal Government requires that transactions and events be recorded accurately and timely and that ongoing monitoring occur in the course of normal operations to provide reasonable assurance that financial reporting is reliable. These internal control processes and procedures are crucial to ensuring that an agency’s financial management systems produce information that is reliable, timely, and useful. 
Without these processes and procedures, a modern and integrated financial management system by itself does not guarantee that an agency will be able to prepare financial statements that are fairly stated and generate financial data that can be relied upon for day-to-day decision-making. During fiscal year 2000, IRS’ internal controls over financial reporting were not consistent with these standards. Specifically, IRS did not record material transactions in the general ledger until months after they occurred. For example: Depreciation expenses totaling more than $350 million were not recorded throughout the year, but only at year-end. As a result, the balance in depreciation was inaccurate at interim periods during the year. Imputed financing costs totaling nearly $400 million were not recorded in the general ledger throughout the year but rather as a lump sum amount several months after the fiscal year-end. While IRS made the necessary adjustments to produce reliable year-end financial statements, the balance for imputed financing costs was incorrect throughout fiscal year 2000. In addition, IRS lacked adequate policies and procedures for ensuring that financial data would be adequately reviewed on an ongoing basis. Specifically, IRS management informed us that it did not have policies and procedures requiring systematic reviews and analyses of account balances at interim periods. Consequently, errors and omissions were allowed to arise without prompt detection and correction, and adjusting entries that should have been made throughout the year were allowed to build up until they became material and time-consuming to correct. IRS also did not have policies and procedures requiring reconciliation between its proprietary and budgetary accounts during fiscal year 2000. 
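The missing proprietary-to-budgetary reconciliation is conceptually a simple comparison, even though in practice it spans many accounts. A sketch with hypothetical figures:

```python
# Net cost of operations derived independently from the proprietary
# accounts and from the budgetary accounts. Figures are hypothetical.
net_cost_proprietary = 8_470_000_000
net_cost_budgetary = 8_425_000_000

difference = net_cost_proprietary - net_cost_budgetary

# Performed monthly, this comparison surfaces differences promptly;
# left until after year-end, they accumulate into large adjustments.
print(f"Difference to research and adjust: ${difference:,}")
```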
IRS had to make adjustments totaling more than $160 million several months after the fiscal year-end to bring the net cost of operations derived from the budgetary accounts and the net cost of operations derived from the proprietary accounts into agreement. The failure to maintain accurate and up-to-date financial data impeded IRS management in its ability to use the general ledger as a reliable source of financial data at interim periods to make managerial and operational decisions. IRS did not track the cost accounting information needed to prepare cost-based performance information consistent with GPRA. Deficiencies in IRS’ systems and internal controls discussed above mean that IRS cannot routinely generate reliable financial and performance data for cost-benefit analyses. This could adversely affect IRS management’s and Congress’ ability to make informed management decisions related to resource allocation and other aspects of IRS’ operations throughout the year. The Joint Financial Management Improvement Program’s (JFMIP) System Requirements for Managerial Cost Accounting requires that, at a minimum, agencies have cost accounting information to support the aggregation of financial information related to programs and projects, each of which could have several levels, such as subprograms. In order for IRS to aggregate cost information by program and project to conform to this standard, it must first capture costs at the detail level as they are incurred. However, IRS did not have a systematic process in place to capture costs at the project level during fiscal year 2000. Though IRS had a Project Cost Accounting Subsystem (PCAS) coding structure that can capture personnel costs at the detailed project and subproject level, IRS did not require that all of its employees use PCAS to itemize the time spent on specific projects on their time cards. 
Consequently, during fiscal year 2000, IRS staff did not use PCAS codes for time charged to either of IRS’ two largest appropriations, which collectively accounted for 74 percent of IRS’ budgetary resources. Similarly, except for information technology projects, PCAS did not collect nonpersonnel costs such as equipment depreciation, rent, and utilities by projects and subprojects. At year-end, IRS extracted data from its accounting system, imported the data into a database, and used a spreadsheet to allocate these nonpersonnel costs to the different projects and subprojects in an effort to derive reliable net operating cost data for the Statement of Net Cost. However, these data were not available until months after the fiscal year-end, were only reliable for a single point in time, and thus were not available on an ongoing basis for management purposes. The failure to fully and accurately capture cost at the project level affected IRS’ ability to produce reliable cost data. Specifically, IRS was unable to report on the costs associated with each of the 15 key performance indicators it reported in the “Management Discussion and Analysis” that accompanied its fiscal year 2000 financial statements. As a result, IRS cannot be consistent with GPRA in reporting cost-based performance measures related to its various programs. In addition, IRS was unable to provide evidence that supervisory review was performed to ensure that the performance indicators, and data used to derive these indicators, were complete, accurate, and reliable. For example, IRS did not have documentation demonstrating that a responsible official had reviewed the data to ensure that all data that should be collected for a specific performance indicator was collected, and that only pertinent data was included. This increases the risk that any errors or omissions affecting IRS’ key performance indicators will not be detected and corrected in a timely manner. 
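The year-end spreadsheet allocation described above can be sketched as a proportional spread. The allocation basis (direct labor hours) and all figures are hypothetical; the report does not specify the driver IRS used.

```python
# Allocate pooled nonpersonnel costs (rent, utilities, depreciation)
# to projects in proportion to a cost driver. Numbers are hypothetical.
pooled_nonpersonnel_costs = 900_000
labor_hours = {"project_a": 4_000, "project_b": 3_000, "project_c": 2_000}

total_hours = sum(labor_hours.values())
allocated = {project: pooled_nonpersonnel_costs * hours / total_hours
             for project, hours in labor_hours.items()}

# Capturing these costs by project as they are incurred, rather than
# spreading a pool once a year, is what routine cost-based reporting
# under GPRA would require.
print(allocated)
```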
Finally, IRS faces an additional challenge in the fact that its custodial and administrative general ledgers are independent of each other and are not integrated. Since cost data are primarily contained in the administrative general ledger while critical performance data comes from the custodial general ledger, IRS needs to be able to link these two general ledgers before it can calculate reliable, cost-based performance measures. IRS plans to implement a major portion of an integrated financial and taxpayer account management system by fiscal year 2005. Consequently, this link between the custodial and administrative general ledgers will not occur before then. IRS continues to be unable to determine the specific amount of revenue it actually collects for Social Security, Hospital Insurance, individual income taxes, and excise tax trust funds. These conditions exist primarily because (1) at the time of payment, taxpayers are not required to provide information on the specific taxes that they are paying and (2) IRS’ systems are not capable of capturing such information. Although the tax returns, which the taxpayers file months after the deposits are made, do contain a breakdown on the type of tax, this information pertains only to the amount of the tax liability and not to the amount of taxes paid to IRS. This condition restricts IRS’ ability to report actual collections of significant taxes, such as Social Security, that would be of interest to many parties, including Congress. IRS is developing a system to capture detailed collection information by type of tax and plans to initiate a study, in 3 to 4 years, to gauge taxpayers’ readiness to provide such detailed information. Because data are not available for the allocation of excise taxes to the appropriate trust funds when deposits are made, IRS uses a certification process that is complex, cumbersome, and prone to error in order to distribute excise tax receipts to the respective trust funds. 
In response to our previous reports, IRS implemented procedures to improve controls over the certification process. However, we continued to find weaknesses in the excise tax certification process. For example, due to delays in recording tax return information in its systems, the amount IRS certified to the Highway Trust Fund for the quarter ended September 30, 1999, included nearly $346 million in collections from previous quarters. These recording delays in turn delayed the transfer of these amounts to the trust funds, reducing the interest income the trust funds earn on these receipts. This reduction in interest income could adversely affect distributions of trust fund receipts to the states because the amounts distributed would be based on inaccurate data. To reduce the magnitude of year-end adjustments and assist IRS in improving the reliability of its financial data on a routine basis, we recommend that IRS develop, document, and implement policies and procedures to require (1) monthly reconciliations between proprietary and budgetary accounts so that differences can be identified promptly and, if necessary, adjusted; (2) routine reviews and analyses of general ledger account balances to promptly identify errors and omissions; and (3) the recording of corrections and adjusting entries throughout the year to reduce the magnitude of year-end adjustments and improve the reliability of interim financial data. To improve IRS’ ability to collect and report on the full costs of its activities, we recommend that IRS implement policies and procedures to require that all employees itemize the time spent on specific projects on their time cards and to allocate nonpersonnel costs to programs and activities routinely throughout the year. To provide assurance on the reliability of performance data, we recommend that IRS document reviews performed to validate that performance data are complete, accurate, and reliable. 
Many of the issues presented throughout this report have existed for several years, and IRS has noted that the ultimate solution to many of these issues is modernization of its systems. As part of this modernization initiative, IRS plans to implement a new financial system that includes a cost accounting module as well as integrated administrative and custodial general ledgers that are supported by subsidiary ledgers containing the transactional details for key accounts such as taxes receivable and property and equipment. The modernized environment is expected to provide IRS with, among other things, the ability to (1) track and report on the status of each unpaid assessment category, amount, and taxpayer, (2) record P&E transactions in its general ledger accounts as they occur, and (3) prepare cost-benefit analyses and cost-based performance measures. However, these systems will take years to implement. IRS continues to make progress in addressing its financial management challenges. The strong commitment and dedication to financial management reform by IRS senior management has played a crucial role in the progress the agency has made to date and is critical for future improvements. IRS has developed many workaround processes that resulted in its ability to produce reliable financial statements for fiscal year 2000. However, these processes take considerable time, effort, and expense and do not fix many of the fundamental financial management issues that continue to plague the agency. Until these issues are addressed, IRS cannot achieve the overriding objective of the CFO Act and other reform legislation enacted during the last decade—to produce reliable, useful, and timely financial and performance information for day-to-day decision-making. 
In commenting on a draft of this report, IRS agreed that, in order to improve financial management, it must sustain a high level of effort to implement solutions that will address systems deficiencies and internal control weaknesses. IRS also provided information regarding past and current initiatives to address GAO’s audit recommendations. For example, during fiscal year 2000, IRS (1) routinely reconciled its fund balance, (2) reviewed and managed suspense items, (3) eliminated unneeded obligations, (4) installed and used live-scan fingerprint equipment, and (5) implemented procedures and processes to improve the reliability of P&E records. IRS also noted that it has undertaken additional initiatives to address remaining internal control deficiencies. For example, it noted that it implemented a new inventory system for its P&E and enforced standards to improve inventory practices, developed a standard checklist and conducted monthly security reviews in fiscal year 2001, and hired additional staff to support the master file extract process. We will follow up during our fiscal year 2001 audit to assess the effectiveness of these initiatives. While agreeing with the overall thrust of our report, IRS disagreed with some of the specific report findings and recommendations. Specifically, in the area of P&E, IRS disagreed that in the short term, property acquisitions should be recorded as capital assets as the transactions occur. IRS noted that, until an integrated property management system is acquired, it should continue the current practice of recording property acquisitions as expenses and then transferring these expenses to capital assets in the general ledger after a review process. We disagree. As our report states, IRS’ process for deriving a reliable P&E balance for its annual financial statements involves the use of extensive manual procedures by a contractor to extract and analyze IRS’ data on expenses to identify items that should be classified as assets. 
This process is time-consuming, occurs months after the acquisition of the assets, and only provides a reliable balance for P&E for a single point in time. This process does not provide IRS with reliable P&E data on an ongoing basis for use in operational decision-making. IRS also disagreed with our conclusions regarding the timeliness of IRS’ recording of obligations. IRS believed that the 2 instances we cited of IRS’ failure to timely record obligations were isolated and thus did not constitute a material weakness in controls over appropriated funds. The 2 instances cited in our report were illustrative examples of IRS’ failure to record obligations before goods and services were received, and did not represent the total number of errors found in our testing. In fact, we found 10 instances in which IRS failed to record obligations before goods and services were received. These exceptions were brought to the attention of IRS staff and management, in writing, throughout the audit. These 10 instances together represent more than isolated instances of IRS’ not recording obligations before goods and services are received. It is also important to note that we did not characterize in our report the issue of IRS not timely recording obligations in and of itself as a material weakness. However, taken collectively, this issue, plus other issues in the area of appropriated funds management, constitutes a material weakness in IRS’ internal controls over its appropriated funds that precludes IRS from providing reasonable assurance that material misstatements would be prevented or detected on a timely basis. In addition, IRS disagreed that we should include its failure to properly record adjustments to obligations as a material weakness. IRS also requested that we reconsider our recommendation that it include in its systems modernization blueprint the capability to differentiate between valid and invalid adjustments to prior-year obligations. 
IRS stated that the issue stemmed from a difference between its interpretation and ours of the definition of upward and downward adjustments. IRS believed that it had successfully resolved this issue because it made audit adjustments we proposed prior to issuing its final fiscal year 2000 financial statements and stated that it would continue to make these adjustments in the future. While we agree that IRS made the audit adjustments we proposed to its financial statements, we disagree that this issue has been resolved and should be excluded from our report. As discussed above, our report does not characterize this issue in and of itself as a material weakness. As stated in our report, we requested that IRS make adjustments in instances involving changes in accounting codes and travel entries that do not meet the definition of upward and downward adjustments. IRS made these adjustments to its fiscal year 2000 financial statements. These adjustments eliminated the type of known errors found during our testing and reduced the dollar amounts of these accounts to levels not considered material for purposes of fairly presenting the financial statements as a whole. These adjustments do not, however, correct the underlying problems that gave rise to the errors in these accounts that required adjusting. Also, while IRS took exception to our recommendation, it noted that the CFO has included this issue in the functional requirements for IRS’ new financial management system, and that, in the short term, it will continue to make these adjustments manually. These corrective actions, if effectively implemented, should address our recommendation regarding this issue. We will evaluate the effectiveness of these actions during the fiscal year 2001 audit. IRS also contested our including an example in the report to illustrate its failure to record the liability for goods and services when received. 
IRS stated that it had entered into an agreement with us to exclude invoices received after November 30, 2000, from fiscal year 2000 audit procedures. As the invoice for this particular transaction was received on December 6, 2000, IRS believed that this transaction fell outside the agreed-to cutoff date and should thus not be cited. We disagree with IRS’ characterization of what was agreed to. The agreement between IRS and us related to our testing of subsequent disbursements. In previous years, we tested disbursements made within the 3 months following fiscal year-end to identify transactions that should have been, but were not, recorded as a transaction in the year under audit. In fiscal year 2000, we agreed to reduce the test period to 2 months following the fiscal year-end, that is, we would test only subsequent disbursements made from October 1 through November 30 after the fiscal year-end. However, the particular example in our report that IRS is taking issue with was identified during our testing of IRS’ ending undelivered orders balance—this was separate and apart from the testing of subsequent disbursements. Further, lockbox services are recurring transactions covered by 5-year contracts. Consequently, IRS had the capability to accrue for these services without waiting for the invoice. As our report states, the most likely misstatement of the ending undelivered orders balance resulting from the failure to timely record receipt of undelivered orders was $47 million, with an upper error limit of $87 million. The magnitude of these errors reinforces the need for IRS to act to ensure that goods and services are recorded when received. With respect to financial reporting, IRS took issue with our findings that material inaccuracies were found in the fiscal year 2000 financial statements and that these inaccuracies were not effectively detected in IRS’ review of these financial statements. 
IRS disagreed that these findings should be cited as a material reporting weakness. IRS further stated that it was aware of only two material adjustments we proposed that fell within the purview of financial reporting. Again, we disagree. No single internal control deficiency over financial reporting cited in our report constitutes a material weakness by itself—it is the combination of these deficiencies that constitutes a material weakness. Further, as our report states, IRS’ draft financial statements contained material inaccuracies and the review procedures instituted by IRS were not effective in identifying and addressing errors and omissions material to the financial statements. For example, the first two draft financial statements prepared in January and early February 2001 omitted a material footnote comparing IRS’ Statement of Budgetary Resources with the President’s Budget as required by U.S. generally accepted accounting principles and OMB Bulletin 97-01, despite the fact that we had indicated to IRS in October 2000 that the footnote was necessary. An effective review procedure would have identified this material omission. Further, we proposed not 2, but 14 audit adjustments, which IRS accepted and recorded. The aggregate absolute value impact of these adjustments was (1) $160 million to assets and liabilities, (2) $140 million to net cost, and (3) $227 million to the statements of financing and budgetary resources. In the area of refunds, IRS disagreed with our finding that IRS does not screen all EITC claims through the Electronic Fraud Detection System (EFDS). IRS stated that all EITC claims are run through the EFDS program, which prioritizes returns according to criteria that were based on the 1997 EITC Compliance study. We agree that all cases with EITC refund claims are run through the EFDS program by IRS’ Criminal Investigation Division and assigned a score to assist in prioritizing which cases to work. 
The Criminal Investigation Division, in turn, refers cases above a certain score to the Examination Branch for examination. However, the Examination Branch only examines a subset of those cases referred for examination based upon its perceived level of available resources without collecting the data necessary to determine whether it is focusing the appropriate level of resources on this effort. Without such data, IRS is unable to determine the extent to which refunds associated with invalid EITC claims could be prevented or minimized had IRS devoted more resources to its examination efforts. We have modified our report to provide a more detailed explanation of the EITC examination selection process. IRS also disagreed with our recommendation that it implement policies and procedures requiring all employees to itemize their time on their time cards. IRS stated that it currently tracks itemized information for most employees through its functional tracking systems. We will follow up during our fiscal year 2001 audit to assess the adequacy of this approach. Again, we recognize that IRS achieved an important milestone in producing for the first time combined financial statements in fiscal year 2000 that were fairly stated in all material respects. However, as we state in our report, the tremendous efforts undertaken by IRS staff and management to produce reliable financial statements do not result in the reliable, useful, and timely financial and performance information IRS needs for decision-making on an ongoing basis. This approach does not address the underlying financial management and operational issues that adversely affect IRS’ ability to effectively fulfill its responsibilities as the nation’s tax collector. As we have reported for several years, long-term and systematic improvements in IRS’ processes and systems are needed to address the management challenges we have identified. 
During fiscal year 2000, IRS demonstrated a strong commitment to address the operational and financial management issues raised by us in previous financial statement audits. It successfully implemented a number of initiatives to address outstanding financial-related recommendations and laid the groundwork for continued sustainable improvements in financial management. We will continue to work closely with IRS to build on the improvements made in fiscal year 2000 and to achieve sustained progress in these areas. The complete text of the IRS’ Deputy Commissioner for Operations’ response to this report is reprinted in appendix III. This report contains new recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations. You should send your statement to the Senate Committee on Governmental Affairs and the House Committee on Government Reform within 60 days after the date of this report. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made over 60 days after the date of this report. 
We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Appropriations; Senate Committee on Finance; Senate Committee on Governmental Affairs; Senate Committee on the Budget; Subcommittee on Treasury, General Government, and Civil Service, Senate Committee on Appropriations; Subcommittee on Taxation and IRS Oversight, Senate Committee on Finance; Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, Senate Committee on Governmental Affairs; House Committee on Appropriations; House Committee on Ways and Means; House Committee on Government Reform; House Committee on the Budget; Subcommittee on Government Efficiency, Financial Management, and Intergovernmental Relations, House Committee on Government Reform; and Subcommittee on Oversight, House Committee on Ways and Means. In addition, we are sending copies of this report to the Chairman and Vice-Chairman of the Joint Committee on Taxation, the Secretary of the Treasury, the Director of the Office of Management and Budget, the Chairman of the IRS Oversight Board, and other interested parties. Copies will be made available to others upon request. This report was prepared under the direction of Steven J. Sebastian, Acting Director, Financial Management and Assurance, who can be reached at (202) 512-3406. If I can be of further assistance, please call me at (202) 512-2600.

As part of our audit of IRS’ fiscal year 2000 financial statements, we evaluated IRS’ internal controls and its compliance with selected provisions of laws and regulations, and we followed up on the status of open recommendations from prior financial audits and related financial management reports. We designed our audit procedures to test relevant controls and included tests for proper authorization, execution, accounting, and reporting of transactions. 
Specifically, we

- Tested selected statistical samples of unpaid assessment, revenue, refund, accounts payable, accrued expense, payroll, nonpayroll, and undelivered order transactions. These statistical samples were selected primarily to substantiate, and in some cases derive, balances and activities reported on IRS’ financial statements. Consequently, dollar errors or amounts can be and have been statistically projected to the population of transactions from which they were selected. In testing these samples, certain attributes were identified that indicated significant deficiencies in the design or operation of internal control. These attributes can be and have been statistically projected to the appropriate populations.
- Conducted analytical testing procedures where appropriate.
- Evaluated relevant internal controls over financial reporting and reviewed the overall form and content of the financial statements.
- Reviewed the IRS contractor’s methodology and procedures for compiling the fiscal year 2000 P&E additions.
- Tested detailed purchasing transactions of P&E, major systems, capital leases, and leasehold improvements and a statistical sample of P&E items at several IRS locations.
- Compared EITC amounts from IRS and Treasury reports, and reviewed EITC audit cases.
- Tested transactions that represent the underlying basis of amounts distributed to various trust funds, primarily the Highway Trust Fund and Airport and Airway Trust Fund.
- Reviewed the IRS certifications of excise tax revenue distributed to the Highway Trust Fund and Airport and Airway Trust Fund.
- Reviewed IRS’ reconciliations and specific controls over refund processing and financial reporting.
- Observed physical safeguards over cash and checks received and processed at campuses, field offices, and lockbox banks.
- Interviewed and observed management and personnel at campuses, field offices, and lockbox banks. 
- Reviewed relevant audit reports from the Office of the Treasury Inspector General for Tax Administration.
- Reviewed IRS’ fiscal year 2000 Federal Managers’ Financial Integrity Act Annual Assurance Statement, IRS’ January 2001 letter to Congress responding to Recommendations to Improve Financial and Operational Management (GAO-01-42), and IRS’ April 2001 Remediation Plan.

We performed our work from April 2000 through February 2001 in accordance with U.S. generally accepted government auditing standards. We have also issued a management letter addressing additional matters that we identified during our fiscal year 2000 audit regarding accounting procedures and internal controls that could be improved, and we have issued separate reports on computer security issues.

Appendix II consists of two tables. Table 9 lists our recommendations from prior financial statement audits and related financial management reports. Table 10 lists new recommendations resulting from our fiscal year 2000 audit. From our previous reports on IRS’ financial activities, 85 recommendations remained open as of the date of this report (1 through 85 in table 9). We are closing 24 of these recommendations primarily because IRS has addressed them or because they are being superseded by updated or more detailed recommendations. Thus, 61 of these prior recommendations remain open. The column “GAO status of recommendations” in table 9 lists the current status of these recommendations and indicates whether we believe that each open recommendation could be addressed in the short term (such as enforcing policies that are not being consistently followed) or whether each would require long-term changes for fundamentally deficient financial systems or other more extensive changes. We are also making 10 new recommendations in this report, numbered 86 through 95 in table 10, with short- or long-term changes also indicated. Consequently, 71 recommendations are open as of the date of this report. 
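The statistical projection of sampled dollar errors to a population total, mentioned in the sampling methodology above, can be sketched in code. The report does not disclose GAO's actual estimator, so the function below is an illustrative assumption only: it uses a simple mean-per-unit (expansion) projection and a one-sided upper limit based on the sample standard error, not GAO's method.

```python
# Illustrative sketch only: assumes a mean-per-unit projection, in which the
# average dollar error found in a random sample is scaled to the population,
# plus a one-sided upper limit computed from the sample standard error.
import statistics

def project_errors(sample_errors, sample_size, population_size, z=1.645):
    """Project sampled dollar errors to a population total.

    sample_errors   -- dollar error found in each erroneous sampled item
    sample_size     -- total number of transactions examined
    population_size -- number of transactions in the population
    z               -- one-sided z-value (1.645 ~ 95 percent confidence)
    """
    # Pad with zeros for sampled items in which no error was found.
    errors = list(sample_errors) + [0.0] * (sample_size - len(sample_errors))
    mean_error = statistics.mean(errors)
    std_error = statistics.stdev(errors) / sample_size ** 0.5

    point_estimate = mean_error * population_size   # "most likely" misstatement
    upper_limit = (mean_error + z * std_error) * population_size
    return point_estimate, upper_limit
```

Under this sketch, a point estimate and an upper error limit (such as the $47 million and $87 million figures cited for undelivered orders) would come from the same sample: the point estimate scales the observed mean error, while the upper limit adds a confidence allowance for sampling risk.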
We have highlighted in bold the 9 recommendations we consider of highest priority for IRS to address. These are recommendations 6, 8, 17, 47, 48, 49, 53, 54, and 55. We will continue to monitor IRS’ progress toward addressing each of the recommendations in this appendix during our fiscal year 2001 audit. In addition to those named above, Tuyet-Quan Thai, Delores Lee, Richard Harada, George Jones, and William Cordrey made key contributions to this report.
This is a follow-on to GAO's report on its audit of the Internal Revenue Service's (IRS) fiscal year 2000 financial statements.
Federal regulation, like taxing and spending, is one of the basic tools of government used to implement public policy. Agencies publish thousands of regulations each year to achieve goals such as ensuring that workplaces, air travel, and food are safe; that the nation’s air, water, and land are not polluted; and that the appropriate amounts of taxes are collected. Because regulations affect so many aspects of citizens’ lives, it is crucial that rulemaking procedures and practices be effective and transparent. Over the last decade, at the request of Congress, we have prepared over 60 reports and testimonies reviewing crosscutting aspects of those rulemaking procedures and practices. I would like to focus my remarks on topics or themes emerging from this work that are most relevant to this subcommittee’s oversight agenda. These include: (1) regulatory analysis and accountability requirements, (2) presidential and congressional oversight of agency rulemaking, and (3) notice and comment rulemaking procedures under the Administrative Procedure Act (APA). Congress has frequently asked us to evaluate the effectiveness of requirements that were initiated over the past 25 years to improve the federal regulatory process. Among the goals of these requirements are reducing regulatory burdens, requiring more rigorous regulatory analysis, and enhancing oversight of agencies’ rulemaking. We have paid repeated attention to agencies’ compliance with some of these requirements, such as ones in the Paperwork Reduction Act (PRA), Regulatory Flexibility Act (RFA), Unfunded Mandates Reform Act (UMRA), Congressional Review Act (CRA), and Executive Order 12866 on regulatory planning and review. 
Our reviews identified at least four overall benefits associated with existing regulatory analysis and accountability requirements:

Encouraging and facilitating greater public participation in rulemaking—Some initiatives have encouraged and facilitated greater public participation and consultation in rulemaking. Opportunities for the public to communicate with agencies by electronic means have expanded and requirements imposed by some regulatory reform initiatives encouraged additional consultation with the parties that might be affected by rules under development by federal agencies.

Improving the transparency of the rulemaking process—The initiatives implemented over the past 25 years have helped to make the rulemaking process more open by facilitating public access to information, providing more information about the potential effects of rules and available alternatives, and requiring more documentation and justification of agencies’ decisions. Although we have often recommended that more could be done to increase transparency, we have also highlighted the valuable contribution made when agencies had particularly clear and complete documentation supporting their rulemaking.

Increasing the attention directed to rules and rulemaking—Our reports have pointed out that oversight of agencies’ rulemaking from various sources—including Congress, the administration, and GAO, among others—can result in useful changes to rules. Furthermore, we noted that agencies’ awareness of this added scrutiny may provide an important indirect effect, potentially leading to less costly, more effective rules.

Increasing expectations regarding the analytical support for proposed rules—The analytical requirements that have been added over the years have raised the bar regarding the information and analysis needed to support policy decisions underlying regulations. 
Such requirements have also prompted agencies to provide more data on the expected benefits and costs of their rules and encouraged the identification and consideration of available alternatives. On the other hand, we also identified at least four recurring reasons why the requirements imposed by such initiatives have not been more effective:

Lack of clarity and other weaknesses in key terms and definitions—Unclear terms and definitions can affect the applicability and effectiveness of certain requirements. For example, we have frequently cited the need to clarify key terms in RFA. RFA’s analytical requirements, which are intended to help address concerns about the impact of rules on small entities, do not apply if an agency head certifies that a rule will not have a “significant economic impact on a substantial number of small entities.” However, RFA neither defines this key phrase nor places clear responsibility on any party to define it consistently across the government. Not surprisingly, we found that agencies’ compliance with RFA varied widely from one agency to another and agencies had different interpretations of RFA’s requirements. In another example, our review of agencies’ compliance with a requirement to adjust civil monetary penalties for inflation under the Federal Civil Penalties Inflation Adjustment Act (Inflation Adjustment Act) indicated that both a lack of clarity and apparent shortcomings in some of the Act’s provisions appeared to have prevented agencies from keeping their penalties in line with inflation. Although we recommended changes to address these shortcomings, to date Congress has not acted on our recommendations.

Limited scope and coverage of various requirements—Simply put, some rulemaking requirements apply to few rules or require little new analysis for the rules to which they apply. 
For example, we pointed out last year that the relatively small number of rules identified as containing mandates under UMRA could be attributed in part to the 14 different exemptions, exclusions, and other restrictions on the identification of regulatory mandates under the Act. We also observed unintended “domino” effects of making certain requirements contingent on other requirements. For example, some requirements only apply to rules for which an agency published a notice of proposed rulemaking, but, as I will discuss later, we found that agencies issue many final rules without associated proposed rules. In addition, the requirement for “look back” reviews of existing regulations under section 610 of RFA only applies if the agency determined that its rule would have a significant economic impact on a substantial number of small entities. When RFA was amended in 1996 by the Small Business Regulatory Enforcement Fairness Act (SBREFA) to require additional actions, such as preparing compliance guides and convening advocacy review panels for certain rules, this appeared to prompt a reduction in the number of rules that the Environmental Protection Agency identified as affecting small entities (and that would therefore trigger the new requirements).

Uneven implementation of the initiatives’ requirements—Sometimes, agencies’ implementation of various requirements serves to limit their effectiveness. For example, a recurring message in our reports over the years is that some agencies’ economic analyses need improvement. Our reviews have found that economic assessments that analyze regulations prospectively are often incomplete and inconsistent with general economic principles. Moreover, the assessments are not always useful for comparisons across the government, because they are often based on different assumptions for the same key economic variables. 
In our recent report on UMRA, we noted that parties from various sectors expressed concerns about the accuracy and completeness of agencies’ cost estimates, and some also emphasized that more needed to be done to address the benefits side of the equation. Our reviews have found that not all benefits are quantified and monetized by agencies, partly because of the difficulty in estimation. In our recent report on the Paperwork Reduction Act, we noted that the Act requires chief information officers (CIOs) to review and certify information collections to help minimize collection burdens, but our analysis of case studies showed that CIOs provided these certifications despite often missing or inadequate support from the program offices sponsoring the collections.

A predominant focus on just one part of the regulatory process—More analytical and procedural requirements have focused on agencies’ development of rules than on other phases of the regulatory process, from the underlying statutory authorization, through effective implementation and monitoring of compliance with regulations, to the evaluation and revision of existing rules. While rulemaking is clearly an important point in the regulatory process, these other phases also help determine the effectiveness of federal regulation.

Closely related to regulatory analysis and accountability requirements are efforts to enhance the oversight of agencies’ rulemaking by Congress, the President, and the judiciary. In general, efforts to increase presidential influence and authority over the regulatory process, primarily through the mechanism of Office of Management and Budget (OMB) review of agencies’ rulemaking, have become more significant and widely used over the years. However, our reviews suggest that mechanisms to increase congressional influence, such as procedures for Congress to disapprove proposed rules, appear to have been less able to influence changes in agencies’ rules to date. 
We have not done work that directly addresses issues regarding judicial review of agencies’ rulemaking. In our September 2003 report on OMB’s role in reviews of agencies’ rules, we recounted the history of centralized review of agencies’ regulations within the Executive Office of the President. We noted the expansion of OMB’s role in the rulemaking process over the past 30 years under various executive orders. Although not without controversy, this expansion of a centralized regulatory review function has become well established. OMB’s role in the rulemaking process has been further enhanced by provisions in various statutes (such as the Information Quality Act, PRA, and UMRA) that placed additional oversight responsibilities on OMB. The formal process by which OMB currently reviews agencies’ proposed and final rules has essentially remained unchanged since Executive Order 12866 was issued in 1993, but we reported on several changes in OMB policies in recent years that affected the process, such as increased emphasis on economic analysis, stricter adherence to the 90-day time limit for reviews of agencies’ draft rules, and improvements in the transparency of the OMB review process (although some elements of the transparency of that process are still unclear). Based on our review of OMB and agency dockets on 85 rules reviewed by OMB during a 1-year period, we also showed that OMB’s reviews sometimes result in significant changes to agencies’ draft rules. The Congressional Review Act was enacted as part of SBREFA in 1996 to better ensure that Congress has an opportunity to review, and possibly reject, rules before they become effective. CRA established expedited procedures by which members of Congress may disapprove agencies’ rules by introducing a resolution of disapproval that, if adopted by both Houses of Congress and signed by the President, can nullify an agency’s rule. 
However, this disapproval process has only been used once, in 2001, when Congress disapproved the Department of Labor’s rule on ergonomics. CRA also requires agencies to file final rules with both Congress and GAO before the rules can become effective. Our role under CRA is to provide Congress with a report on each major rule (for example, those with an impact on the economy of $100 million or more) that includes GAO’s assessment of the issuing agency’s compliance with the procedural steps required by various acts and executive orders governing the rulemaking process. Although we reported that agencies’ compliance with CRA requirements was inconsistent during the first years after its enactment, compliance has since improved. Congress also passed the Truth in Regulating Act (TIRA) in 2000 to provide a mechanism for it to obtain more information about certain rules. TIRA contemplated a 3-year pilot project during which GAO would perform independent evaluations of “economically significant” agency rules when requested by a chairman or ranking member of a committee of jurisdiction of either House of Congress. However, during the 3-year period contemplated for the pilot project, Congress did not enact any specific appropriation to cover TIRA evaluations, as called for in the Act, and the authority for the 3-year pilot project expired on January 15, 2004. Therefore, we have no information on the potential effectiveness of this mechanism.

Some of our reviews have touched on agencies’ compliance with APA. APA established the most long-standing and broadly applicable federal requirements for informal rulemaking, also known as notice and comment rulemaking. Among other things, APA generally requires that agencies publish a notice of proposed rulemaking (NPRM) in the Federal Register. After giving interested persons an opportunity to comment on the proposed rule, and after considering the public comments, the agency may then publish the final rule. 
However, APA provides exceptions to these requirements, including cases when, for “good cause,” an agency finds that notice and comment procedures are “impracticable, unnecessary, or contrary to the public interest,” as well as for interpretive rules. When agencies use the “good cause” exception, APA requires that they explicitly say so and provide a rationale for the exception’s use when the rule is published in the Federal Register. An agency’s claim of an exception to notice and comment procedures is subject to judicial review. The legislative history of APA, and associated case law, generally reinforce the view that the “good cause” exception should be narrowly construed. In addition, the Administrative Conference of the United States (ACUS) encouraged agencies to use notice and comment procedures where not strictly required by APA and recommended that Congress eliminate or narrow several of the exceptions in APA. In various reports over the years, we noted that agencies had not issued NPRMs before publishing certain final rules. When we reported on this issue in 1998, we estimated that about half of all final actions published in 1997 had been issued without an associated NPRM. Although many of those final actions without proposed rules were minor actions, 11 of the 61 major rules (for example, those with an impact of $100 million or more) did not have NPRMs. While we have not studied this issue in depth since 1998, we have continued to observe final rules without proposed rules during our reviews. For example, during our review of the identification of federal mandates under UMRA in 2001 and 2002, we found that 28 of the 65 major rules that imposed new requirements on nonfederal parties did not have NPRMs. We have also reported that agencies’ explanations for use of APA’s “good cause” exception were sometimes unclear, for example, simply stating that notice and comment would delay rules that were, in some general way, in the public interest. 
We noted that, when agencies publish final rules without NPRMs, the public’s ability to participate in the rulemaking process is limited. Also, several regulatory reform requirements that Congress has enacted during the past 25 years—such as RFA’s and UMRA’s analytical requirements—use as their trigger the publication of an NPRM. Therefore, it is important that agencies clearly explain why notice and comment procedures are not followed. At the same time, the number of final rules without proposed rules appears to reflect, at least in part, agencies’ acceptance of procedures for noncontroversial and expedited rulemaking actions known as “direct final” and “interim final” rulemaking that were previously recommended by ACUS. Although we observed some differences in how agencies implement direct final rulemaking, it generally involves publication of a rule with a statement that the rule will be effective on a particular date unless an adverse comment is received within a specified period of time (such as 30 days). For example, the Federal Aviation Administration (FAA) has used direct final rulemaking procedures nearly 40 times this year to modify the legal descriptions of controlled airspace at various airports across the country. FAA issued these modifications as direct final rules because it anticipated no adverse or negative comments. FAA also noted that these regulations involve only an established body of technical regulations for which frequent and routine amendments are necessary to keep them operationally current. If an adverse comment is received on a direct final rule, the agency withdraws the direct final rule and may publish the rule as a proposed rule under normal notice and comment procedures. For interim final rulemaking, an agency issues a final rule without an NPRM that is generally effective immediately, but with a postpromulgation opportunity for the public to comment. Public comments may persuade the agency to later revise the interim rule. 
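The direct final procedure described above amounts to a simple decision rule: the rule becomes effective on its stated date unless any adverse comment arrives within the comment window, in which case the agency withdraws it and may restart under normal notice and comment. A minimal sketch of that decision rule, with class and method names that are purely illustrative (no agency's actual system works this way in code):

```python
from dataclasses import dataclass, field

@dataclass
class DirectFinalRule:
    """Illustrative model of a direct final rulemaking action."""
    comment_window_days: int = 30          # e.g., a 30-day comment period
    adverse_comments: list = field(default_factory=list)

    def receive_comment(self, text: str, adverse: bool = False) -> None:
        # Only adverse comments affect the rule's disposition.
        if adverse:
            self.adverse_comments.append(text)

    def disposition(self) -> str:
        # Any adverse comment within the window forces withdrawal;
        # otherwise the rule becomes effective on the stated date.
        if self.adverse_comments:
            return "withdrawn; reissue as NPRM"
        return "effective"

rule = DirectFinalRule()
rule.receive_comment("supportive comment", adverse=False)
print(rule.disposition())  # effective
rule.receive_comment("objection", adverse=True)
print(rule.disposition())  # withdrawn; reissue as NPRM
```

Interim final rulemaking inverts the ordering: the rule takes effect first and comments are considered afterward, so a similar sketch would make the rule effective immediately and treat comments as inputs to a possible later revision.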
Although neither direct nor interim final rulemaking is specifically mentioned in APA, both may be viewed as an application of the “good cause” exception in APA. Direct and interim final rules appear to account for hundreds of the final regulatory actions published each year. In our report on final rules without proposed rules, we identified 718 interim and direct final regulatory actions published by agencies during 1997. A quick search of recent Federal Register notices showed that agencies published over 550 notices in 2004 for which the subject rulemaking action was identified as a direct final, interim final, or interim rule. Through October 21 of this year, agencies had published nearly 400 such notices. Direct final rules accounted for almost 60 percent of these notices. The findings and emerging issues reported in our body of work on federal rulemaking suggest four areas on which the subcommittee might consider taking legislative action or sponsoring further study: (1) generally reexamine rulemaking structures and processes; (2) address previously identified weaknesses of existing statutory requirements; (3) promote additional improvements in the transparency of agencies’ rulemaking actions; and (4) open a broader examination of how developments in information technology might affect the notice and comment rulemaking process. As we have noted in several products this year, we believe that it is appropriate and necessary to begin a broad reexamination of what the federal government does and how it does it, especially given the fiscal challenges facing the country. Although the federal rulemaking process does not have much direct impact on the federal budget—given that most costs of regulation fall on regulated parties and their customers or clients—we have testified that it nevertheless should be part of that reexamination. We recognize that a successful reexamination of the base of the federal government will entail multiple approaches over a period of years. 
No single approach or reform can address all of the questions and program areas that need to be revisited. However, as we have previously stated, federal regulation is a critical tool of government, and regulatory programs play a key part in how the federal government addresses many of the country’s needs. This subcommittee has already begun such a reexamination through its current oversight agenda, and ACUS, if funded, might well play a valuable role in carrying out the detailed research that will be needed. One emerging trend that any such reexamination should take into account is the evolution of the markets and industries that federal agencies regulate. Changes in the regulatory environment, especially the growing influence of the global economy, have implications for federal rulemaking procedures and practices. For example, agency officials pointed out to us in 1999 the growing importance of international standards and standard-setting bodies, alongside the role of international agreements, in producing certification standards of interest and importance to American businesses. More recently, international developments regarding global harmonization of regulatory standards, chemical risk-assessment requirements, Internet governance issues, and compliance with capital standards and requirements for financial institutions have attracted attention in the regulatory arena. More specifically, Congress might want to revisit APA in view of changes in agencies’ practices over time, such as greater use of interim and direct final rulemaking for certain regulations. For example, we observed that some agencies differed in their policies and practices regarding direct final rulemaking. Whether there should be one standard approach to such rulemaking by federal agencies is an open question. 
In addition, although direct final rulemaking had been viewed by ACUS as permissible under the APA, ACUS nevertheless suggested that Congress may wish to expressly authorize the process to alleviate any uncertainty and reduce the potential for litigation. With regard to interim final rulemaking, ACUS had similarly recommended that, when APA is reviewed, Congress amend the Act to mandate use of postpromulgation comment procedures for rules issued under the “good cause” exception. Our prior reviews have identified many opportunities to revisit and refine existing regulatory requirements. Although progress has been made to implement recommendations we raised in past reports, there are still unresolved issues. We still believe, for example, that the promise of RFA may never be realized until key terms and definitions, such as “substantial number of small entities,” are clarified and/or an entity with the authority and responsibility to do so is established. Similarly, we believe that civil penalties are an important element of regulatory enforcement and deterrence, but we found that agencies are unable to fully adjust their penalties for inflation under the provisions of current law. Congressional action is needed to address these issues. As pointed out earlier, we have identified many positive developments regarding the transparency of the regulatory process, but more could be done. For example, additional attention could be paid to agencies’ explanations for statements or certifications that certain requirements do not apply. This is another area that might merit additional study of available options. Some uses of exemptions, such as agencies’ claims that a rule does not contain a federal mandate as defined by UMRA or that a proposed rule has no federalism impacts, do not require the agency to provide any more support than the certification itself. 
Other uses, such as claims of “good cause” to publish final rules without proposed rules, require agencies to provide a clear statement and explanation (although even here we noted that sometimes agencies’ explanations were vague). This raises the question of whether there should be a more demanding requirement for agencies to essentially “show their work” behind such certifications, and, if so, what form such requirements might take. One emerging trend we have observed in our work is the expanded role of technology-based innovations in enhancing the regulatory process. Agencies’ use of the Internet and other technologies to enhance the regulatory process has rapidly increased in importance. In about 5 years, we have gone from reporting on and encouraging the early development of some innovative technologies in support of rulemaking to reporting on the implementation of governmentwide e-government initiatives, such as Regulations.gov and the centralized electronic docket for executive branch agencies. The increased use of technology-based innovations may provide opportunities to transform the rulemaking process, not simply to replace “paper” processes with electronic versions. Continued study is therefore warranted of how such initiatives can open additional opportunities for public participation in and access to information about federal rulemaking, as well as how information technology can be used to improve the federal government’s ability to analyze public comments. Mr. Chairman, this concludes my prepared statement. Once again, I appreciate the opportunity to testify on these important issues. I would be pleased to address any questions you or other members of the committee might have at this time. If additional information is needed regarding this testimony, please contact J. Christopher Mihm, Managing Director, Strategic Issues, at (202) 512-6806 or mihmj@gao.gov. Electronic Rulemaking: Progress Made in Developing Centralized E-Rulemaking System. 
GAO-05-777. Washington, D.C.: September 9, 2005. Regulatory Reform: Prior Reviews of Federal Regulatory Process Initiatives Reveal Opportunities for Improvements. GAO-05-939T. Washington, D.C.: July 27, 2005. Economic Performance: Highlights of a Workshop on Economic Performance Measures. GAO-05-796SP. Washington, D.C.: July 2005. Paperwork Reduction Act: New Approach May Be Needed to Reduce Government Burden on Public. GAO-05-424. Washington, D.C.: May 20, 2005. Unfunded Mandates: Views Vary About Reform Act’s Strengths, Weaknesses, and Options for Improvement. GAO-05-454. Washington, D.C.: March 31, 2005. 21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005. Electronic Government: Federal Agencies Have Made Progress Implementing the E-Government Act of 2002. GAO-05-12. Washington, D.C.: December 10, 2004. Unfunded Mandates: Analysis of Reform Act Coverage. GAO-04-637. Washington, D.C.: May 12, 2004. Paperwork Reduction Act: Agencies’ Paperwork Burden Estimates Due to Federal Actions Continue to Increase. GAO-04-676T. Washington, D.C.: April 20, 2004. Rulemaking: OMB’s Role in Reviews of Agencies’ Draft Rules and the Transparency of Those Reviews. GAO-03-929. Washington, D.C.: September 22, 2003. Electronic Rulemaking: Efforts to Facilitate Public Participation Can Be Improved. GAO-03-901. Washington, D.C.: September 17, 2003. Civil Penalties: Agencies Unable to Fully Adjust Penalties for Inflation Under Current Law. GAO-03-409. Washington, D.C.: March 14, 2003. Regulatory Flexibility Act: Clarification of Key Terms Still Needed. GAO-02-491T. Washington, D.C.: March 6, 2002. Regulatory Reform: Compliance Guide Requirement Has Had Little Effect on Agency Practices. GAO-02-172. Washington, D.C.: December 28, 2001. Federal Rulemaking: Procedural and Analytical Requirements at OSHA and Other Agencies. GAO-01-852T. Washington, D.C.: June 14, 2001. 
Regulatory Flexibility Act: Key Terms Still Need to Be Clarified. GAO-01-669T. Washington, D.C.: April 24, 2001. Regulatory Reform: Implementation of Selected Agencies’ Civil Penalties Relief Policies for Small Entities. GAO-01-280. Washington, D.C.: February 20, 2001. Regulatory Management: Communication About Technology-Based Innovations Can Be Improved. GAO-01-232. Washington, D.C.: February 12, 2001. Regulatory Flexibility Act: Implementation in EPA Program Offices and Proposed Lead Rule. GAO/GGD-00-193. Washington, D.C.: September 20, 2000. Electronic Government: Government Paperwork Elimination Act Presents Challenges for Agencies. GAO/AIMD-00-282. Washington, D.C.: September 15, 2000. Regulatory Reform: Procedural and Analytical Requirements in Federal Rulemaking. GAO/T-GGD/OGC-00-157. Washington, D.C.: June 8, 2000. Certification Requirements: New Guidance Should Encourage Transparency in Agency Decisionmaking. GAO/GGD-99-170. Washington, D.C.: September 24, 1999. Federalism: Previous Initiatives Have Little Effect on Agency Rulemaking. GAO/T-GGD-99-131. Washington, D.C.: June 30, 1999. Regulatory Accounting: Analysis of OMB’s Reports on the Costs and Benefits of Federal Regulation. GAO/GGD-99-59. Washington, D.C.: April 20, 1999. Regulatory Flexibility Act: Agencies’ Interpretations of Review Requirements Vary. GAO/GGD-99-55. Washington, D.C.: April 2, 1999. Regulatory Burden: Some Agencies’ Claims Regarding Lack of Rulemaking Discretion Have Merit. GAO/GGD-99-20. Washington, D.C.: January 8, 1999. Federal Rulemaking: Agencies Often Published Final Actions Without Proposed Rules. GAO/GGD-98-126. Washington, D.C.: August 31, 1998. Regulatory Management: Implementation of Selected OMB Responsibilities Under the Paperwork Reduction Act. GAO/GGD-98-120. Washington, D.C.: July 9, 1998. Regulatory Reform: Agencies Could Improve Development, Documentation, and Clarity of Regulatory Economic Analyses. GAO/RCED-98-142. Washington, D.C.: May 26, 1998. 
Regulatory Reform: Implementation of Small Business Advocacy Review Panel Requirements. GAO/GGD-98-36. Washington, D.C.: March 18, 1998. Congressional Review Act: Implementation and Coordination. GAO-T-OGC-98-38. Washington, D.C.: March 10, 1998. Regulatory Reform: Agencies’ Section 610 Review Notices Often Did Not Meet Statutory Requirements. GAO/T-GGD-98-64. Washington, D.C.: February 12, 1998. Unfunded Mandates: Reform Act Has Had Little Effect on Agencies’ Rulemaking Actions. GAO/GGD-98-30. Washington, D.C.: February 4, 1998. Regulatory Reform: Changes Made to Agencies’ Rules Are Not Always Clearly Documented. GAO/GGD-98-31. Washington, D.C.: January 8, 1998. Regulatory Reform: Agencies’ Efforts to Eliminate and Revise Rules Yield Mixed Results. GAO/GGD-98-3. Washington, D.C.: October 2, 1997. Regulatory Reform: Implementation of the Regulatory Review Executive Order. GAO-T-GGD-96-185. Washington, D.C.: September 25, 1996. Regulatory Flexibility Act: Status of Agencies’ Compliance. GAO/GGD-94-105. Washington, D.C.: April 27, 1994. E-Rulemaking officials and the e-Rulemaking Initiative Executive Committee considered three alternative designs and chose to implement a centralized e-Rulemaking system based on cost savings, risks, and security. Officials relied on an analysis of the three alternatives using two cost and risk assessment models and a comparison of the alternatives to industry best practices. Prior to completing this analysis, officials estimated the centralized approach would save about $94 million over 3 years. They said when they developed this estimate, there was a lack of published information about costs related to paper or electronic rulemaking systems. They used their professional judgment and information about costs for developing and operating EPA’s paper and electronic systems, among other things, to develop the estimate. 
GAO examined EPA’s basis for selecting a centralized system, EPA’s collaboration with other agencies and agency views of that collaboration, and whether EPA used key management practices when developing the system. E-Rulemaking officials extensively collaborated with rulemaking agencies, and most officials at the agencies we contacted thought the collaboration was effective. E-Rulemaking officials created a governance structure that included an executive committee, advisory board, and individual work groups that discussed how to develop the e-Rulemaking system. We contacted 14 of the 27 agencies serving on the advisory board, and most felt their suggestions affected the system development process. Agency officials offered several examples to support their views, such as how their recommendations for changes to the system’s design were incorporated. The daunting challenges that face the nation in the 21st century establish the need for the transformation of government: federal agencies should meet these challenges by becoming flatter, more results-oriented, externally focused, partnership-oriented, and employee-enabling organizations. This testimony addresses how the long-term fiscal imbalance facing the United States, along with other significant trends and challenges, establishes the case for change and the need to reexamine the base of the federal government; how federal agencies can transform into high-performing organizations; and how multiple approaches and selected initiatives can support the reexamination and transformation of the government and federal agencies to meet these 21st century challenges. Long-term fiscal challenges and other significant trends and challenges facing the United States provide the impetus for reexamining the base of the federal government. Our nation is on an imprudent and unsustainable fiscal path driven by known demographic trends and rising health care costs, and relatively low revenues as a percentage of the economy. 
Unless we take effective and timely action, we will face large and growing structural deficits, eroding our ability to address the current and emerging needs competing for a share of a shrinking budget pie. At the same time, policymakers will need to confront a host of emerging forces and trends, such as changing security threats, increasing global interconnectedness, and a changing economy. To effectively address these challenges and trends, government cannot accept all of its existing programs, policies, functions, and activities as “givens.” Reexamining the base of all major existing federal spending and tax programs, policies, functions, and activities offers compelling opportunities to redress our current and projected fiscal imbalances while better positioning government to meet the new challenges and opportunities of this new century. In response, agencies need to change their cultures and create the capacity to become high-performing organizations by implementing a more results-oriented and performance-based approach to how they do business. To successfully transform, agencies must fundamentally reexamine their business processes, outmoded organizational structures, management approaches, and, in some cases, missions. GAO has hosted several forums to explore the change management practices that federal agencies can adopt to create high-performing organizations. For example, participants at a GAO forum broadly agreed on the key characteristics and capabilities of high-performing organizations, which can be grouped into four themes: a clear, well-articulated, and compelling mission; focus on needs of clients and customers; strategic management of people; and strategic use of partnerships. www.gao.gov/cgi-bin/getrpt?GAO-05-830T. To view the full product, including the scope and methodology, click on the link above. For more information, contact J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov. 
A successful reexamination of the base of the federal government will entail multiple approaches over a period of years. The reauthorization, appropriations, oversight, and budget processes should be used to review existing programs and policies. However, no single approach or institutional reform can address the myriad of questions and program areas that need to be revisited. GAO has recommended certain other initiatives to assist in the needed transformations. These include (1) developing a governmentwide strategic plan and key national indicators to assess the government’s performance, position, and progress; (2) implementing a framework for federal human capital reform; and (3) proposing specific transformational leadership models, such as creating a Chief Operating Officer/Chief Management Official with a term appointment at select agencies.

Standards: The information collection—
- Is necessary for the proper performance of agency functions.
- Avoids unnecessary duplication.
- Reduces burden on the public, including small entities.
- Uses language that is understandable to respondents.
- Will be compatible with respondents’ recordkeeping practices.
- Indicates the period for which records must be retained.
- Gives required information (e.g., whether response is mandatory).
- Was developed by an office with the necessary plan and resources.
- Uses appropriate statistical survey methodology (if applicable).
- Makes appropriate use of information technology.

The total is not always 12 because not all certifications applied to all collections. www.gao.gov/cgi-bin/getrpt?GAO-05-424. To view the full product, including the scope and methodology, click on the link above. For more information, contact Linda Koontz at (202) 512-6240 or koontzl@gao.gov. 
The Unfunded Mandates Reform Act of 1995 (UMRA) was enacted to address concerns about federal statutes and regulations that require nonfederal parties to expend resources to achieve legislative goals without being provided federal funding to cover the costs. UMRA generates information about the nature and size of potential federal mandates on nonfederal entities to assist Congress and agency decision makers in their consideration of proposed legislation and regulations. However, it does not preclude the implementation of such mandates. The parties GAO contacted provided a significant number of comments about UMRA, specifically, and federal mandates, generally. Their views often varied across and within the five sectors we identified (academic/think tank, public interest advocacy, business, federal agencies, and state and local governments). Overall, the numerous strengths, weaknesses, and options for improvement identified during the review fell into several broad themes, including UMRA-specific issues such as coverage and enforcement, as well as more general issues about the design, funding, and evaluation of federal mandates. First, UMRA coverage was, by far, the issue most frequently cited by parties from the various sectors. Parties across most sectors that provided comments said UMRA’s numerous definitions, exclusions, and exceptions leave out many federal actions that may significantly affect nonfederal entities and should be revisited. Among the most commonly suggested options were to expand UMRA’s coverage to include a broader set of actions by limiting the various exclusions and exceptions and lowering the cost thresholds, which would make more federal actions mandates under UMRA. However, a few parties, primarily from the public interest advocacy sector, viewed UMRA’s narrow coverage as a strength that should be maintained. 
At various times in its 10-year history, Congress has considered legislation to amend various aspects of the act to address ongoing questions about its effectiveness. Most recently, GAO was asked to consult with a diverse group of parties familiar with the act and to report their views on (1) the significant strengths and weaknesses of UMRA as the framework for addressing mandate issues and (2) potential options for reinforcing the strengths or addressing the weaknesses. To address these objectives, we obtained information from 52 organizations and individuals reflecting a diverse range of viewpoints. GAO analyzed the information acquired and organized it into broad themes for analytical and reporting purposes. Second, parties from various sectors also raised a number of issues about federal mandates in general. In particular, they had strong views about the need for better evaluation and research of federal mandates and more complete estimates of both the direct and indirect costs of mandates on nonfederal entities. The most frequently suggested option to address these issues was more post-implementation evaluation of existing mandates, or “look backs.” Such evaluations of the actual performance of mandates could enable policymakers to better understand mandates’ benefits, impacts, and costs, among other issues. In turn, developing such evaluation information could lead to the adjustment of existing mandate programs in terms of design and/or funding, perhaps resulting in more effective or efficient programs. GAO makes no recommendations in this report. www.gao.gov/cgi-bin/getrpt?GAO-05-454. To view the full product, including the scope and methodology, click on the link above. For more information, contact Orice M. Williams at (202) 512-5837, or williamso@gao.gov. Going forward, the issue of unfunded mandates raises broader questions about assigning fiscal responsibilities within our federal system. 
Federal and state governments face serious fiscal challenges both in the short and longer term. As GAO reported in its February 2005 report entitled 21st Century Challenges: Reexamining the Base of the Federal Government (GAO-05-325SP), the long-term fiscal challenges facing the federal budget and numerous other geopolitical changes challenging the continued relevance of existing programs and priorities warrant a national debate to review what the government does, how it does business, and how it finances its priorities. Such a reexamination includes considering how responsibilities for financing public services are allocated and shared across the many nonfederal entities in the U.S. system as well.

Highlights of GAO-03-409, a report to the Senate Committee on Governmental Affairs and the House Committee on Government Reform.

Civil penalties are an important element of regulatory enforcement, allowing agencies to punish violators appropriately and to serve as a deterrent to future violations. In 1996, Congress enacted the Inflation Adjustment Act to require agencies to adjust certain penalties for inflation. GAO assessed federal agencies’ compliance with the act and whether provisions in the act have prevented agencies from keeping their penalties in line with inflation. Congress may wish to consider amending the act to (1) require or permit agencies to adjust their penalties for lost inflation; (2) make the calculation and rounding procedures more consistent with changes in inflation; (3) permit agencies with exempt penalties to adjust them for inflation; and (4) give some agency the responsibility to monitor compliance and provide guidance. As of June 2002, 16 of 80 federal agencies with civil penalties covered by the Inflation Adjustment Act had not made the required initial adjustments to their penalties. Nineteen other agencies had not made required subsequent adjustments, and several other agencies had made incorrect adjustments. 
The act does not give any agency the authority to monitor compliance or to provide guidance to agencies. More important, several provisions of the act have prevented some agencies from fully adjusting their penalties for inflation. One provision limited the agencies’ first adjustments to 10 percent of the penalty amounts, even if the penalties were decades old and hundreds of percent behind inflation. The resultant “inflation gap” can never be corrected under the statute and grows with each subsequent adjustment. (The figure below illustrates the effect of the cap on one agency’s $1,000 penalty set in 1958.) Also, the act’s calculation and rounding procedures require agencies to lose a year of inflation each time they adjust their penalties, and can prevent some agencies from making adjustments until inflation increases by 45 percent or more (i.e., 15 years or more at recent rates of inflation). Finally, the act exempts penalties under certain statutes from its requirements entirely. Consequently, more than 100 exempted penalties have declined in value by 50 percent or more since Congress last set them. www.gao.gov/cgi-bin/getrpt?GAO-03-409. To view the full report, including the scope and methodology, click on the link above. For more information, contact Victor Rezendes (202) 512-6806 or rezendesv@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
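The effect of the 10 percent cap on first adjustments discussed above can be shown with a short calculation. The sketch below is purely illustrative: the cumulative inflation figure for 1958 to 2002 is an assumed round number, not an official CPI computation, and the function name is hypothetical; it also ignores the act's separate rounding rules.

```python
def first_adjustment(penalty, cumulative_inflation_pct, cap_pct=10):
    """Penalty after a first adjustment capped at cap_pct of the original.

    The increase is limited to cap_pct of the penalty amount, no matter
    how far behind inflation the penalty has fallen.
    """
    uncapped_increase = penalty * cumulative_inflation_pct / 100
    capped_increase = min(uncapped_increase, penalty * cap_pct / 100)
    return penalty + capped_increase

# A $1,000 penalty set in 1958, assuming roughly 480% cumulative inflation
# by 2002 (an illustrative figure). Fully adjusted, the penalty would be
# about $5,800; the capped first adjustment yields only $1,100.
print(first_adjustment(1000, 480))  # 1100.0
```

Because subsequent adjustments under the act build on the capped amount rather than the inflation-adjusted value, the resulting gap (about $4,700 under the assumed figure) is never closed, which is the "inflation gap" described above.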
Federal regulation is one of the basic tools of government used to implement public policy. Agencies publish thousands of regulations each year to achieve goals such as ensuring that workplaces, air travel, and food are safe; that the nation's air, water, and land are not polluted; and that the appropriate amount of taxes are collected. Because regulations affect so many aspects of citizens' lives, it is crucial that rulemaking procedures and practices be effective and transparent. GAO, at the request of Congress, has prepared over 60 reports and testimonies during the past decade that review aspects of federal rulemaking procedures and practices. This testimony summarizes some of the general findings and themes that have emerged from GAO's body of work on federal regulatory processes and procedures, including areas on which Congress might consider taking legislative action or sponsoring further study. GAO's prior reports and testimonies contain a variety of recommendations to improve various aspects of rulemaking procedures and practices. GAO's prior evaluations highlighted both benefits and weaknesses of rulemaking procedures and practices in areas such as (1) regulatory analysis and accountability requirements, (2) presidential and congressional oversight of agency rulemaking, and (3) notice and comment rulemaking procedures under the Administrative Procedure Act (APA). GAO's reviews identified at least four overall benefits associated with existing regulatory analysis and accountability requirements: encouraging and facilitating greater public participation in rulemaking; improving the transparency of the rulemaking process; increasing the attention directed to rules; and increasing expectations regarding the analytical support for proposed rules. 
On the other hand, GAO identified at least four recurring reasons why such requirements have not been more effective: unclear key terms and definitions; limited scope and coverage; uneven implementation by agencies; and a predominant focus on just one part of the regulatory process. With regard to executive branch and congressional oversight of agencies' rulemaking, GAO has noted that efforts to increase presidential influence and authority over the regulatory process, through mechanisms such as the Office of Management and Budget's reviews of agencies' rulemaking, have become more significant over the years. However, mechanisms intended to increase congressional influence, such as procedures for disapproval of regulations under the Congressional Review Act, appear to have been less able to influence changes in agencies' rules to date. GAO's reviews of agencies' compliance with rulemaking requirements under the APA pointed out that agencies often did not publish notices of proposed rulemaking (to solicit public comments) before issuing final rules, including some major rules with an impact of $100 million or more on the economy. The APA provides exceptions to notice and comment requirements for "good cause" and other reasons, but GAO noted that agencies' explanations for use of such exceptions were sometimes unclear. Also, several analytical requirements for proposed rules do not apply if an agency does not publish a proposed rule. However, some of the growth in final rules without proposed rules appeared to reflect increased use of "direct final" and "interim final" procedures intended for noncontroversial and expedited rulemaking. 
The findings and emerging issues reported in GAO's body of regulatory work suggested four areas on which Congress might consider taking action or studying further: (1) generally reexamining rulemaking structures and processes, (2) addressing previously identified weaknesses of existing statutory requirements, (3) promoting additional improvements in the transparency of agencies' rulemaking actions, and (4) opening a broader examination of how developments in information technology might affect the notice and comment rulemaking process.
Section 324 of the NDAA for Fiscal Year 2014 requires DOD to establish a policy setting forth the programs and priorities of the department for the retrograde, reconstitution, and replacement of units and materiel used to support overseas contingency operations. The policy is to take into account national security threats, combatant command requirements, current readiness of military department operating forces, and risk associated with strategic depth and the time necessary to reestablish required personnel, equipment, and training readiness in such operating forces. Section 324 further requires that DOD's policy include the following elements: establishment and assignment of responsibilities and authorities within the department for oversight and execution of the planning, organization, and management of the programs to reestablish the readiness of redeployed operating forces; guidance concerning priorities, goals, objectives, timelines, and resources to reestablish the readiness of redeployed operating forces in support of national defense objectives and combatant command requirements; oversight reporting requirements and metrics for the evaluation of DOD and military department progress on restoring the readiness of redeployed operating forces in accordance with the policy; and a framework for joint departmental reviews of military services' annual budgets proposed for retrograde, reconstitution, or replacement activities, including an assessment of the strategic and operational risk assumed by the proposed levels of investment across DOD. 
Additionally, section 324 requires DOD to submit a plan for implementation of the policy for retrograde, reconstitution, and replacement that contains the following elements: the assignment of responsibilities and authorities for oversight and execution of the planning, organization, and management of the programs to reestablish the readiness of redeployed operating forces; establishment of priorities, goals, objectives, timelines, and resources to reestablish the readiness of redeployed operating forces in support of national defense objectives and combatant command requirements; a description of how the plan will be implemented, including a schedule with milestones to meet the goals of the plan; and an estimate of the resources—by military service and by year—that are required to implement the plan, including an assessment of the risks assumed in the plan. DOD is to provide an update on progress toward meeting the goals of the plan not later than one year after submission, and annually thereafter. In its response to the requirements of the NDAA for Fiscal Year 2014, instead of developing new policies for retrograde and reset of operating forces used to support overseas contingency operations, DOD relied on three existing guidance documents as its policy for retrograde and reset activities in support of overseas contingency operations. However, the guidance does not incorporate key elements of leading practices for sound strategic management planning of these efforts. Further, the department has not used consistent and reliable information or descriptions for retrograde and reset to facilitate consistent and accurate budget reporting to Congress. In response to the requirements in the NDAA for Fiscal Year 2014 for DOD to establish a policy related to retrograde and other activities, in its reports to congressional defense committees, DOD identified existing guidance documents that inform retrograde and reset. 
These reports did not establish new policy for retrograde and reset activities. For example, the November 2014 report indicates that three existing strategic-level policy and guidance documents inform the department's retrograde and reset efforts, among other things: the Quadrennial Defense Review, Guidance for the Employment of the Force, and the Defense Planning Guidance. The report also highlighted the military services' current activities for some of these areas, to include some funding information related to overseas contingency operations and reset, in the context of resources required. Similarly, the April 2015 report DOD submitted to the congressional defense committees also describes the military services' current activities—to include, for example, budget information related to reset—and provides a progress update on some of the information submitted in the previous year's report. As in the November 2014 report, the April 2015 follow-up report provides broad information concerning each of the military services' efforts across various activities, such as retrograde, reset, and readiness. In addition to the two reports, DOD identified other guidance related to retrograde and reset. For example, in 2013 the Assistant Secretary of Defense for Logistics and Materiel Readiness issued a memorandum for Afghanistan that officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics identified to us as providing the department's policy guidance for retrograde. In addition to departmental documents addressing retrograde and reset, U.S. Central Command has issued orders and annexes that address retrograde. The DOD guidance documents identified as the strategic framework for retrograde and reset do not incorporate key elements for sound strategic management planning. GAO's leading practices work has shown that sound strategic management planning can enable organizations to identify and achieve long-term goals and objectives. 
We have identified six elements of strategic management planning that are key for establishing a comprehensive, results-oriented strategic planning framework. These elements establish that an organization’s strategic management planning framework should include, for example, a mission statement and long-term goals. Elements of sound strategic management planning also correspond to requirements in section 324 of the NDAA for Fiscal Year 2014 related to retrograde and other efforts. For example, whereas an element of sound strategic management planning calls for the setting of specific policy, programmatic, and management goals, section 324 of the NDAA for Fiscal Year 2014 calls for the required policy to include guidance concerning priorities, goals, objectives, timelines, and resources, among other things, and for the implementation plan to establish them. Our review of the three documents (i.e., the Quadrennial Defense Review, Guidance for the Employment of the Force, and Defense Planning Guidance) referenced in DOD’s November 2014 report as providing the department’s strategic policy and guidance for retrograde and other activities, including reset, found that they do not contain the elements to facilitate the strategic management planning of these efforts. For example, the Guidance for the Employment of the Force was the only document of the three that mentioned retrograde: once in the context of funding resources and a second time to address U.S. Transportation Command’s responsibilities to support retrograde planning. In addition, although all three documents mentioned reset, they did so only in general terms. For example, the Quadrennial Defense Review states that DOD will need time and funding to reset the joint force as the department transitions from operations in Afghanistan, but generally does not expand on this point. Reset is also mentioned in the Guidance for the Employment of the Force, but it offers no specificity about reset activities. 
None of the documents includes a mission statement that addresses retrograde or reset activities. Further, none of the three documents outlines long-term goals for retrograde and reset, an element of strategic management planning whose inclusion could improve DOD's implementation of section 324. An official from the Office of the Under Secretary of Defense for Policy told us that the Guidance for the Employment of the Force is not a strategic policy document for retrograde or reset. Similarly, other documents that DOD officials directed us to as providing policy guidance for retrograde lacked key elements necessary for the sound strategic management planning of this effort. Our review of U.S. Central Command's 2011 fragmentary order on the retrograde of equipment from Afghanistan found that it provided information concerning tasks, such as additional retrograde plans; metrics to track equipment retrograde from Afghanistan; and factors that could affect retrograde operations. However, the order does not include a mission statement. Although a later version of the fragmentary order contains a mission statement, it is specific to operations in Afghanistan and does not, therefore, constitute the department's comprehensive vision concerning the retrograde of equipment from all overseas contingency areas. Likewise, though the August 2013 memorandum from the Assistant Secretary of Defense for Logistics and Materiel Readiness on the retrograde and disposition of equipment in Afghanistan includes some information on equipment retrograde, it does not include key elements, such as a mission statement and long-term goals, necessary for the strategic management planning of retrograde to inform plans across the department. DOD officials stated that they believed the Quadrennial Defense Review and other strategic-level documents provide the necessary policy and guidance to inform the department's efforts. 
However, without a strategic policy for retrograde and reset that incorporates key elements of strategic management planning, DOD cannot ensure that its efforts to develop retrograde and reset guidance provide the necessary strategic planning framework to inform the military services' plans for retrograde and reset. We also found that DOD's guidance is not consistent in identifying what information DOD and the services are to use in budget reporting on retrograde and reset activities. DOD emphasizes the use of consistent terms across department documents. Specifically, DOD policy is to improve communication and mutual understanding within the department through the standardization of military and associated terminology, and to have the DOD components use the Department of Defense Dictionary of Military and Associated Terms when preparing department documents, such as policy and strategy. Also, Standards for Internal Control in the Federal Government, specific to information and communication, state that for an entity to run and control its operations, it must have relevant, reliable, and timely communications. Information is needed throughout the agency to achieve all of its objectives. However, we found differences in how DOD guidance and other documents refer to retrograde and reset, particularly with respect to what they include in the description. For example, the Department of Defense Dictionary of Military and Associated Terms describes retrograde as a process for the movement of equipment and materiel, while June 2015 guidance from the Office of the Secretary of Defense indicates that the DOD components should include all retrograde requirements, including those for base closure, equipment, and people, for future budget estimates. Later, in the same budget guidance, the components are directed to describe costs related to equipment retrograde as part of a briefing. 
Similarly, descriptions of reset and what it includes are inconsistent across departmental documents. The Department of Defense Dictionary of Military and Associated Terms defines reset as a set of actions to restore equipment to a desired level of combat capability commensurate with a unit’s future mission. A 2013 joint publication, referenced by the definition, and a 2007 memorandum from the Deputy Under Secretary of Defense for Logistics and Materiel Readiness expand upon what reset includes, using similar language. However, our review of a fiscal year 2014 Overseas Contingency Operations Budget justification document found that it did not provide information regarding what reset includes consistent with these descriptions. Specifically, the 2013 joint publication and 2007 memorandum identify reset as generally including repair, replacement, and recapitalization of equipment, while the fiscal year 2014 Overseas Contingency Operations Budget justification document indicates that reset includes the repair and replacement of equipment as well as the replenishment of munitions consumed, destroyed, or damaged due to combat operations. Furthermore, DOD Comptroller officials told us that they include replenishment of ammunition along with repair and replacement when reporting reset budget information to Congress, but they do not include costs for the recapitalization of equipment. In December 2009, DOD’s Resource Management Decision 700 directed the DOD Comptroller, in coordination with various components, to publish a DOD definition of reset for use in the DOD overseas contingency budgeting process. This definition was to be submitted to the Deputy Secretary of Defense for approval by January 2010. In 2011, because the department had not published a definition of reset for use in DOD’s budget process, we recommended that the DOD Comptroller take action concerning the Resource Management Decision 700 to develop and publish a DOD definition for reset. 
DOD concurred with this recommendation and commented that the definition of reset would be incorporated into an update of its DOD Financial Management Regulation. As of October 2015, however, the DOD Comptroller had not published a definition for reset. A DOD Comptroller official told us that the reset definition had not been published due to delays in the finalization and approval of the definition's language. Further, we found that the current DOD Financial Management Regulation does not include a specific definition or description of retrograde for use in the DOD overseas contingency operations budgeting process. For example, although major operations typically involve retrograde, the volume and chapter of the DOD Financial Management Regulation specific to contingency operations do not provide a definition of retrograde or include any information describing how retrograde costs should be considered or calculated. The June 2015 budget guidance from the Office of the Secretary of Defense, which, as we previously noted, provides inconsistent information about retrograde within the same document, may not provide clarification for the services to develop consistent, accurate information for budget reporting concerning retrograde. If DOD does not ensure the use of consistent information and descriptions in policy and other departmental documents used to inform budget estimates on retrograde and reset, Congress may not receive the consistent and accurate information that it needs to make informed decisions concerning retrograde and reset. We found that the Marine Corps has published an implementation plan for the retrograde and reset of equipment, but the Army, Navy, and Air Force have not. The Army, Navy, and Air Force have issued guidance and other documents that address reset but that, taken either collectively or individually, do not include key elements of sound strategic management planning, such as strategies to achieve goals and objectives. 
According to DOD officials, the military services are responsible for developing implementation plans related to retrograde and reset. As previously described, leading practices in our prior work have shown that sound strategic management planning can enable organizations to identify and achieve long-term goals and objectives. Some of these elements also generally correspond to several requirements in section 324 of the NDAA for Fiscal Year 2014. For example, an element of sound strategic management planning calls for goals, as well as the strategies and resources needed to achieve them, among other things. Similarly, section 324 of the NDAA for Fiscal Year 2014 requires that DOD's implementation plan include, among other things, the establishment of priorities and goals, a description of how the plan will be implemented, and an estimate of resources by military service and year required to implement the plan. DOD reports in response to the NDAA requirement describe overall service goals and objectives, among other things, but service-specific implementation plans that incorporate best practices could better position the services to plan, carry out, and track the further implementation of these overarching goals and objectives. The Marine Corps' implementation plan for the conduct of retrograde and reset of its equipment is contained in two complementary documents: the Operation Enduring Freedom Ground Equipment Reset Strategy (Strategy) and the Ground Equipment Reset Playbook (Playbook). Taken together, these two documents present a service-wide plan for the retrograde and reset of Marine Corps ground equipment used in overseas contingency operations that largely meets all of the elements of sound strategic planning, as shown in table 1 below. For example, the Strategy describes long-term goals to coordinate retrograde and reset efforts, and then to synchronize these efforts with the larger Marine Corps readiness posture. 
Army officials discussed a variety of documents when we asked for implementation plans for retrograde and reset. However, none of these documents individually or collectively constituted a service-wide implementation plan for retrograde and reset that included relevant key elements for sound strategic management planning. For example, officials provided us with information on an Automated Reset Management Tool, which provides information about web-based logistic components that the Army uses to manage the reset program. While Army officials use this tool to plan, review, analyze, validate, and execute reset, it is not an implementation plan that includes strategies to achieve goals but rather a tool for collaboration. Additionally, officials from the Office of the Deputy Chief of Staff of the Army (G-8) and the Assistant Secretary of the Army for Acquisition, Logistics and Technology cited the Army retrograde and reset handbook as an authoritative source document for retrograde and reset activities. The handbook includes, among other things, information on roles and responsibilities for retrograde and reset for different Army offices, which can be considered components of the strategic management planning element of strategies to achieve goals. However, in a letter introducing the handbook, the handbook is described as a tool and desk reference for retrograde and reset activities. Further, there does not appear to be uniform agreement on the handbook's status, because officials at a different Army organization did not refer to this document as an authoritative source. Specifically, officials from the Army G-4 who helped prepare the handbook described it as a lessons-learned document and stated further that the Army has no plans to codify guidance in the handbook. 
They further stated that the handbook was created in an attempt to organize and clarify previously published information and guidance about Army retrograde and reset activities contained in several different orders. The fact that different Army officials refer to different documents as implementation plans for retrograde and reset suggests that there is confusion about the strategies for the Army's activities. As such, this could lead to inconsistent efforts, especially for reset, within the Army. Further, inconsistent descriptions of retrograde and reset activities could complicate communicating about and budgeting resources for the Army's retrograde and reset efforts. Joint Publication 1-02, Department of Defense Dictionary of Military and Associated Terms, generally defines retrograde as the process for the movement of non-unit equipment and materiel from a forward location to a reset program or to another directed area of operations. However, when Army budget officials provided the cost breakdown structure for their budget formulation, the specific code for retrograde included both personnel and equipment. Also, the Department of Army Financial Management Guidance for Contingency Operations provides only a limited discussion of retrograde. As a result, Army officials may not be appropriately planning and funding retrograde activities because of inconsistent descriptions of retrograde. Similarly, officials at different Army organizations do not agree on what is and is not included in reset. For example, Army Forces Command officials stated that any upgrade to equipment is not a reset action, and therefore is not a reset expense, while officials from Army G-4 stated that upgrades of equipment are reset actions, and therefore are reset expenses. 
With differing information and descriptions in documents, as well as differing perspectives on what is considered and included in retrograde and reset activities within the service, the Army may not be sure of the amount being expended for the retrograde and reset of equipment. If the Army does not develop an implementation plan that, among other things, articulates goals and strategies for retrograde and reset of equipment, the Army's retrograde and reset efforts may not align with DOD-wide goals and strategies for retrograde and reset, reset-related maintenance costs may not consistently be included, and resources and funding for retrograde and reset may not be consistently or effectively budgeted or distributed within the service. According to Navy officials, the Navy has not developed guidance and implementation plans for the retrograde and reset of naval equipment because it already has maintenance guidance and retrograde policy for its ground equipment and has established maintenance schedules for its ships and planes. For example, maintenance guidance for ground equipment identified by naval officials includes the Naval Facilities Engineering Command: Management of Civil Engineering Support Equipment (P-300). P-300 includes procedures for the administration, operation, and maintenance of automotive, construction, and railroad equipment, including maintenance such as repair and modification, as well as guidance for budgeting the procurement of equipment. Additionally, P-300 includes guidance on when to repair or replace automotive, construction, railway, and transportation equipment. While officials referred to this document as guidance for reset, it does not contain the key elements of an implementation plan for reset, such as strategies and goals for reset. Naval officials also identified a 2009 Navy Expeditionary Combat Command retrograde message that outlines guidance for determining whether equipment should be retained for operations in the U.S. 
Central Command area of responsibility, or whether it should be disposed of or retrograded for repair. The message also describes ensuring that retrograde activities are conducted with minimum impact and distraction to deployed unit operations, as well as roles and responsibilities for retrograde. However, this document falls short of an implementation plan because it does not include information on the actual resources needed or timelines. According to Navy officials, there is no implementation plan for the retrograde or reset of ships or planes because maintenance is scheduled as a part of their deployment cycle. For example, Navy officials explained that aircraft maintenance is dictated by an integrated maintenance concept particular to each type of aircraft. As described by the Naval Aviation Maintenance Program guidance, an integrated maintenance concept emphasizes a fixed maintenance schedule determined by a Navy analytical maintenance process that includes strategies such as scheduled inspections to determine, among other things, whether equipment is in satisfactory condition, as well as scheduled removal of items that will exceed their life limits. Likewise, Navy officials explained that ships are on fixed maintenance schedules, though because of the demands of overseas contingency operations some ship maintenance has been deferred. While these maintenance schedules include information on maintenance goals and strategies for repairs for planes and ships, they do not describe reset specifically, even though the Navy draws on reset funding for some repairs. As such, they do not include information on the strategic elements of goals, strategies, and resources that would be expected in a comprehensive implementation plan. Without service-wide guidance with goals and strategies defining reset and reset resources, there are inconsistent reset efforts across the Navy. 
For example, though Navy officials told us that submarines are not reset and therefore should not receive reset funding, the Navy Office of Financial Management and Budget classified submarine propeller maintenance as reset costs. The same Navy office also classified some ship depot maintenance and other equipment and weapons maintenance as reset costs. When asked to reconcile reset funding with the absence of any comprehensive implementation plans for reset, Navy officials pointed us to a set of business rules describing, for example, how and when a ship might be eligible for reset funding, but then emphasized that these rules are not codified in any reset guidance or implementation plan. If the Navy does not develop an implementation plan that, among other things, articulates goals and strategies for retrograde and reset of equipment, the Navy's retrograde and reset efforts may not align with DOD-wide goals and strategies for retrograde and reset, reset-related maintenance costs may not be consistently included, and resources and funding for retrograde and reset may not be consistently or effectively budgeted or distributed within the service. Air Force officials stated their service does not deploy with large amounts of equipment. According to Air Force officials, the equipment that they deploy with does not need much maintenance after returning from overseas contingency operations, and officials did not identify an implementation plan for retrograde and reset. However, the Air Force has requested funding for reset, suggesting that it needs to develop an implementation plan for even the limited amount of reset activities that it conducts. Specifically, the November 2014 report that DOD submitted to the congressional defense committees indicates that the Air Force, like the other services, has used overseas contingency operations funds for equipment reset. 
An implementation plan could help the Air Force to identify where to place key resources and help to strategically fund reset efforts. When we asked for more information on what is described as reset, Air Force budget officials provided a brief that explained that Air Force reset costs are contained within other budget accounts, such as aircraft, ammunition, missile, and ground equipment procurement, among others. Some repairs are also classified as reset. For example, the Air Force requests reset funding for Mine Resistant Ambush Protected vehicle depot-level reset. The officials explained that while they request funds for reset from Congress, they do not track the execution of funds for reset maintenance separately; rather, they track the execution of equipment maintenance in general. DOD financial management guidance includes various reset-related cost categories to be used by components to estimate and report contingency operations costs. If the Air Force does not develop an implementation plan that, among other things, articulates goals and strategies for retrograde and reset of equipment, reset-related maintenance costs may not consistently be included, and resources and funding for retrograde and reset may not be consistently or effectively budgeted or distributed within the service. Although DOD and the services have identified various guidance and documents to guide their retrograde and reset activities, with the exception of the Marine Corps, no strategic policy or implementation plan has been developed that includes key elements of a strategic management planning framework. As a result, DOD cannot ensure that it is effectively managing its retrograde and reset activities at the department level, nor does it have assurance that there is clear and consistent guidance for three of the services. 
Furthermore, without consistent and reliable information and terminology in DOD documents, such as guidance, that informs planning and accounting for retrograde and reset funding, Congress may be limited in its ability to provide oversight for actual retrograde and reset costs. Without a comprehensive implementation plan for retrograde and reset, the Army, Navy, and Air Force cannot ensure that their efforts are consistent or comprehensive within and across the services. We recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to establish a strategic policy that incorporates key elements of leading practices for sound strategic management planning, such as a mission statement and long-term goals, to inform the military services’ plans for retrograde and reset to support overseas contingency operations and to improve DOD’s response to section 324 of the NDAA for Fiscal Year 2014. To enhance the accuracy of budget reporting to Congress, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics, in coordination with the DOD Comptroller, to develop and require the use of consistent information and descriptions of key terms regarding retrograde and reset in relevant policy and other guidance. To improve Army, Navy, and Air Force planning, budgeting, and execution for retrograde and reset efforts, we recommend that the Secretary of Defense direct the Secretaries of the Army, Navy, and Air Force to develop service-specific implementation plans for retrograde and reset that incorporate elements of leading practices for sound strategic management planning, such as strategies that include how a goal will be achieved, how an organization will carry out its mission, and the resources required to meet goals. We provided a draft of this report to DOD for review and comment. 
In its written comments, which are summarized below and reprinted in appendix III, DOD partially concurred with all three recommendations. DOD also provided technical comments, which we incorporated as appropriate. DOD agreed with the actions within all three recommendations. However, for the first two recommendations, DOD did not agree with identifying the Under Secretary of Defense for Acquisition, Technology and Logistics as the lead for these recommendations. For the third recommendation, DOD also did not agree with directing the Secretaries of the Army, Navy, and Air Force to implement this recommendation. DOD stated that because these policies involve multiple organizations, the department will determine the appropriate Principal Staff Assistant to oversee the implementation of the strategic policy to inform service plans for reconstitution (with personnel, training, and retrograde and reset of equipment as subelements), to lead the development of applicable fiscal terminology, and to lead the development and application of service-related implementation plans. We identified the Under Secretary of Defense for Acquisition, Technology and Logistics and the Secretaries of the Army, Navy, and Air Force to implement our recommendations since these organizations have responsibilities related to developing policies and guidance for reset and retrograde at their respective levels within DOD. However, since these policies involve multiple organizations, we agree with DOD’s approach to determine which appropriate Principal Staff Assistant will help coordinate each effort, and we believe that these actions, if fully implemented, would address our recommendations. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Secretaries of the Air Force, Army, and the Navy; and the Chairman of the Joint Chiefs of Staff.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To determine the extent to which DOD developed a strategic policy consistent with leading practices on sound strategic management planning for the retrograde and reset of operating forces that support overseas contingency operations, we reviewed the two reports DOD developed and provided to the congressional defense committees in response to the requirements in the National Defense Authorization Act for Fiscal Year 2014, guidance documents related to retrograde and reset, and the three documents DOD identified in its November 2014 report as providing strategic policy and guidance for these efforts. We analyzed these documents and a related fragmentary order to determine if they included the key elements that we identified in prior work to facilitate a strategic management planning framework for retrograde and reset. GAO leading practices identified six key elements that should be incorporated into strategic plans to facilitate a comprehensive, results-oriented framework. We selected these leading practices because they include several elements that are similar to some of the requirements in section 324 of the National Defense Authorization Act for Fiscal Year 2014 for a policy and implementation plan, such as the establishment of goals, objectives, and metrics. According to the leading practices, if they are followed, an agency can develop a results-oriented framework to improve its planning. We determined that GAO’s leading practices were relevant to evaluate DOD’s strategic policy and planning efforts for retrograde and reset.
Concerning consistent information for retrograde and reset, we reviewed various department documents: Joint Publication 1-02, the Department of Defense Dictionary of Military and Associated Terms, and DOD Resource Management Decision 700 requiring the Under Secretary of Defense (Comptroller), in coordination with various components, to publish a reset definition for use in the contingency budgeting process. We assessed guidance, such as budget guidance, and other documents that provide information related to retrograde and reset using Standards for Internal Control in the Federal Government specific to information and communications, which state that for an entity to run and control its operations it must have relevant, reliable, and timely communications. We also reviewed DOD Instruction 5025.12 on the Standardization of Military and Associated Terminology, which emphasizes the standardization of military and associated terminology and use of Joint Publication 1-02 by DOD components when preparing policy, strategy, doctrine, and planning documents. In addition, we interviewed DOD officials from the Offices of the Under Secretary of Defense for Personnel and Readiness and the Comptroller about, for example, the reports provided to the congressional defense committees. To determine the extent to which the services developed implementation plans consistent with leading practices on sound strategic management planning for the retrograde and reset of operating forces that support overseas contingency operations, we reviewed the two reports DOD developed and provided to the congressional defense committees in response to the requirements in the National Defense Authorization Act for Fiscal Year 2014. In its November 2014 report, DOD pointed to the specific planning activities undertaken by each service related to retrograde and reset. 
Additionally, according to DOD officials, the military services are responsible for developing implementation plans related to retrograde and reset. Accordingly, we sought to determine the extent to which each military service has developed a plan to implement the service-specific efforts identified by DOD in the report. We reviewed documents provided by the services to determine if they included the key elements to facilitate a strategic management planning framework for retrograde and reset, such as strategies to achieve goals, external factors that could affect goals, the use of metrics to gauge progress, and evaluations of the plan to monitor goals and objectives. GAO previously identified six key elements that should be incorporated into strategic plans to facilitate a comprehensive, results-oriented framework. We selected these leading practices because they include several elements that are similar to some of the requirements in section 324 of the National Defense Authorization Act for Fiscal Year 2014 for an implementation plan, such as the establishment of goals and objectives. According to the leading practices, agencies that follow them can develop a results-oriented framework to improve planning. We determined that these leading practices were relevant to evaluate the services’ planning efforts for retrograde and reset activities. Officials we interviewed provided us with relevant documents that described their services’ guidance, plans, and other information on retrograde and reset activities. In addition, we requested any other documents we saw referenced in the set that the services initially provided, and reviewed these as well.
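The document review described above is essentially a gap analysis against the six leading-practice elements. A minimal sketch of that comparison follows; the element names are paraphrased from this report, and the findings for the two notional plans are wholly hypothetical.

```python
# Gap analysis sketch: check notional service plans against the
# leading-practice elements named in this report (paraphrased).
ELEMENTS = [
    "mission statement",
    "long-term goals",
    "strategies to achieve goals",
    "external factors that could affect goals",
    "metrics to gauge progress",
    "evaluations to monitor goals and objectives",
]

# Hypothetical findings for illustration only.
plans = {
    "Service A": {"mission statement", "long-term goals", "metrics to gauge progress"},
    "Service B": set(ELEMENTS),  # a plan addressing all six elements
}

def missing_elements(plan_elements):
    """Return the leading-practice elements a plan does not address."""
    return [e for e in ELEMENTS if e not in plan_elements]

for name, found in plans.items():
    gaps = missing_elements(found)
    status = "meets all elements" if not gaps else "missing: " + ", ".join(gaps)
    print(f"{name}: {status}")
```

In practice GAO's assessments involve analyst judgment about whether a document "largely" or "partially" addresses an element; the binary check here is only the skeleton of that comparison.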
Key retrograde and reset documents we reviewed that were provided by the services as relevant guidance for retrograde and reset include: Army Regulation 750-51: Army Materiel Maintenance Policy; Army Headquarters G-4 Ground Equipment Retrograde and Reset Handbook; Army Pamphlet 710-2-1 Using Unit Supply System (Manual Procedures); Army Regulation 735-5, Property Accountability Policies; and Headquarters, Department of Army and U.S. Army Forces Command Execution Orders on RESET for fiscal years 2009, 2010, and 2012; Army Headquarters Execution Order 083-12 Materiel Retrograde Policies and Procedures in Support of Operation Enduring Freedom; Marine Corps Operation Enduring Freedom Ground Equipment Reset Strategy; and The Reset Playbook; Office of the Chief of Naval Operations Instruction 3000.15A, Optimized Fleet Response Plan; Commander U.S. Fleet Forces Command Instruction 4790.3, Joint Fleet Maintenance Manual Volume 2; Commander Naval Air Forces Instruction 4790.2B, the Naval Aviation Maintenance Program; Naval Facilities Engineering Command, Management of Civil Engineering Support Equipment; and Air Force Instruction 10-401, Air Force Operations Planning and Execution. Additionally, we reviewed funding and cost data for retrograde and reset, and other documents for clarity and context on retrograde and reset procedures for the services, including: Army Sustainment Command Materiel Support Branch, Coordinating Instructions for the Unit Equipping and Reuse Conference; a white paper regarding the State of U.S. Army Forces Command Logistics; examples of maintenance and schedule availabilities for different ship classes; Chief of Naval Operations Instruction 3120.47, Surface Ship Engineered Operating Cycle Program; written responses from the Air Force concerning our questions about retrograde and reset policy and other guidance; and Air Force reset briefing slides.
We interviewed service officials from several offices, asking them to define and identify retrograde and reset guidance and implementation plans, as well as relevant offices related to these efforts. Also, we asked service officials in these meetings to identify any other offices they believed could be knowledgeable about retrograde and reset, and we contacted these offices for interviews as well. Those offices include the following: Department of Army Headquarters Assistant Secretary of the Army for Acquisition, Technology and Logistics; Office of the Deputy Chief of Staff for the Army, Logistics (G-4); Office of the Deputy Chief of Staff (G-8); Army Forces Command; Army Materiel Command; Army Sustainment Command; Marine Corps Systems Command; Marine Corps Logistics Command; Office of the Chief of Naval Operations, N431 Maritime Readiness Branch; Naval Expeditionary Combat Command, N434 Expeditionary Readiness; Program Executive Office Aircraft Carriers; Commander Naval Air Force; Navy Surface Maintenance Engineering Planning Program; and budget officials from the Office of the Deputy Assistant Secretary of the Air Force for Financial Management and Budget. We conducted this performance audit from February 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides detail regarding the six elements GAO has identified as leading practices for sound strategic management planning to establish a comprehensive, results-oriented framework (see table 2).
These leading practices include several elements that are similar to some of the requirements in section 324 of the National Defense Authorization Act for Fiscal Year 2014 for a policy and implementation plan, and may be applicable to help improve DOD’s planning for retrograde and reset. In addition to the contact named above, Guy LoFaro (Assistant Director), Martin H. de Alteriis, Rebecca Guerrero, Richard Powelson, Claudia Rodriguez, Michael Shaughnessy, Yong Song, and Amie Lesser made key contributions to this report.
Following the end of major combat operations in Iraq and Afghanistan, DOD is in the process of resetting equipment and materiel to meet mission requirements. Retrograde refers to the movement of non-unit equipment and materiel from one forward area to another area of operation or to a reset program. Reset includes maintenance and supply activities to restore and enhance combat capability to equipment used in combat. Section 324 of the NDAA for Fiscal Year 2014 included provisions for DOD to establish a policy and implementation plan on retrograde and similar efforts related to forces used to support overseas contingency operations and for GAO to review DOD's policy and plan. This report evaluates the extent to which (1) DOD developed a strategic policy and (2) the services developed implementation plans consistent with leading practices on sound strategic management planning for the retrograde and reset of operating forces. GAO reviewed DOD reports, interviewed officials, and assessed documents against those leading practices, which include elements similar to several of the requirements in section 324. In its response to the requirements of the National Defense Authorization Act (NDAA) for Fiscal Year 2014, instead of developing new policies for retrograde and reset of operating forces used to support overseas contingency operations, the Department of Defense (DOD) relied on three existing guidance documents as its policy for retrograde and reset activities in support of overseas contingency operations. DOD's November 2014 report to congressional committees—issued in response to requirements in the NDAA for Fiscal Year 2014—states that three DOD guidance documents address the department's retrograde and reset efforts: the Quadrennial Defense Review (QDR), Guidance for the Employment of the Force, and the Defense Planning Guidance. 
DOD officials told GAO that they believe the QDR and other documents provide the policy and guidance needed to inform the department's retrograde and reset efforts. However, GAO found that these documents do not include key elements for sound strategic management planning, such as a mission statement and long-term goals. Without a strategic policy for retrograde and reset that incorporates key elements of sound strategic management planning, DOD cannot ensure that its efforts provide the necessary strategic planning framework to inform the military services' plans for these efforts. Further, DOD emphasizes the use of consistent terms across departmental documents, but GAO found that DOD's guidance is not consistent in identifying what information to use in budget reporting on retrograde and reset activities. If DOD does not ensure the use of consistent information and descriptions in policy and other departmental documents used to inform budget estimates on retrograde and reset costs, Congress may not receive consistent and accurate information to make informed decisions concerning these efforts. GAO found that the Marine Corps has published an implementation plan for the retrograde and reset of operating forces, but the Army, Navy and Air Force have not. In DOD's November 2014 report to congressional committees, DOD pointed to the specific planning activities undertaken by each service related to retrograde and reset. According to DOD officials, the services are responsible for developing their own implementation plans. The Marine Corps has an implementation plan for retrograde and reset, which is contained in two of its guidance documents, and largely meets all the elements of sound strategic management planning, some of which generally correspond to several of the requirements in section 324 of the NDAA for Fiscal Year 2014. 
However, the Army, Navy and Air Force either have not published implementation plans or have provided GAO with published documents or plans that did not include all elements of leading practices for sound strategic planning—such as strategies on how a goal will be achieved, how an organization will carry out its mission, and resources required to meet goals, among others. Without implementation plans that, among other things, articulate goals and strategies for retrograde and reset of equipment, Army, Navy, and Air Force efforts may not align with DOD-wide goals and strategies for retrograde and reset, reset-related maintenance costs may not be consistently included, and resources and funding for retrograde and reset may not be consistently or effectively budgeted or distributed within the services. GAO recommends that DOD establish a strategic policy that includes key elements of leading practices; use consistent information and descriptions for budget reporting; and that the Army, Navy and Air Force develop implementation plans for their retrograde and reset efforts. DOD generally concurred with all three recommendations.
According to Presidential Decision Directives, the Director of Central Intelligence, the President’s Office of National Drug Control Policy (ONDCP), and others, illegal drug-trafficking is a threat to U.S. national security. The National Drug Control Strategy, issued annually by ONDCP, identifies the reduction of illegal drug use as its overall goal. The strategy links the success of this effort, in part, to the U.S. counterdrug intelligence program and makes intelligence one of the strategy’s key drug functions. As part of the national strategy, ONDCP established five goals to reduce illegal drug use:
1. Educate and enable America’s youth to reject illegal drugs as well as alcohol and tobacco.
2. Increase the safety of America’s citizens by substantially reducing drug-related crime and violence.
3. Reduce health and social costs to the public of illegal drug use.
4. Shield America’s air, land, and sea frontiers from the drug threat.
5. Break foreign and domestic drug sources of supply.
Counterdrug intelligence plays an important role in the execution of the strategy and supports three of the strategy’s five goals—goals 2, 4, and 5. Appendix II identifies which goal each counterdrug intelligence organization supports. In drug-trafficking investigations, interdiction activities, and efforts to dismantle major drug-trafficking organizations, federal, state, and local law enforcement agencies need intelligence to understand and effectively combat the illegal drug trade. Intelligence can be used, for example, to learn about the structure, membership, finances, communications, and activities of drug-trafficking organizations as well as specific operational details of particular illegal drug-smuggling or money-laundering activities. Numerous federal, state, and local organizations collect and/or produce counterdrug intelligence.
Their specific activities depend on the nature of the drug-monitoring, law enforcement, intelligence, or other counterdrug mission assigned to them. This report focuses on the federal or federally funded organizations we identified as having a principal role in collecting and/or producing counterdrug intelligence. We do, however, briefly discuss the roles of other federal organizations that contribute counterdrug intelligence information as by-products of their missions. Organizations such as the National Security Council and the Department of Justice’s Organized Crime and Drug Enforcement Task Forces Program, which are primarily consumers of counterdrug intelligence information, were excluded from this report. Drug-related intelligence information is gathered by two principal communities: the national foreign intelligence (including the Department of Defense (DOD)) and federal law enforcement communities. Within these two communities, we identified at least 22 federal or federally funded organizations spread across five cabinet-level departments (Justice, Treasury, Transportation, Defense, and State) and two cabinet-level organizations (ONDCP and the Director of Central Intelligence) whose roles include collecting and/or producing counterdrug intelligence information. The organizations having a principal role in collecting and/or producing counterdrug intelligence are as follows:
Drug Enforcement Administration (DEA)
El Paso Intelligence Center (EPIC)
Federal Bureau of Investigation (FBI)
National Drug Intelligence Center (NDIC)
Justice-funded Regional Information Sharing System (RISS) Program
Department of the Treasury U.S. Customs Service
Customs’ Domestic Air Interdiction Coordination Center (DAICC)
Financial Crimes Enforcement Network (FinCEN)
Joint Interagency Task Force-East (JIATF-East)
Joint Interagency Task Force-West (JIATF-West)
Joint Interagency Task Force-South (JIATF-South)
Joint Task Force Six (JTF-6)
Defense Intelligence Agency (DIA)
National Security Agency (NSA)
National Imagery and Mapping Agency (NIMA)
Office of Naval Intelligence (ONI)
Tactical Analysis Teams (TAT)
Bureau of Intelligence and Research (INR)
Executive Office of the President/ONDCP ONDCP-funded High Intensity Drug-Trafficking Areas (HIDTA) Program
Director of Central Intelligence Central Intelligence Agency (CIA) Crime and Narcotics Center (CNC)
Figure 1 provides a graphic representation of federal organizations with a principal role in counterdrug intelligence. A number of federal organizations provide counterdrug intelligence as by-products of their principal missions. For example, the Border Patrol’s principal mission is to prevent the illegal entry of aliens between U.S. ports of entry. In the course of this effort, the Border Patrol arrests illegal border crossers who may also be transporting drugs. While the quantity of drugs the Border Patrol seizes each year is significant, those seizures result from the Border Patrol’s principal mission to interdict illegal aliens—not drugs. Information about Border Patrol arrests and drug seizures is used to produce tactical and operational intelligence for the Border Patrol’s own use and dissemination to other federal organizations, such as DEA, EPIC, and the Customs Service, for investigative and intelligence purposes. Other federal organizations that contribute counterdrug intelligence as by-products of their principal missions include the Bureau of Alcohol, Tobacco, and Firearms; the Federal Aviation Administration; and the Bureau of Land Management.
The role of counterdrug intelligence is to support the individual organizations’ drug-monitoring, law enforcement, intelligence, or other counterdrug missions in support of the national drug control strategy. According to the strategy, the goal of the U.S. supply reduction effort is to reduce the supply of available drugs in the belief that the less available drugs are, the fewer people will use them. To help reduce the supply of drugs, efforts must be focused on acquiring intelligence concerning every link in the drug chain—from drug cultivation to production and trafficking, both domestically and abroad. Many of the organizations we identified as having a principal role in counterdrug intelligence are intelligence collectors who operate either domestically or abroad or, in the case of some organizations, both. Counterdrug intelligence organizations employ human, electronic, photographic, and/or other technical means to collect intelligence information. Under existing federal statutes and executive orders, U.S. counterdrug intelligence organizations collectively are authorized to gather information regarding suspected illegal drug activities of (1) U.S. and foreign persons and organizations within the United States (domestic intelligence) and (2) foreign powers, organizations, or persons outside the United States (foreign intelligence). Generally, law enforcement organizations such as the DEA and FBI collect both domestic and foreign counterdrug intelligence information, whereas national foreign intelligence organizations such as the CIA, NSA, and DIA collect only foreign intelligence information. Executive Order 12333, United States Intelligence Activities, restricts agencies such as the CIA, NSA, and DIA from collecting information concerning the domestic activities of U.S. persons. Table 1 depicts the domestic and foreign collection counterdrug intelligence organizations are authorized to conduct.
The table does not include those organizations that do not collect counterdrug intelligence. There are three basic means counterdrug intelligence organizations use to collect intelligence information:
Human includes confidential informants, cooperating witnesses, and other human sources.
Electronic and/or signals includes intercepted conversations through telephone, radio, or other communication devices and noncommunication electronic emissions produced by devices such as radars.
Photographic or imagery includes imagery of objects or persons collected using devices ranging from a simple hand-held camera to more sophisticated imaging systems.
The counterdrug intelligence organizations identified in table 1 use each of these basic intelligence collection means. Most of the organizations we identified as having a principal role in counterdrug intelligence are producers rather than collectors of information. For example, NDIC, EPIC, and the JIATFs rely on information collected by other organizations to produce their intelligence products and are not collectors themselves. Organizations produce strategic, operational, and/or tactical intelligence. There is no universally agreed-upon definition among counterdrug intelligence organizations as to what constitutes strategic, operational/investigative, and tactical intelligence. For the purposes of this report, the following definitions and examples characterize the various types: Strategic intelligence is information concerning broad drug-trafficking patterns and trends that can be used by U.S. policymakers, including department and agency heads, for strategic planning and programming purposes.
Examples include information on coca leaf crop estimates; foreign country or regional drug threat assessments; broad patterns or trends in cocaine, heroin, methamphetamine, or other drug production and trafficking; and information on the financial, transportation, structure, and other workings of major drug-trafficking organizations and their leadership hierarchy. Operational/investigative intelligence is information that can be used to provide analytic support to an ongoing criminal investigation or prosecution or can be useful in resource planning, such as where to station agents to obtain evidence over a period of time. Examples include information about specific persons, organizations, facilities, or routes being investigated and/or prosecuted for illegal drug-trafficking. Tactical intelligence is information that is of immediate use in supporting an ongoing drug investigation; positioning federal assets to monitor the activities of suspected drug traffickers; or positioning federal, state, or local law enforcement assets to interdict, seize, and/or apprehend a vehicle or other conveyance and/or person suspected of trafficking in drugs. This can include information such as the current or imminent location or mode of transportation of specific drug shipments. Table 2 identifies the types of intelligence produced by counterdrug intelligence organizations. The ONDCP Director of Intelligence said that how consumers use the intelligence ultimately determines whether the intelligence is strategic, tactical, or operational. The products that the organizations included in our review produce include the following. Reports that consolidate all known information about a specific topic in order to describe the current drug situation and, in some cases, to indicate what may occur in the future. 
Examples include annual assessments of the drug threat in HIDTA geographic areas and the impact of the threat in other areas, EPIC’s threat assessments on the Southwest border, EPIC domestic and worldwide threat briefs, and DEA’s assessment of the South American heroin threat. Reports that provide information on specific drug-trafficking organizations or individual traffickers. Organizational profiles, produced by DEA and FBI, contain information about a drug-trafficking organization’s members, associates, business interests, operations, finances, and communications. Other examples of organizational profiles include NDIC profiles of the Amezcua-Contreras Organization and Vietnamese drug-trafficking organizations. An individual’s profile would include, among other things, information on a known or suspected drug trafficker’s citizenship, aliases, residence, employment, arrest record, vehicles owned, and other assets. Reports that provide information on a specific country’s drug-related activities such as coca cultivation and processing of drugs, drug abuse and treatment, drug law enforcement agencies, and treaties and conventions with the United States. Examples of country-specific studies include DEA’s studies of Mexico, El Salvador, and Bolivia and FinCEN’s advisory regarding the Colombian black market peso exchange. Reports that provide information on a specific condition over a period of time and analyze any patterns or changes in patterns that occur. While this type of report may be included as part of other products, such as threat assessments, some organizations issue separate reports on trends. 
Examples of trend publications are the interagency assessment of cocaine movement, FinCEN’s analysis of trends in currency flows to and from Federal Reserve banks to identify money-laundering activities, DAICC’s reports on air-smuggling patterns along the Southwest border, Customs Service reports of drug seizure activities, and EPIC’s special report on highway interdiction. Reports that are produced and used by organizations such as DEA, Customs, and FBI to further their investigations. Examples of investigative intelligence reports are link analyses, telephone toll analyses, and post-drug seizure analyses. A link analysis uses intelligence or information such as telephone numbers or financial transactions, acquired during an investigation or multiple investigations, to identify people, businesses, or organizations involved in drug activities and their associations. A telephone toll analysis uses telephone numbers acquired from investigative targets to identify additional drug-trafficking suspects. A post-drug seizure analysis uses items such as drugs, equipment, vehicles, shipment containers, documents, or computer records seized from suspects during search warrants and seizures to identify new drug-smuggling trends or methods used by drug traffickers. Actionable intelligence reports that facilitate U.S. law enforcement efforts to seize drugs and apprehend suspected drug traffickers. These reports include EPIC, RISS, and HIDTA-funded watch centers’ responses to queries received from federal, state, and local law enforcement for information such as database extracts on a specific subject or vehicle; JIATF and Customs lookout lists, special alerts for persons, vessels, and aircraft in their areas of responsibility; and DAICC reports on suspect aircraft making drug airdrops to vessels or vehicles within the U.S. border. 
Other products include NDIC’s National Street Gang Survey; officer safety bulletins; EPIC’s reference document on concealment methods in land vehicles; and FBI and RISS charts, exhibits, and analyses prepared for presentation at trials or grand juries. For more detailed information on individual organization counterdrug intelligence products, see appendix I. The amount of federal funds spent by counterdrug intelligence organizations is difficult to determine because (1) there is no government-wide, consolidated counterdrug intelligence budget or single government source from which to determine how much organizations budget and spend on counterdrug intelligence; (2) in all but a few cases, organizations do not maintain separate budgets to account for counterdrug intelligence funding; and (3) some organizations’ functions serve more than one purpose, and it is therefore difficult to directly allocate costs to a particular activity such as counterdrug intelligence (e.g., law enforcement officers collect intelligence as part of their investigative activities). ONDCP’s 1998 National Drug Control Strategy Budget Summary showed that $154.2 million was spent for counterdrug intelligence in fiscal year 1997. This total did not, however, include all spending for counterdrug intelligence for fiscal year 1997. It included money spent for counterdrug intelligence by some organizations but did not include money spent by organizations such as the Defense Department, Coast Guard, and Customs Service. According to ONDCP officials, counterdrug intelligence spending attributable to these organizations is included under the interdiction and/or investigation drug functions in the ONDCP budget summary, but is not specifically identified as intelligence. Organizations with a principal role in collecting and/or producing counterdrug intelligence provided us with unclassified information showing that at least $295 million was spent for counterdrug intelligence in fiscal year 1997.
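Returning briefly to the investigative products described above: the link and telephone toll analyses are, at bottom, graph-building exercises in which targets who share a telephone number (or a financial transaction) become linked nodes. A minimal sketch follows, with entirely fictitious call records; the names and numbers are invented for illustration only.

```python
from collections import defaultdict
from itertools import combinations

# Fictitious call records: (subscriber, dialed number) pairs.
call_records = [
    ("Target-1", "555-0101"),
    ("Target-1", "555-0199"),
    ("Target-2", "555-0101"),  # shares a dialed number with Target-1
    ("Target-3", "555-0150"),
]

def build_association_graph(records):
    """Link subscribers who contacted the same telephone number."""
    by_number = defaultdict(set)
    for subscriber, number in records:
        by_number[number].add(subscriber)
    edges = set()
    for subscribers in by_number.values():
        for a, b in combinations(sorted(subscribers), 2):
            edges.add((a, b))
    return edges

print(build_association_graph(call_records))
# Target-1 and Target-2 are linked through 555-0101.
```

Real link analyses draw on many more record types (financial transactions, vehicle registrations, seizure evidence), but the underlying association-by-shared-attribute structure is the same.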
Figure 2 shows, by cabinet-level organization, the amount and percentage of spending agencies reported to us for counterdrug intelligence programs and activities for fiscal year 1997. The Justice, Defense, and Treasury Departments account for over 90 percent of the total estimated spending for counterdrug intelligence activities in fiscal year 1997. For individual organization funding/spending amounts for fiscal years 1995 through 1998, and, where applicable, a discussion of the methodology some organizations used to estimate counterdrug intelligence funding/spending, see appendix I. The number of federal personnel assigned to counterdrug intelligence collection and/or production activities is difficult to determine because in many cases organizations are not authorized and/or do not track personnel positions specifically for counterdrug intelligence activities, yet have personnel who perform counterdrug intelligence functions; there is no single governmentwide source from which to obtain information on the number of counterdrug intelligence personnel (e.g., ONDCP’s 1997 National Drug Control Strategy Budget Summary reported estimates of total counterdrug personnel resources—i.e., full-time equivalents—but did not further delineate personnel by intelligence or other key ONDCP drug function); some personnel perform functions that serve more than one purpose, and it was difficult for organization officials to determine the amount of time these people spent specifically on counterdrug intelligence duties as opposed to other duties (e.g., law enforcement officers collect intelligence as part of their investigative activities); and some intelligence personnel perform functions in support of multiple missions, and it was difficult to determine how much time they spent on counterdrug intelligence activities (e.g., Customs intelligence research specialists support Customs’ trade, fraud, and export control missions in addition to its counterdrug mission). 
The organizations with a principal role in collecting and/or producing counterdrug intelligence provided us with unclassified information on the number of personnel assigned to counterdrug intelligence programs and activities. As of October 1, 1997, over 1,400 personnel collected and/or produced counterdrug intelligence. Figure 3 shows, by cabinet-level organization, the estimated number and percentage of counterdrug intelligence personnel that organizations reported to us as supporting counterdrug intelligence programs and activities as of October 1, 1997. (Figure 3 identifies, among others, the Justice Department with 744 personnel and the Defense Department with 299; the largest departments together represent 92 percent of total personnel.) The Justice, Defense, and Treasury Departments account for over 90 percent of the total estimated unclassified number of federal personnel who support counterdrug intelligence programs and activities. For more detailed information regarding each organization’s counterdrug intelligence personnel and, where applicable, a discussion of the methodology organizations used to estimate the number of counterdrug intelligence personnel assigned to their organization, see appendix I. We provided a copy of the draft report to the Office of National Drug Control Policy; the Departments of Defense, Justice, State, Transportation, and Treasury; and the Director of Central Intelligence for their review and comment. The Departments of State and Defense and the Director of Central Intelligence responded with oral comments, and the Office of National Drug Control Policy and the Departments of Justice, Transportation, and Treasury provided written comments. These agencies agreed with how the draft report portrayed their respective agencies’ counterdrug intelligence role, mission, funding, and personnel-related information. Each of the agencies provided technical comments to clarify the scope of their activities in support of U.S. counterdrug intelligence programs. Their technical comments were incorporated where appropriate.
We identified those federal organizations that collect and/or produce counterdrug intelligence information by reviewing documents, including our prior reports and other studies, and interviewing officials at the Departments of Justice, Treasury, Transportation, State, and Defense; the Office of National Drug Control Policy; and other federal organizations. We focused our efforts on those organizations that have counterdrug intelligence as a principal role. For organizations that produce counterdrug intelligence as a by-product of their principal mission, such as the Border Patrol and Federal Aviation Administration, we reviewed documents and, in some cases, interviewed agency officials. For the organizations included in our review, we sought descriptive data and other information from the organizations. Initially, we provided each agency with a data collection instrument seeking information on, among other things, their intelligence roles, mission, funding, personnel, and products. Later, we interviewed officials at headquarters and selected field offices of various U.S. law enforcement, Department of Defense, and other agencies to discuss their respective counterdrug intelligence programs, observe counterdrug activities and/or demonstrations of counterdrug intelligence applications and databases, and collect relevant documentation about their programs. Appendix III lists the locations we visited. We compiled information concerning the organizations’ overall and specific counterdrug intelligence mission, statutory authority, budgets, personnel figures, and operational data. In some cases, we obtained classified information, which is not included in this report. We did not independently verify organizations’ statistical or budgetary data; however, whenever possible, we compared their data to that provided to ONDCP and contacted agencies to clarify budget or personnel data where the information appeared to be inconsistent. 
We also requested a briefing on the activities of the Director of Central Intelligence’s Crime and Narcotics Center (CNC) and information on CNC’s mission, personnel, structure, and funding. CIA’s Director of Congressional Affairs informed us that the agency would be unable to provide us with the requested information. In a subsequent meeting with staff from your Subcommittee and the House Permanent Select Committee on Intelligence, we were directed to withdraw our request for personnel and funding information and limit our request to information on CNC’s roles, missions, and functions. CNC subsequently provided us a written statement of its roles, missions, and functions. We performed our review from August 1997 through March 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees; the U.S. Attorney General; the Secretaries of Treasury, Transportation, Defense, and State; and the Directors of ONDCP and Central Intelligence. We will make copies available to other interested parties upon request. Please contact me at (202) 512-3504 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. The Drug Enforcement Administration’s (DEA) mission is to enforce the U.S. controlled substances laws and regulations and to bring to the criminal and civil justice system of the United States, or any other competent jurisdiction, the organizations and their principal members involved in growing, manufacturing, or distributing controlled substances in or destined for illicit traffic in the United States. In addition, DEA recommends and supports nonenforcement programs aimed at reducing the availability of illicit controlled substances on domestic and international markets. DEA’s Intelligence Program supports its counterdrug mission and specific objectives. DEA conducts its counterdrug activities under the authority of Reorganization Plan No.
2 of 1973 and Executive Order 11727 (1973). Since its establishment in 1973, DEA, in coordination with other federal, state, local, and foreign law enforcement organizations, has been responsible for the collection, analysis, and dissemination of drug-related intelligence. DEA’s intelligence efforts are overseen by its Intelligence Division. Specific units within the Intelligence Division are responsible for providing intelligence support or assessments internally on, among other things, types of illegal drugs, for example, cocaine or heroin; types of illegal activity, for example, diversion of legitimate drugs and precursor chemicals; level of illegal activity, for example, street or wholesale distributors; or major trafficking organizations or sensitive investigations. Intelligence units are located in 21 domestic field divisions and in major drug cultivation, production, and transit countries around the world. DEA also manages the El Paso Intelligence Center (EPIC), a multiagency tactical drug intelligence center. In some of its overseas offices, DEA provides guidance to the U.S. embassies’ Tactical Analysis Teams (TAT)—Department of Defense (DOD) resources used for information and intelligence gathering. The focus of DEA’s Intelligence Program is to provide analytical support to investigations, assist in identifying and profiling drug-trafficking organizations and methods, support efforts to target and arrest the highest levels of traffickers within those organizations, identify trafficker assets, and further criminal prosecutions of drug traffickers. DEA manages a national narcotics intelligence system to collect and analyze tactical, operational, and strategic drug intelligence and disseminate this intelligence to all federal, state, local, and foreign antidrug agencies with a need to know. 
DEA collects tactical intelligence for the immediate interdiction of drug shipments; operational intelligence on organizations and individuals involved in drug-smuggling; and strategic intelligence on trafficking patterns, drug cultivation, emerging trends, and the price and purity of illicit drugs and similar data used for decision-making. Based on information obtained through investigations, as well as from other sources, DEA intelligence elements produce strategic and trend assessments to assist DEA field managers and executives in the deployment of resources. National level policymakers use these assessments in developing drug control policies and strategies. DEA collects imagery and human intelligence and conducts court-authorized electronic surveillance of individuals and organizations. DEA intelligence analysts develop a variety of products, which include, in part, strategic trend and analysis publications, strategic organizational profiles, country-specific studies, reports on emerging trends, and enforcement support products such as investigative reports, document analyses, telephone toll analyses, link analyses, and investigative file reviews. Among the 1997 unclassified intelligence products DEA published were The South American Heroin Threat; Heroin: An Assessment; Changing Patterns in Nigerian Heroin Trafficking; Operation BREAKTHROUGH: Coca Cultivation & Cocaine Base Production in Peru; Europe—Ecstacy Production, Trafficking and Abuse; Money Laundering in Costa Rica; The Marijuana Trade in Colombia; Cocaine Hydrochloride Production in Bolivia; Changing Dynamics of the U.S. Cocaine Trade; and various country-specific bulletins, including ones on El Salvador, Mexico, Bolivia, Costa Rica, and Panama. As of October 1, 1997, DEA’s Intelligence Program was authorized 442 intelligence research specialist positions (including those in EPIC) and approximately 300 additional special agents and support personnel, representing about 10 percent of DEA’s workforce. 
Of the 442 intelligence research specialists authorized, 422 were on board. The El Paso Intelligence Center’s counterdrug mission is to support the field in disrupting the flow of illicit drugs at the highest trafficking level through the exchange of time-sensitive, tactical intelligence dealing principally with drug movement. EPIC provides operational assessments of drug-trafficking organizations and strategic assessments on drug movement and concealment techniques. EPIC also analyzes and disseminates information regarding drug-related currency movement. EPIC conducts its counterdrug activities under the authority of a 1974 agreement between DEA and the Immigration and Naturalization Service. Managed by DEA, EPIC is a 24-hour-a-day, 7-day-a-week multiagency clearinghouse for tactical intelligence and the collection, analysis, and dissemination of information related to worldwide drug movement. Although EPIC has no direct collection management authority, essential elements of intelligence are identified and provided to participating agencies, as appropriate, for collection. EPIC has a primary role to provide intelligence and law enforcement information in support of interdiction and investigative efforts against the movement of illegal drugs toward U.S. borders; over maritime and air approaches; along the nation’s interstate and state highway systems; and through its airports, bus terminals, railway stations, and commercial courier systems. EPIC responds to requests for tactical support from federal and state enforcement agencies on specific cases, links ongoing investigations, and also proactively provides enforcement-support information to preempt trafficking activities and to support interdiction. 
EPIC has placed special emphasis on supporting counterdrug efforts along the U.S./Mexico border by performing research and analysis of information to develop an understanding of drug movement through Mexico and across the Southwest border, and to identify the major trafficking organizations responsible for that drug movement. EPIC also emphasizes the transportation organizations approaching the United States through the Caribbean corridor. As of October 1, 1997, the Watch Operations Section, Tactical Operations Section, and the Research and Analysis Section formed the principal EPIC units responsible for intelligence collection and analysis. An Information Management Section is responsible for the operation and management of EPIC’s communications and information systems containing multiagency data. The Watch Operations Section is staffed 24 hours a day, 7 days a week. Separate air and maritime units are responsible for the coordination of air- and maritime-related requests received by EPIC. The General Watch has the primary responsibility for receiving requests for criminal histories and border-crossing and other information from federal, state, and local law enforcement agencies, conducting checks of various databases available at EPIC, and providing EPIC’s response to the requesting agencies. The General Watch is also responsible for sending “EPIC lookouts” (lookouts for persons, vehicles, or aircraft) and other alerts when requested by law enforcement agencies. The State and Local Liaison Unit, within the Watch Operations Section, is responsible for facilitating Highway Interdiction (Operation Pipeline/Convoy) training for state and local law enforcement officers throughout North America. 
The Tactical Operations Section is responsible for the operation and management of the Joint Information Coordination Centers (a joint EPIC, DEA, and Department of State program to assist governments in establishing an EPIC-type center in the host countries); the operation and maintenance of specialized communication collection programs; the Special Operations Unit; and the Operational Intelligence Unit. The Special Operations Unit is a 365-day watch operation that facilitates the timely flow of tactical intelligence and information worldwide. According to EPIC, among other things, this unit has the capability to access sensitive foreign-source information related to ongoing operations; monitor and coordinate undercover operations and controlled drug deliveries; and assist the interagency interdiction centers in the coordination of information and the handoff of suspect targets to law enforcement agencies. The Operational Intelligence Unit researches, analyzes, and fuses federal, state, local, and national level intelligence on the major smuggling organizations operating along the Southwest border and uses this information to produce intelligence profiles on these organizations and other investigative intelligence for use by law enforcement agencies and policymakers. The Research and Analysis Section concentrates on all-source fusion of drug-smuggling information to provide smuggling assessments relative to trafficking groups, geographic areas, modes/methods of transportation/concealment, as well as trend, pattern, and statistical analysis. The Section is divided into the Domestic, Foreign, Southwest Border, and Trend Analysis Units, which capture the information and conduct the fusion, analysis, and dissemination of intelligence products relating to drug movement. 
For example, the Southwest Border Unit collects, fuses, and analyzes data and intelligence information collected by law enforcement along the border to produce trend and other assessments of drug-trafficking along the Southwest border. The Trend Analysis Unit develops and disseminates intelligence products on drug movement events and changes in drug movement trends, and identifies the primary threat areas for drugs entering the United States. The Domestic Unit supports programs involved in drug and drug-related currency interdiction within the United States. The Foreign Unit concentrates primarily on drug movement to South Florida and the East Coast through the Caribbean corridor and on support to interdiction programs. EPIC produces daily, weekly, monthly, quarterly, and annual reports such as the Southwest border daily, quarterly, and annual reports, the Weekly Activity Brief, the Highway Drug Interdiction Weekly Activity Report, the monthly maritime and general aviation activity reports, and the EPIC worldwide and domestic quarterly threat briefs. EPIC also produces and disseminates special publications such as reference documents on concealment methods in land vehicles and on motorcycle gangs, and fishing vessel and aircraft identification guides. As of October 1, 1997, EPIC had over 314 positions. These positions included contract personnel and staff from its federal member agencies and two state agencies. Of these positions, 69 were intelligence research specialists, of which 41 were DEA employees. Of the 69 intelligence research positions available, 58 were on board, of which 32 were DEA specialists. Funding amounts for EPIC are included in funding totals in DEA’s profile. The Federal Bureau of Investigation (FBI) is charged with investigating all violations of federal laws, except those that have been assigned by legislation to other agencies.
The FBI also has concurrent jurisdiction with the Drug Enforcement Administration for enforcement of federal criminal drug laws. The FBI’s Criminal Investigative Division is responsible for overseeing FBI counterdrug efforts. Its Criminal Intelligence Program’s counterdrug mission is to identify existing and emerging criminal organizations involved in narcotics trafficking. The FBI’s field office criminal intelligence squads’ counterdrug mission is to provide timely, useful, and accurate strategic, operational, and tactical organizational intelligence in an effort to facilitate the disruption and dismantling of major criminal organizations impacting field office and regional territories. The FBI conducts its counterdrug activities under 21 U.S.C. 871, 876, 881, and 1504; 18 U.S.C. 1961 and 3052; and 28 U.S.C. 533-535. The FBI primarily collects intelligence information from human sources to help disrupt and dismantle drug organizations and prosecute their leaders. Its human intelligence sources include informants, cooperating witnesses, and information obtained through undercover operations. Some of these sources may reside in other countries. The Bureau also conducts court-authorized electronic surveillance to penetrate sophisticated drug-trafficking organizations and frequently obtains photographic intelligence to help it plan enforcement actions. As of October 1, 1997, the FBI had criminal intelligence squads in 13 of its 56 field offices and supported joint counterdrug intelligence efforts, such as the Southwest Border Project (a DEA, FBI, and Customs Service effort to identify, in part, the command and control structures of major Mexican drug-trafficking organizations) and the Dominant Chronicle Project (an FBI/Defense Intelligence Agency led foreign document/information exploitation initiative that supports the U.S. 
counterdrug efforts via multiagency sharing of comprehensive analytical reports derived from documents seized by foreign law enforcement agencies). The types of intelligence products produced by FBI intelligence squads include threat assessments; organization intelligence profiles (which include information on, among other things, an organization’s members, associates, businesses, operations, finances, and communications); telephone toll analyses; and post-seizure analyses of documents or other evidence seized during an investigation. As of October 1, 1997, the Criminal Investigative Division’s Intelligence Section had a total of 95 authorized positions, of which 88 were filled. Forty-eight of the 95 authorized positions were intelligence research specialists, of which 36 were on board. The field divisions had 353 intelligence research specialists on board, including 159 who were in the criminal intelligence squads. In addition, the Southwest Border Project and the Dominant Chronicle Project had a total of 12 intelligence research specialists. No estimate was provided for 1995. NDIC’s mission is to coordinate and consolidate strategic organizational drug intelligence from national security and law enforcement agencies in order to produce requested assessments and analyses regarding the structure, membership, finances, communication, transportation, logistics, and other activities of drug-trafficking organizations. The scope of its mission is worldwide drug-trafficking, with emphasis on the domestic arena. Furthermore, NDIC is to provide strategic counterdrug threat assessments for the nation’s policymakers and executives, including the Attorney General; the Director, Office of National Drug Control Policy; and the Director of Central Intelligence. NDIC conducts its counterdrug activities under the statutory authority of the Department of Defense Appropriations Act for fiscal year 1993, Public Law 102-396, section 9078, 106 Stat. 1876, 1919 (1992).
NDIC also operates under Attorney General Order No. 2059-96, dated October 29, 1996, which formalized NDIC’s organization within the Department of Justice and conveyed its charter. NDIC provides strategic counterdrug intelligence support to the federal law enforcement and intelligence community agencies tasked with counterdrug missions. NDIC’s primary role is to provide these agencies with assessments and analyses of drug-trafficking organizations, emerging trends and patterns, and the national threat posed by the drug trade. In addition, NDIC disseminates nonoperational information on the drug trade to nonfederal law enforcement agencies. NDIC’s Intelligence Division comprises, in part, three Strategic Organizational Intelligence Branches—the primary analytical units—which are organized by geographic and subject matter areas. The Strategic Organizational Intelligence Branches analyze multisource drug intelligence to provide a strategic overview of the trafficking organizations that affect the United States, a specific region, or a metropolitan area under study. On a smaller scale, the Intelligence Division also contains a Document Exploitation Branch composed of two teams that are deployed when requested by federal law enforcement agencies to review documentary and computerized evidence obtained from significant drug-trafficking organizations. When deployed, team members collect and analyze seized documents and transfer this information electronically to NDIC for analysis to develop new investigative leads. In addition, information discovered through document exploitation is provided to the requesting field agents to support their investigations and, if permitted, is placed in NDIC’s main data system for subsequent strategic analyses. NDIC’s Intelligence Division produces all-source strategic assessments, including: (1) baseline assessments—broad-ranging studies that focus on a specific drug issue, trend, or subject.
Examples of unclassified baseline assessments include Effects of D-Methamphetamine, The Dominican Threat: A Strategic Assessment of Dominican Drug-Trafficking, Mexican Methamphetamine Organizational Trends and Patterns, and Domestic Cannabis: Indoor Cultivation Operations. (2) strategic organizational drug intelligence—all-source assessments of specific drug-trafficking organizations; unclassified examples of this type of product include Mexican Methamphetamine Organizational Profile of the Amezcua-Contreras Organization and Vietnamese Drug-Trafficking Organizations. (3) special projects—strategic assessments that do not fit the criteria above; examples include National Street Gang Survey Report, Colombian Money Laundering Volume Three, and Clandestine Laboratory Operators. As of October 1, 1997, NDIC was authorized 257 positions, of which 221 were on board. Of those on board, 77 were detailed, on a reimbursable basis, from 11 participating agencies. Of NDIC’s total personnel, the Director of NDIC had designated 117 positions for intelligence analysts, of which 97 analysts were on board. The mission of the Justice-funded Regional Information Sharing Systems (RISS) Program is to enhance the ability of state and local criminal justice agencies to identify, target, and remove criminal conspiracies and activities spanning jurisdictional boundaries. The Justice Department’s Bureau of Justice Assistance conducts the RISS Program under the statutory authority of 42 U.S.C. 3796h. The RISS Program is a federally funded effort representing the only nationwide, multijurisdictional criminal intelligence sharing system operated by and for state and local law enforcement agencies.
Through the Bureau of Justice Assistance, the Justice Department provides grants to six regionally based RISS projects: the Mid-States Organized Crime Information Center, the Middle Atlantic-Great Lakes Organized Crime Law Enforcement Network, the New England State Police Information Network, the Regional Organized Crime Information Center, Rocky Mountain Information Network, and the Western States Information Network. Each RISS project comprises about 350 to 1,100 federal, state, and local criminal justice and regulatory agencies within the project’s mutually exclusive service area. About 75 percent of the total RISS members nationwide are from local law enforcement agencies. Federal agencies that participate include FBI, DEA, and Customs Service field offices and NDIC. Collectively, the RISS program operates in 50 states, the District of Columbia, and Canada. Although each project sets its own priorities, five of the six RISS projects focus heavily on narcotics-related crime and one, the Western States Information Network, focuses exclusively on narcotics. Each project is required to provide information sharing, analysis, and communications services to support investigative and prosecutorial efforts of member agencies addressing multijurisdictional offenses and conspiracies. Each project has an information sharing center and analytical unit that form the principal sections responsible for counterdrug intelligence production and analyses. Each center maintains an automated pointer index system to assist member agencies in sharing information regarding known or suspected criminals or criminal organizations. All six centers are electronically connected through the RISSNET wide area network to allow for nationwide information sharing. 
Each center responds to requests for information from its member agencies, such as running queries of various law enforcement databases to determine if other law enforcement agencies are investigating a particular suspect or organization. In addition, each project has an analytical unit that provides services to member agencies to support particular cases, such as telephone toll analyses or financial analyses, as well as broader analyses of criminal activities/groups and investigative trends. The type, number, and subject matter of counterdrug intelligence products provided by RISS projects to member agencies vary by location based on the size and resources of the project and the priorities set by its policy board. Examples of RISS products are responses to requests for information, such as database extracts on a single subject (i.e., individual or organization) from law enforcement and other databases; telephone toll, link, and event flow analyses to support particular investigations; organizational or individual profiles that provide information and analyses of organizations or individual traffickers operating in the project’s service area; investigative trend reports; analyses for charts and exhibits for trials or grand juries; and monthly/quarterly bulletins regarding enforcement issues, including counterdrug intelligence issues. Federal counterdrug intelligence personnel that participate in the program through their federal agency field office’s membership in the RISS projects are included in their respective parent agency’s personnel estimates. Among its many missions to prevent contraband from entering or exiting the United States, the Customs Service is the lead agency for interdicting drugs being smuggled into the United States and its territories by land, sea, or air. 
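The pointer index the RISS centers maintain can be sketched in a few lines: the index records only which member agency holds information on a given subject, not the underlying case files, so a query tells a requester whom to contact. The agency names and subject identifiers below are hypothetical, invented for illustration.

```python
# Hypothetical pointer index: maps a subject identifier to the member
# agencies that hold records on that subject. The index stores only
# pointers (who to contact), never the underlying case files.
pointer_index = {
    "subject:doe_john": {"State Police X", "City PD Y"},
    "org:example_ring": {"Sheriff Z"},
}

def register(index, subject_id, agency):
    """A member agency registers that it holds information on a subject."""
    index.setdefault(subject_id, set()).add(agency)

def query(index, subject_id, requesting_agency):
    """Return the other agencies a requester should contact, if any."""
    holders = index.get(subject_id, set())
    return sorted(holders - {requesting_agency})

register(pointer_index, "subject:doe_john", "Customs Field Office")
print(query(pointer_index, "subject:doe_john", "City PD Y"))
# lists the other agencies holding records on this subject
```

A design like this is what lets six regional centers share nationwide over a network such as RISSNET: each lookup is a small key-to-agency-list exchange, and the sensitive case material stays with the agency that owns it.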
The Customs Service’s primary counterdrug intelligence mission is to support Customs’ drug enforcement elements (i.e., inspectors and investigators) in their interdiction and investigation efforts. The Customs Service conducts its counterdrug activities under the Tariff Act of 1930, as amended, and certain provisions of titles 18 and 19 of the U.S. Code, including 18 U.S.C. 545 and 19 U.S.C. 1401, 1497, 1584, 1589a, 1590, and 1595a. In addition, by Memorandum of Understanding dated August 8, 1994, between DEA and the Customs Service, cross-designated special agents of Customs can conduct certain investigations under 21 U.S.C. 801-904 and 21 U.S.C. 951-971. Customs’ Intelligence and Communications Division at headquarters has primary responsibility for intelligence analysis, operation, and Intelligence Community liaison for Customs, including functional oversight over all Customs intelligence assets at headquarters and in the field. The division’s counterdrug role is to produce tactical, operational, and strategic intelligence regarding drug-smuggling individuals, organizations, transportation networks, and patterns and trends. The division provides its products to Customs’ enforcement elements and executive management as well as to other agencies with drug enforcement or intelligence responsibilities. The division’s major operational subdivisions are the Intelligence Analysis Branch, Operations Branch, and Program and Policy Evaluation Branch. The Intelligence Analysis Branch is responsible for monitoring and analyzing the Intelligence Community’s reports and integrating them with Customs and other law enforcement-generated intelligence to produce strategic, operational, and tactical intelligence, including specific narcotics enforcement leads, and disseminating it to Customs enforcement elements and executive management. 
In addition, this branch provides Customs’ input to multiagency analytical products, participates as Customs’ representative in interagency intelligence forums, and coordinates Customs’ national level collection programs by maintaining direct working relationships with its counterparts in other intelligence and law enforcement agencies. The Operations Branch is divided into the Intelligence Support Group and Tactical Intelligence Group. The Intelligence Support Group is responsible for the operations of the watch office and special compartmentalized intelligence facility for receipt, control, handling, and sanitization of classified intelligence from other organizations. The Tactical Intelligence Group oversees the operations of certain operational collection elements. In addition, the Branch provides intelligence support to Customs’ air interdiction efforts (i.e., the Domestic Air Interdiction Center and the Customs National Aviation Center). The Program and Policy Evaluation Branch is responsible for Customs’ liaison with the Intelligence Community and coordination with the Treasury Department. In addition, this branch is responsible for developing and maintaining Customs’ internal operational guidelines, such as its intelligence policy initiatives and requirements. Customs’ intelligence assets in the field include five Area Intelligence Units, special-agent-in-charge (SAC) intelligence units, 17 Intelligence Collection and Analysis Teams, and various multi-agency efforts. Customs’ Area Intelligence Units in New York, Los Angeles, Houston, New Orleans, and Miami are to provide intelligence support services to all of Customs’ enforcement efforts in their regions, including narcotics. These units produce tactical, operational, and strategic products for their regions, including direct case support to criminal investigators, as needed. In addition, intelligence resources are assigned directly to the SACs of each Customs field office. 
These resources primarily are to provide case support to enforcement groups. Customs’ Intelligence Collection and Analysis Teams provide tactical intelligence to inspectors regarding smuggling activities at their respective ports of entry. These teams’ intelligence is provided to headquarters for fusion with national-level intelligence and further dissemination, as appropriate. Customs relies on DEA and the Intelligence Community for foreign counterdrug intelligence. In addition, Customs provides intelligence assets for various multiagency counterdrug intelligence and/or enforcement efforts, including the Customs-led Combined Agency Border Intelligence Network, which generates unsolicited investigative leads to appropriate agencies regarding nontraditional alien drug-smuggling suspects. Customs also provides intelligence assets to the South Florida Blue Lightning Operations Center, which supports the Blue Lightning Strike Force in addressing drug-smuggling in the southeastern United States. The Center assists in identifying and developing organizational targets for joint investigation by Strike Force members and provides tactical and operational intelligence to support Strike Force operations. Customs intelligence personnel are also assigned to EPIC and the Joint Interagency Task Force East (JIATF-East). Customs’ in-house collection capability is heavily weighted toward human intelligence, which largely comes from Customs inspectors and investigators who obtain information during their normal interdiction and investigation activities. Inspectors and investigators also collect photographic intelligence. Investigators also conduct court-authorized electronic surveillance of individuals and organizations suspected of drug-trafficking. In addition, Customs operates intelligence collection programs that provide tactical and operational intelligence to assist Customs in its enforcement and regulatory missions, including counterdrugs. 
Customs intelligence analysts develop a variety of intelligence products regarding drug-smuggling individuals, organizations, transportation networks, and patterns and trends to further their investigations and seizures and generate other investigative and interdiction targets. Types of counterdrug intelligence products produced by Customs analysts at headquarters and/or field units include investigative support products, such as link and post-seizure analyses; threat assessments; trend reports on drug-smuggling and seizure activities and methods; profiles of drug-smuggling organizations; and actionable intelligence reports and narcotics enforcement leads that facilitate the interdiction of drug shipments and/or apprehension of drug traffickers. As of October 1, 1997, Customs was authorized 382 intelligence research specialist positions, of which 309 were filled. Customs’ Intelligence and Communications Division estimated that its 309 on-board intelligence research specialists dedicated the equivalent of 196 full-time equivalent (FTE) positions (63 percent) to counterdrug activities. No estimates were provided for fiscal years 1995, 1996, and 1998. Customs’ Domestic Air Interdiction Coordination Center’s (DAICC) counterdrug mission is to detect drug-smuggling aircraft entering the United States and coordinate their apprehension with appropriate counterdrug enforcement agencies. The DAICC’s counterdrug intelligence activities are conducted under the Customs Service’s statutory authority (i.e., the Tariff Act of 1930, as amended, and certain provisions of titles 18 and 19 of the U.S. Code), particularly 19 U.S.C. 1590, which addresses smuggling by air. The DAICC represents Customs’ primary detection and monitoring effort. 
Using a wide variety of civilian and military ground-based radars, tethered aerostat radars, airborne reconnaissance aircraft, and other detection assets, DAICC conducts 24-hour surveillance along the entire southern border of the United States, Puerto Rico, and into the Caribbean. When suspect aircraft are observed crossing the U.S. border, the Center orders the launch of appropriate Customs interdiction aircraft and coordinates apprehension of drug traffickers with the appropriate law enforcement agencies. The Center receives intelligence from the JIATFs and others regarding suspect aircraft entering its area of responsibility. In addition, JIATF-East relies on the Center to sort aircraft as they depart source and transit nations to identify suspect drug-smuggling aircraft coming toward the eastern United States. The DAICC’s Intelligence Branch provides tactical and operational intelligence support regarding air interdiction and enforcement operations to the Center’s director, staff, and radar operations and to other Customs units and agencies in the counterdrug community, such as DOD. The Branch analyzes radar-tracking data of suspected drug-trafficking aircraft crossing or landing short of the U.S. border or making drug airdrops to vessels or vehicles within the U.S. border. The Branch collects and analyzes these tracking data as well as spot reports from various federal, state, and local enforcement agencies and concerned citizens regarding suspicious private aircraft and/or suspected drug-smuggling via private aircraft. Its analyses are provided in usable drug air intelligence formats to appropriate DAICC and Customs officials and other agencies. The types of intelligence products produced by the DAICC’s Intelligence Branch include briefings; spot reports; daily intelligence reports; short landing notifications (i.e., reports of suspect aircraft landings short of the U.S. 
southwest border); trend publications, such as reports on air-smuggling patterns in certain southwest border areas; and other products to support the interdiction of private aircraft drug smugglers. The Center also produces strategic intelligence in the form of an annual air threat assessment. As of October 1, 1997, DAICC was authorized nine intelligence research specialist positions, of which eight were filled. These personnel are included in the personnel totals for the Customs Service. DAICC spending is included in the Customs Service’s $14.4 million estimate of counterdrug spending for fiscal year 1997. The Financial Crimes Enforcement Network’s (FinCEN) mission is to support and strengthen domestic and international anti-money-laundering efforts and to foster interagency and global cooperation to that end through information collection, analysis, and sharing; technical assistance; and innovative and cost-effective implementation of Treasury authorities. FinCEN acts as a link among the law enforcement, financial, and regulatory communities and serves as the nation’s central point for broad-based intelligence and information sharing related to money-laundering and other financial crimes. FinCEN uses counter-money-laundering laws, such as the Bank Secrecy Act, to require reporting and record-keeping by banks and other financial institutions. This record-keeping preserves a financial trail for investigators to follow as they track criminals and their assets. FinCEN provides assistance to federal, state, local, and international agencies engaged in money-laundering prevention and enforcement; an estimated 50 percent of its efforts support counterdrug investigations. FinCEN conducts its activities under the statutory authority of the Bank Secrecy Act (P.L. 91-508, titles I and II (1970)) codified, as amended, in sections of titles 12, 18, and 31 of the U.S. Code. 
FinCEN primarily provides both operational and strategic analyses to support law enforcement efforts, including counterdrug intelligence efforts. FinCEN’s operational support is designed to provide information and leads on criminal organizations and activities that are under investigation by law enforcement agencies. This support is tailored to individual requests and may range from a basic search of databases on a single subject to detailed, in-depth analysis of the financial aspects of major criminal organizations. Occasionally, FinCEN is also asked to provide time-sensitive tactical support, such as responding to a request for a search of certain databases in support of an imminent arrest. Analysts also provide information through FinCEN’s Artificial Intelligence System on previously undetected possible criminal organizations and activities. FinCEN’s strategic analysis involves collecting, processing, analyzing, and developing intelligence on the emerging trends, patterns, and issues related to the proceeds of illicit activities. As of October 1, 1997, the Office of Investigative Support and Office of Research and Analysis formed the principal FinCEN units that produce intelligence products in support of counterdrug enforcement efforts. The Office of Investigative Support serves as the initial contact point for requests for investigative support from law enforcement agencies and oversees the operations center and Artificial Intelligence Unit. The Office of Research and Analysis identifies, analyzes, and reports on trends, patterns, and fluctuations in currency movements and actual as well as potential money-laundering activity. 
Using financial, commercial, and law enforcement databases, FinCEN provides various intelligence products that support counterdrug enforcement efforts, including database extracts regarding the financial transactions of a single drug-trafficking subject or detailed, in-depth analyses of financial aspects of major drug-trafficking organizations; self-generated or requested investigative leads developed from its Artificial Intelligence System, such as potential subjects (individuals or organizations) involved in counterdrug-related money-laundering; reports on emerging trends, patterns, and issues related to money-laundering, such as currency movements to and from federal banks; regional threat assessments to support state-level anti-money-laundering legislative efforts; and reports/advisories on country-specific money-laundering threats or other issues. As of October 1, 1997, FinCEN estimated that 90 of its total authorized positions and 83 of its total personnel on board were engaged in counterdrug-related support. These estimates include 45 authorized intelligence analyst positions and 32 on-board intelligence analysts. These figures are based on FinCEN’s estimate that about 50 percent of its resources are engaged in counterdrug-related support. No estimate was provided for 1995. The U.S. Coast Guard is the lead agency for maritime law enforcement. It collects air and maritime intelligence at the operational level to support its missions, including illegal alien migration interdiction, maritime safety, fisheries enforcement, and counterdrug operations. Its intelligence collection programs include domestic and foreign human, signals, and imagery intelligence sources. The Coast Guard conducts its counterdrug operations under the statutory authority of 14 U.S.C. 2, 14 U.S.C. 89, 14 U.S.C. 93(e), and 14 U.S.C. 143. The Coast Guard produces tactical, operational, and strategic counterdrug intelligence information to support Coast Guard operations. 
The Coast Guard interfaces with the Crime and Narcotics Center; the El Paso Intelligence Center; the National Drug Intelligence Center; DEA Intelligence; U.S. Customs Service Intelligence; Joint Interagency Task Forces East and West; and foreign, state, and local law enforcement agencies. The Coast Guard Intelligence Coordination Center provides strategic intelligence support to Coast Guard commands and missions. The Center is divided into four sections: the Watch, Imagery Exploitation, Analysis, and Collection Management sections. The Watch Section is an all-source, 24-hour-a-day operation that provides indications and warning information and serves as a focal point between the Coast Guard and national intelligence centers and law enforcement agencies for substantive-level issue coordination. The Imagery Exploitation Section provides tactical and strategic imagery information to support Coast Guard commands and missions and JIATFs East and West. The Analysis Section provides strategic and warning intelligence information to Coast Guard commanders. The Collection Management Section submits Coast Guard intelligence requirements to the Intelligence Community and disseminates proactive and reactive all-source intelligence information to Coast Guard units. The Coast Guard has two Area Intelligence Divisions, which provide operational intelligence in support of all Coast Guard missions, including counterdrug operations. The Atlantic Area Intelligence Division, located in Portsmouth, Virginia, provides intelligence support to the Coast Guard Atlantic Area, Atlantic Maritime Defense Zone, and Fifth Coast Guard District. The Pacific Area Intelligence Division, in Alameda, California, provides intelligence support to the Coast Guard Pacific Area, Pacific Maritime Defense Zone, and the Eleventh Coast Guard District. 
The Maritime Intelligence Center in Miami, Florida, serves as the intelligence office for the Seventh Coast Guard District and provides tactical intelligence in support of counterdrug, illegal alien migration, and national security missions. The Law Enforcement Support Team is an Atlantic Area command located in Miami, Florida, which collects tactical intelligence information in support of Coast Guard operations throughout the Atlantic Area, but primarily in support of counterdrug operations in the Caribbean. The Coast Guard’s intelligence offices provide tailored, Coast Guard-specific counterdrug intelligence, such as threat assessments; lookout lists, which are special alerts for persons and vessels; post-seizure reports; spot reports; intelligence briefings; and daily and weekly intelligence summaries. The Coast Guard also provides input to the counterdrug community’s cocaine flow assessment. As of October 1, 1997, the Coast Guard had 108 FTE staff assigned to counterdrug intelligence activities. Of these, 75 FTEs were assigned to Coast Guard locations, and 33 were detailed to other organizations. Staffing levels are calculated on an FTE basis because counterdrug intelligence is only one of the missions of Coast Guard Intelligence personnel. Joint Task Force Six’s (JTF-6) mission is to provide effective title 10 domestic counterdrug operational, intelligence, engineer, and general support as requested by law enforcement agencies. JTF-6 conducts its counterdrug activities under the statutory and other authority of the Oct. 15, 1989, Forces Command (FORSCOM) Operational Order; the National Defense Authorization Act (NDAA) for fiscal year 1990; the NDAA for fiscal year 1991; the NDAA for fiscal year 1993; Chairman of the Joint Chiefs of Staff Instruction 3710.01 of May 28, 1993; DOD Guidance of Oct. 28, 1993; FORSCOM Counterdrug Employment Guidance of Feb. 14, 1994; FORSCOM Limited Delegation of Authority of Aug. 17, 1994; FORSCOM Message of Aug. 1, 1995; FORSCOM Delegation of Authority of Apr. 
16, 1996; and Atlantic Command Message JTF-6 Command Relationship of Apr. 9, 1997. The Intelligence Directorate at JTF-6 provides tactical and operational intelligence in direct support of the command, drug law enforcement agencies, and DOD counterdrug operators in the continental United States. The Directorate comprises three divisions that provide intelligence information for DOD military forces (title 10) conducting counterdrug support missions. These are the Operations Support, Tactical Support, and Training and Assessment Divisions. The Operations Support Division has three branches: the Collection Management, Foreign Analysis, and Special Security Office branches. The Collection Management Branch supports strategic and tactical collection initiatives; mapping, charting, and geodesy support; and imagery/photo support. The Foreign Analysis Branch provides intelligence to the Commander on subjects such as cross-border drug-smuggling methods, transportation modes, and potential drug-smuggling routes to advise drug law enforcement agencies on the best use and positioning of DOD units in counterdrug support operations. The Special Security Office provides secure communications for the Intelligence Directorate. The Tactical Support Division is responsible for developing preparatory products for specific terrain, potential threats, and Force Protection intelligence to ensure the security and safety of title 10 forces on approved counterdrug support missions. The Division also produces specialized intelligence products, such as narcotics recognition and booby trap recognition guides, for title 10 forces and local drug law enforcement agencies. The Division conducts liaison activities with federal, state, and local drug law enforcement agencies. 
The Training and Assessment Division provides training to drug law enforcement agencies on intelligence processes and threat assessment development and conducts comprehensive studies of HIDTA intelligence architectures to assist in determining the most effective and productive intelligence structure. JTF-6 issues intelligence estimate reports, which provide information necessary for the planning of counterdrug operations by JTF-6 personnel and deploying title 10 forces. Intelligence estimates include information such as the drug-trafficking threat in the area, methods of operation, geographic and hydrographic features of the area, hazards to counterdrug forces, and probable courses of action. JTF-6 produces terrain analysis guides; intelligence handouts such as A Reference Guide to Mexican Military Vehicles, Weapons and Equipment; imagery reports; vulnerability assessments; and special products such as the Aircraft Recognition Guide and the Booby Trap Guide for Cannabis Missions. As of October 1, 1997, 53 staff were assigned to the Directorate of Intelligence at JTF-6. There were 5 management, 1 planning, 5 security, and 42 analytical personnel. The Joint Interagency Task Force East’s (JIATF-East) mission is to plan, conduct, and direct interagency detection, monitoring, and sorting operations of air and maritime drug-smuggling activities within its area of responsibility. JIATF-East provides tactical and operational intelligence information on these activities to national apprehending authorities or international law enforcement agencies to maximize the disruption of drug transshipments within the transit zone in its area of responsibility, which comprises the Atlantic, Caribbean, and Eastern Pacific oceans. JIATF-East conducts its counterdrug intelligence activities under the statutory authority of the National Interdiction Command and Control Plan, dated October 9, 1997. 
JIATF-East conducts intelligence fusion activities necessary to support its internal operations, the operational forces under its tactical control, and law enforcement activities in its area of responsibility. The JIATF-East Intelligence Directorate is divided into the Collections Management and Operational Intelligence Divisions. Although not a subordinate element, the Cryptologic Services Group supports the Intelligence Directorate in the all-source fusion and analysis process. The Collections Management Division develops and manages all-source intelligence collection programs to support the detection and monitoring mission. The Division also manages an imagery cell, which is responsible for imagery exploitation supporting the Watch and Tactical Support Branches. The Operational Intelligence Division is the analytical component of the Intelligence Directorate and is divided into five branches: Watch; Tactical Support; Trends, Estimates, and Special Projects; Information Management; and Tactical Analysis Teams and Liaison Officers (TAT/LNO). The TAT/LNO Branch is responsible for managing remote analytical elements in locations throughout the JIATF-East area of responsibility. These elements support JIATF-East and in-country drug law enforcement operations. The Cryptologic Services Group is a representative of the National Security Agency and provides signals intelligence support directly to the Director, JIATF-East. JIATF-East counterdrug intelligence products include analyts, which are abbreviated analyses of specific topics; daily intelligence summaries; spot reports; quarterly threat assessments; and monthly priority vessel lists. As of October 1, 1997, JIATF-East had 76 staff assigned to its Intelligence Directorate. There were 33 analysts, 12 watch personnel, 18 security personnel, 3 collection management personnel, 3 liaison officers, and 7 administrative support staff. 
The Joint Interagency Task Force West’s (JIATF-West) mission is to apply DOD-unique resources to conduct detection and monitoring operations and to support efforts of law enforcement agencies and country teams to disrupt international drug-trafficking throughout the U.S. Pacific Command area of responsibility. JIATF-West conducts its counterdrug intelligence activities under the statutory authority of 10 U.S.C. section 371; the Defense Authorization Act for fiscal year 1991, as amended; PDD 35; PDD 14; PDD 44; the National Interdiction Command and Control Plan of 1994; and the National Drug Control Strategy. JIATF-West provides all-source tactical and operational intelligence support to operational assets, law enforcement agencies, and the Director, JIATF-West. The Intelligence Directorate at JIATF-West has three major branches: the Operations Intelligence, Collections, and Analysis branches. The Operations Intelligence Branch provides direct intelligence support to detection and monitoring forces operating under JIATF-West control. The Collections Branch conducts coordination to employ theater and national collection assets to fill counterdrug intelligence requirements as identified by JIATF-West analysts and law enforcement agencies. The Analysis Branch consists of three teams, which are organized geographically based on the drug-trafficking threat in the Pacific Command area of operations. These three teams are the Southeast Asia, Southwest Asia, and Latin American teams. The Southeast and Southwest Asia teams are primarily focused on heroin movement to the United States, with a secondary focus on cannabis and methamphetamine trafficking. The Latin American team focuses on cocaine, particularly as it affects the Eastern Pacific area of responsibility. All three teams fuse all-source intelligence in support of specific law enforcement case requirements and identify intelligence gaps and requirements. 
In addition, they deploy in support of country teams and DEA offices throughout Asia. The Intelligence Directorate at JIATF-West develops and disseminates tactical and operational studies to law enforcement agencies, the counterdrug intelligence community, and country teams, as appropriate. The Directorate also develops products based on standing requirements, including counterdrug intelligence updates, target studies, and various country and regional assessments. The Directorate also disseminates time-sensitive, all-source fused intelligence in the form of advisories and analyts to forces operating under its tactical control as well as other interested organizations. Other products include telephone toll analyses, link charts, and databases to support specific case requirements and law enforcement agencies. As of October 1, 1997, JIATF-West had 33 of the 38 authorized positions filled in its Intelligence Directorate. There were 19 analysts, 6 watch personnel, 4 management and administrative staff, and 4 collection managers. The Joint Interagency Task Force South’s (JIATF-South) mission is to execute national counterdrug policy by supporting the counterdrug efforts of U.S. federal agencies and participating nations to deter, degrade, and disrupt the production and transshipment of illegal drugs within the JIATF-South area of responsibility, which includes the land mass of South America. JIATF-South conducts its counterdrug intelligence activities under the statutory authority of 10 U.S.C. section 124. Although JIATF-South was designated as a JIATF, it did not have a formally approved JIATF organization structure until March 16, 1998. Before then, it had been operating as part of the operations group at the U.S. Southern Command. The JIATF-South Intelligence Directorate provides tactical and operational intelligence support to the JIATF-South counterdrug mission. 
The Intelligence Directorate conducts analyses of source-zone illicit drug targets and is responsible for integrating the efforts of several nations in the counterdrug effort. JIATF-South products include daily and weekly intelligence summaries and forecasts, monthly reports on TAT activities, briefings to JIATF-South operational elements, the Quarterly Source Region Cocaine Movement Assessment, and link analyses of drug-trafficking organizations. As of October 1, 1997, the JIATF-South Intelligence Directorate had 19 of its 35 authorized personnel assigned. There were 17 analysts and 2 management personnel. The Office of Naval Intelligence (ONI) Counterdrug Division’s mission focuses on detecting and monitoring illicit drug activities in the transit zone (from the shore of the source country to the U.S. shore). The division supports goals 4 and 5 of the National Drug Control Strategy, with emphasis on detection and monitoring activities. ONI conducts its counterdrug intelligence activities under the statutory authority of 10 U.S.C. sections 124 and 371, et seq. The Counterdrug Division provides strategic and tactical maritime intelligence primarily to federal law enforcement and DOD counterdrug agencies. With the exception of the staff assigned to the Coast Guard Intelligence Coordination Center or detailed to other organizations, all of ONI’s counterdrug intelligence analysts are assigned to the Counterdrug Division of the Civil Maritime Analysis Department. ONI provides reports and access to maritime databases that support law enforcement’s interdiction efforts. For example, ONI produces Hidden Compartments, which provides information, photographs, and plans for various concealment methods on known smuggling vessels, and the Maritime Drug Smuggler’s Handbook, which provides information on smuggling groups and their associated vessels. 
ONI also maintains a database on ships over 100 gross tons and recently began maintaining a section of the database on vessels under 100 gross tons, as these vessels are most often used in drug-smuggling. ONI also provides extensive on-scene maritime intelligence support to Latin America Tactical Analysis Teams and regional law enforcement agencies that are working cases involving international maritime drug transshipments. As of October 1, 1997, 27 of the 29 authorized intelligence positions in the ONI Counterdrug Division were filled. Four were imagery analysts who worked at the Coast Guard Intelligence Coordination Center, 1 person was assigned to the Director of Central Intelligence Crime and Narcotics Center, 1 was assigned to the El Paso Intelligence Center, and the 21 remaining staff worked at ONI. The Tactical Analysis Teams’ (TAT) mission is to serve as conduits for timely, geographically focused tactical and operational counterdrug intelligence analyses for U.S. embassy country teams, DOD theater staff, host nations, and U.S. law enforcement agencies. TATs analyze foreign intelligence products for use in support of counterdrug detection, monitoring, and interdiction in Central America and South America, Mexico, and the Caribbean. The U.S. Southern Command and JIATF-East provide oversight management, but TAT staff work under the operational control of the Deputy Chief of Mission and receive day-to-day guidance from the DEA’s country attaché. The TATs conduct their counterdrug intelligence activities under the statutory authority of 10 U.S.C. 371(c), 10 U.S.C. 373(1), 10 U.S.C. 374(b)(2)(D), and Secretary of Defense/Atlantic Command deployment orders. 
TATs are organized differently at the individual embassies they support and provide fused operational intelligence support for host-nation interdiction of air and maritime narcotics shipments and processing (airdrops, coastal and offshore deliveries, at-sea transshipments, clandestine airstrips, laboratory locations, and cache sites). The TATs do not produce standard products. Instead, they fuse information from a variety of sources into a collective database. When operational information is needed, intelligence extracts are produced from the database and are passed to the various embassy agencies. As of October 1, 1997, 42 personnel were stationed at 13 TATs in embassies in Mexico and South America. About $2 million a year spent by TAT personnel at other embassies is funded under the JIATF-South Intelligence Directorate budget and is not included in the table. The mission of the State Department’s Bureau of International Narcotics and Law Enforcement Affairs is to assist foreign governments in eradicating narcotics crops, destroying illicit laboratories, training interdiction personnel, and developing education programs to counter drug abuse. The Bureau of Intelligence and Research (INR) produces finished intelligence analyses primarily for the Secretary of State and her principal advisers as well as for other senior Washington policymakers. It also participates in a number of Intelligence Community-wide activities, including assessments and estimates. The State Department conducts its counterdrug activities under 22 U.S.C. 2651, which establishes the State Department, and sections 481 and 482 of the Foreign Assistance Act (22 U.S.C. 2291). Within INR, the Office of Analysis for Terrorism, Narcotics, and Crime provides briefings and analyses related to illicit narcotics to the Bureau of International Narcotics and Law Enforcement Affairs and senior Department personnel. The Office analyzes news articles, foreign service reports, and intelligence products. 
While the Department of State is not formally engaged in intelligence collection, diplomatic reporting provides human intelligence that is available not only to the Office of Analysis for Terrorism, Narcotics, and Crime but also to other intelligence consumers. INR’s six regional analysis offices occasionally produce finished intelligence on drug-related issues, generally dealing with the counterdrug policies of foreign governments. The Office of Analysis for Terrorism, Narcotics, and Crime prepares finished intelligence analyses for The Secretary’s Morning Intelligence Summary as well as ad hoc memoranda and spot reports to specific senior Department policymakers. The Office also participates in the preparation of the International Narcotics Control Strategy Report used in the annual drug “certification” exercise. As of October 1, 1997, the State Department had three full-time intelligence research specialists for narcotics analysis. The High Intensity Drug-Trafficking Area (HIDTA) Program’s counterdrug intelligence mission is to facilitate the flow of intelligence among federal, state, and local law enforcement agencies. ONDCP conducts this program under the Anti-Drug Abuse Act of 1988, as amended (P.L. 100-690, Nov. 18, 1988). As of October 1, 1997, the Director of ONDCP had designated 17 areas as the most critical drug-trafficking areas of the United States. Funded by ONDCP, the HIDTA Program attempts to reduce drug-trafficking in the United States by developing partnerships among local, state, and federal drug-control agencies in the designated areas and creating systems for them to coordinate their efforts. Program guidance requires that each HIDTA develop a regional intelligence unit to provide for event and criminal subject deconfliction, ad hoc post-drug seizure analysis, collocated access to major databases, access to regional intelligence, and collection of trend and pattern data. 
As of October 1, 1997, all 17 HIDTAs had intelligence units in various stages of development. Most HIDTAs have one regional intelligence unit to provide intelligence and information-sharing services. Some HIDTAs have more than one intelligence unit; together, these units make up the HIDTA intelligence initiative for that area. For example, the Los Angeles HIDTA intelligence initiative includes (1) the Los Angeles Joint Drug Intelligence Group, which conducts strategic and operational intelligence analyses; (2) the Los Angeles County Regional Criminal Information Clearinghouse, which provides 24-hour deconfliction and tactical intelligence services and operational analyses; and (3) the Inland Narcotics Clearinghouse, which primarily provides intelligence support to enforcement efforts in its designated geographic area within the overall HIDTA. In addition to the intelligence units, each HIDTA is required to develop joint drug task force enforcement efforts, focusing on the most significant international, national, regional, and local drug-trafficking and drug money-laundering organizations operating in its area. These task forces, as well as the participating federal, state, and local enforcement agencies, play a principal role in counterdrug intelligence collection efforts at the HIDTAs. HIDTA task forces and participating law enforcement officers collect human and photographic intelligence and conduct court-authorized electronic surveillance of individuals and organizations suspected of illegal drug activities during their normal enforcement activities. 
The types of intelligence products produced by many HIDTA intelligence units include:
- responses to queries received from HIDTA task forces and federal, state, and local law enforcement agencies within their areas, such as information obtained and/or analyzed from their search of various law enforcement and commercial databases;
- organizational or individual profiles that provide information on specific drug organizations or individual traffickers;
- telephone toll and other link analyses;
- post-drug seizure analyses;
- annual assessments of the drug threat in their areas and how that threat may impact other areas;
- officer bulletins regarding particular counterdrug enforcement issues;
- monthly intelligence summaries; and
- analyses, such as charts and graphics, prepared for presentation at trials or grand juries.

Federal counterdrug intelligence personnel that participate in the HIDTA program are included in their respective parent agency’s personnel estimates. No amounts were provided for 1995 and 1996.

ONDCP has established five goals to reduce illegal drug use:
1. Educate and enable America’s youth to reject illegal drugs as well as alcohol and tobacco.
2. Increase the safety of America’s citizens by substantially reducing drug-related crime and violence.
3. Reduce health and social costs to the public of illegal drug use.
4. Shield America’s air, land, and sea frontiers from the drug threat.
5. Break foreign and domestic drug sources of supply.

Three of the strategy’s five goals—goals 2, 4, and 5—are supported by counterdrug intelligence, which plays an important role in the execution of the strategy. The following table identifies which goal each counterdrug intelligence organization supports. Legend: X indicates the 1997 National Drug Control Strategy goals supported by each counterdrug intelligence organization’s efforts.
- Headquarters, Washington, D.C.
- Special Operations Division, Newington, Va.
- El Paso Intelligence Center, El Paso, Tex.
- El Paso District Office, El Paso, Tex.
- San Diego Division, San Diego, Calif.
- Headquarters, Washington, D.C.
- Field Office, El Paso, Tex.
- Headquarters, Johnstown, Penn.
- Headquarters, Washington, D.C.
- Border Patrol
- El Paso Sector, El Paso, Tex.
- San Diego Sector, San Diego, Calif.
- Headquarters, Washington, D.C.
- El Paso SAC Office, El Paso, Tex.
- Bridge of the Americas Port of Entry, El Paso, Tex.
- San Diego SAC Office, San Diego, Calif.
- Otay Mesa Cargo Facility, Otay Mesa, Calif.
- Domestic Air Interdiction Coordination Center, Riverside, Calif.
Financial Crimes Enforcement Network, Washington, D.C.
- Headquarters, Washington, D.C.
- Intelligence Coordination Center, Suitland, Md.
- Headquarters, Washington, D.C.
Office of Drug Enforcement Policy and Support, Washington, D.C.
Joint Task Force Six, El Paso, Tex.
Joint Interagency Task Force East, Key West, Fla.
Joint Interagency Task Force West, Alameda, Calif.
- Counterdrug Analysis Directorate, Crystal City, Va.
- Headquarters, Fort Meade, Md.
- Office of Naval Intelligence
- Counterdrug Analysis Division, Suitland, Md.
National Imagery and Mapping Agency
- Transnational Issues Division
- Counterdrug Branch, Crystal City, Va.
- Headquarters, Washington, D.C.
- Bureau of Intelligence and Research
- Office of Analysis for Terrorism, Narcotics, and Crime, Washington, D.C.
Office of National Drug Control Policy
- Headquarters, Washington, D.C.
- Los Angeles High Intensity Drug-Trafficking Area, Los Angeles, Calif.
- Southwest Border High Intensity Drug-Trafficking Area, San Diego, Calif.
- West Texas High Intensity Drug-Trafficking Area Partnership, El Paso, Tex.

Robert P. Glick, Senior Evaluator
Amy E. Lyon, Senior Evaluator

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. 
VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO evaluated federal counterdrug intelligence coordination efforts, focusing on identifying: (1) organizations that collect or produce counterdrug intelligence; (2) the role of these organizations; (3) federal funding they receive; and (4) the number of personnel that support this function. GAO noted that: (1) more than 20 federal or federally funded organizations, spread across 5 cabinet-level departments and 2 cabinet-level organizations, have a principal role in collecting or producing counterdrug intelligence; (2) together, these organizations collect domestic and foreign counterdrug intelligence information using human, electronic, photographic, and other technical means; (3) this information is used by U.S. policymakers to formulate counterdrug policy and by law enforcement agencies to learn about the groups that traffic in drugs and to identify the points at which drug-trafficking operations are the most vulnerable; (4) the amount of federal funds spent on counterdrug intelligence programs and activities and the number of federal personnel assigned to counterdrug intelligence functions are difficult to determine; (5) there is no governmentwide budget or single source from which to obtain this data, including the Office of National Drug Control Policy's National Drug Control Strategy Budget Summary; (6) in addition, it is difficult to determine the spending and personnel for counterdrug intelligence because: (a) most organizations have neither separate budget line items nor personnel positions authorized specifically for counterdrug intelligence; and (b) some agency functions and personnel serve multiple purposes or support multiple missions; (7) unclassified information reported to GAO by counterdrug intelligence organizations shows that over $295 million was spent for counterdrug intelligence activities during fiscal year 1997 and over 1,400 federal personnel were engaged in these activities; and (8) the 
Departments of Justice, the Treasury, and Defense account for over 90 percent of the money spent and personnel involved.
The federal Food Stamp Program is intended to help low-income individuals and families obtain a more nutritious diet by supplementing their income with benefits to purchase eligible foods (such as meat, dairy products, fruits, and vegetables, but not items such as soap, tobacco, or alcohol) at authorized food retailers. FNS pays the full cost of food stamp benefits and shares the states’ administrative costs—with FNS usually paying slightly less than 50 percent—and is responsible for promulgating program regulations and ensuring that state officials administer the program in compliance with program rules. The states administer the program by determining whether households meet the program’s eligibility requirements, calculating monthly benefits for qualified households, and issuing benefits to participants through an electronic benefits transfer system. In fiscal year 2005, the Food Stamp Program issued almost $28.6 billion in benefits to about 25.7 million individuals per month, and the maximum monthly food stamp benefit for a household of four living in the continental United States in fiscal year 2007 was $518. As shown in figure 1, program participation decreased during the late 1990s, partly due to an improved economy, but rose again from 2000 to 2005. The number of food stamp recipients follows the trend in the number of people living at or below the federal poverty level. In addition to the economic growth in the late 1990s, another factor contributing to the decrease in the number of participants from 1996 to 2001 was the passage of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA), which added work requirements and time limits to cash assistance and made certain groups ineligible to receive food stamp benefits. In some cases, this caused participants to believe they were no longer eligible for food stamps when their TANF benefits ended. 
Since 2000, that downward trend has reversed, and experts believe that the downturn in the U.S. economy, coupled with changes in the Food Stamp Program’s rules and administration, has led to an increase in the number of food stamp participants. Eligibility for participation in the Food Stamp Program is based primarily on a household’s income and assets. To determine a household’s eligibility, a caseworker must first determine the household’s gross income, which cannot exceed 130 percent of the poverty level for that year as determined by the Department of Health and Human Services, and net income, which cannot exceed 100 percent of the poverty level (or about $1,799 per month for a family of three living in the continental United States in fiscal year 2007). Net income is determined by deducting from gross income a portion of expenses such as dependent care costs, medical expenses for elderly individuals, utilities costs, and housing expenses. The application process for the Food Stamp Program requires households to complete and submit an application to a local assistance office, participate in an interview, and submit documentation to verify household circumstances (see table 1). Applicants may need to make more than one visit to the assistance office to complete the application process. After eligibility is established, households are certified eligible for food stamps for periods ranging from 1 to 24 months, depending on household circumstances and state policy. While households are receiving benefits, they must report changes in household circumstances that may affect eligibility or benefit amounts. States may choose to require households to report changes within 10 days of occurrence (incident reporting) or at specified intervals (periodic reporting). 
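The two-step income test described above can be sketched in code. This is an illustrative simplification, not the actual determination procedure: real caseworker determinations use HHS poverty guidelines that vary by household size and year, and several deductions are capped. The dollar thresholds below approximate a family of three in the continental United States in fiscal year 2007.

```python
# Illustrative sketch of the two-step food stamp income test.
# Thresholds approximate a family of three in the continental United
# States, fiscal year 2007; real determinations use HHS poverty
# guidelines by household size and apply caps to certain deductions.

NET_LIMIT = 1_799                      # about 100% of poverty (monthly)
GROSS_LIMIT = round(NET_LIMIT * 1.30)  # 130% of poverty, about $2,339

def passes_income_test(gross_income: float, deductions: float) -> bool:
    """Apply the gross-income test, then the net-income test.

    deductions: allowable amounts (a portion of dependent care costs,
    medical expenses for elderly individuals, utilities costs, and
    housing expenses) subtracted from gross income to get net income.
    """
    if gross_income > GROSS_LIMIT:           # step 1: gross income test
        return False
    net_income = gross_income - deductions   # step 2: net income test
    return net_income <= NET_LIMIT

# A household grossing $2,200 a month with $500 in deductions passes
# both tests; one grossing $2,500 fails the gross test outright.
print(passes_income_test(2_200, 500))  # True
print(passes_income_test(2_500, 500))  # False
```

The two-step order matters: a household over the gross limit is ineligible regardless of its deductions.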
States also have the option to adopt a simplified system, which further reduces the burden of periodic reporting by requiring households to report changes that happen during a certification period only when their income rises above 130 percent of the federal poverty level. Once the certification period ends, households must reapply for benefits, at which time eligibility and benefit levels are redetermined. The recertification process is similar to the application process. Households can be denied benefits or have their benefits end at any point during the process if they are determined ineligible under program rules or for procedural reasons, such as missing a scheduled interview or failing to provide the required documentation. While applying for and maintaining food stamp benefits has traditionally involved visiting a local assistance office, states have the flexibility to give households alternatives to visiting the office, such as using the mail, the telephone, and on-line services to complete the certification and recertification process. Alternative methods may be used to support other programs, such as Medicaid or TANF, since some food stamp participants receive benefits from multiple programs. Figure 2 illustrates a traditional office-based system and how states can use a number of alternative methods to determine applicants’ eligibility without requiring them to visit an assistance office. FNS and the states share responsibility for implementing a quality control system used to measure the accuracy of caseworker decisions concerning the amount of food stamp benefits households are eligible to receive and decisions to deny or end benefits. 
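One summary measure FNS derives from this quality control system is a combined payment error rate. The sketch below assumes the common dollar-weighted form (overpayment dollars plus underpayment dollars, divided by total benefits issued); FNS's actual methodology, based on samples of case reviews, is more involved.

```python
# Hedged sketch of a combined payment error rate. Assumes the
# dollar-weighted form: (overpayment dollars + underpayment dollars)
# / total benefits issued. FNS's actual quality control methodology,
# based on a sample of reviewed cases, is more involved.

def payment_error_rate(overpaid: float, underpaid: float,
                       total_issued: float) -> float:
    """Combined error rate as a percentage of total benefits issued."""
    return 100.0 * (overpaid + underpaid) / total_issued

# Illustrative dollar figures only (not taken from the report), chosen
# to land near the 5.84 percent record low reported for 2005.
print(round(payment_error_rate(1.2e9, 0.47e9, 28.6e9), 2))  # 5.84
```

Note that overpayments and underpayments are added, not netted, so errors in opposite directions do not cancel out.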
The food stamp payment error rate is calculated by FNS for the entire program, as well as for every state, by adding overpayments (including payments higher than the amounts households are eligible for or payments to those who are not eligible for any benefit) and underpayments (payments lower than the amounts households are eligible for). The national payment error rate declined by about 40 percent between 1999 and 2005, from 9.86 percent to a record low of 5.84 percent. Food Stamp Program payment errors are caused primarily by caseworkers, usually when they fail to keep up with new information, and by participants when they fail to report needed information. Another type of error measured by FNS is the negative error rate, defined as the rate of cases denied, suspended, or terminated incorrectly. An example of incorrectly denying a case would be if a caseworker denied a household participation in the program because of excess income, but there was a calculation error and the household was actually eligible for benefits. FNS also monitors individual fraud and retailer trafficking of food stamp benefits. According to our survey, almost all states allow households to submit applications, report changes, and submit recertifications through the mail, and 26 states have implemented or are developing systems to allow households to perform these tasks on-line. Almost half of the states are using or developing call centers, and states are also using flexibility authorized by FNS to increase use of the telephone as an alternative to visiting the local assistance office. States have taken a variety of actions to help households use on-line services and call centers, such as sending informational mailings, holding community meetings, and using community partners to assist households. Many states are allowing households to apply for food stamp benefits, report changes in household circumstances, and complete recertification through the mail and on-line. Mail-In Procedures. 
Results of our survey show that households can submit applications through the mail in all states, report changes through the mail in all but 1 state, and submit recertifications through the mail in 46 states. For example, Washington state officials told us that the recertification process involves mailing a recertification application package to households that they can mail back without visiting a local assistance office. On-line Services. All states we surveyed reported having a food stamp application available for households to download from a state website, as required by federal law, and 26 states (51 percent) have implemented or are developing Web-based systems in which households can submit initial applications, report changes, or submit recertifications on-line (see fig. 3). Most on-line applications were made available statewide and implemented within the last 3 years, and states developing on-line services plan to implement these services within the next 2 years. All of the 14 states that reported currently providing on-line services allow households to submit initial food stamp applications on-line, but only 6 states allow households to report changes and 5 states allow households to complete recertification on-line. Of the 14 states that reported using on-line applications, 2 reported they were only available in certain areas of the state. Only two states (Florida and Kansas) reported in our survey that the state closed program offices or reduced staff as a result of implementing on-line services. Legend for figure 3: on-line services available (14); not using on-line services (25). Many states are using call centers, telephone interviews, or other technologies to help households access food stamp benefits or information without visiting a local assistance office. Call Centers. 
Nineteen states (37 percent) have made call centers available to households and an additional 4 states (8 percent) have begun development of call centers that will be available to households in 2007 (see fig. 4). Households have been able to use call centers in seven states for more than 3 years. Of the 19 states using call centers, 10 reported that call centers were only available in certain areas of the state. Only two states (Texas and Idaho) reported using private contractors to operate the call centers, but Texas announced in March 2007 that it was terminating its agreement with the private contractor (see fig. 10 for more details). FNS officials told us that the Idaho private call center provides general food stamp program information to callers, while inquiries about specific cases are transferred to state caseworkers. Indiana reported in our survey that the state plans to pilot call centers in certain areas of the state in August 2007 using a private contractor and complete a statewide transition in March 2008. Only two states (Florida and Arizona) reported in our survey that the state closed offices or reduced staff as a result of implementing call centers. Most states with call centers reported that households can use them to report changes in household circumstances, request a food stamp application and receive assistance filling it out, receive information about their case, or receive referrals to other programs. Only four states reported using their call centers to conduct telephone interviews. For example, local officials in Washington told us that households use their call center primarily to request information, report changes in household circumstances, and request an interview. Telephone interviews are conducted by caseworkers in the local assistance office. Telephone Interviews. Many states are using the flexibility provided by FNS to increase the use of the telephone as an alternative to households visiting the local assistance office. 
For example, FNS has approved administrative waivers for 20 states that allow states to substitute a telephone interview for the face-to-face interview for all households at recertification without documenting that visiting the assistance office would be a hardship for the household. In addition to making it easier on households, this flexibility can reduce the administrative burden on the state to document hardship. FNS also allows certain states implementing demonstration projects to waive the interview requirement altogether for certain households. States we reviewed varied in terms of the proportion of interviews conducted over the phone. For example, Florida state and local officials estimated that about 90 percent of the interviews conducted in the state are completed over the telephone. Washington state officials estimated that 10 percent of application interviews and 30 percent of recertification interviews are conducted by phone. Table 2 describes the types of flexibility available to states and how many are taking advantage of each. Other Technologies. Some states reported implementing other technologies that support program access. Specifically, according to our survey, 11 states (21 percent) have implemented an Integrated Voice Response (IVR) system, a telephone system that provides automated information, such as case status or the benefit amount, to callers but does not direct the caller to a live person. In addition, 11 states (21 percent) are using document management/imaging systems that allow case records to be maintained electronically rather than in paper files. All five of the states we reviewed have implemented, in at least certain areas of the state, mail-in procedures, on-line services, call centers, a waiver of the face-to-face interview at recertification, and document management/imaging systems. 
Three of the five states (Florida, Texas, and Washington) have implemented an integrated voice response system, and two (Florida and Utah) have implemented a waiver of the face-to-face interview at initial application. States have taken a variety of actions to help households use on-line services and call centers, such as sending informational mailings, holding community meetings, and employing call center staff who speak languages other than English, as shown in figures 5 and 6. States are using community-based organizations, such as food banks, to help households use alternative methods. All states implementing on-line services (14) and about half of states with call centers (10 of 19) use community partners to provide direct assistance to households. Among the states we reviewed, four provide grants to community-based organizations to inform households about the program and help them complete the application process. For example, Florida closed a third of its local assistance offices and has developed a network of community partners across the state to help households access food stamps. Florida state officials said that 86 percent of the community partners offer at least telephone and on-line access for completing and submitting food stamp applications. Community partner representatives in Washington, Texas, and Pennsylvania said that they sometimes call the call center with the household or on their behalf to resolve issues. Pennsylvania provides grants to community partners to help clients use the state’s on-line services. In addition to the assistance provided by community-based organizations, H&R Block, a private tax preparation firm, is piloting a project with the state of Kansas in which tax preparers who see that a household’s financial situation may qualify it for food stamp benefits can electronically submit an application for food stamps at no extra charge to the household. 
Insufficient information is available to determine the results of using alternative methods to access the Food Stamp Program, but state and federal officials report that alternative methods are helping some households. Few evaluations have been conducted that identify the effect of alternative methods on food stamp program access, decision accuracy, or administrative costs. Although states monitor the implementation of alternative methods, isolating the effects of specific methods is difficult, in part because states typically have implemented a combination of methods over time. Despite the limited information on the effectiveness of alternative methods, federal and state officials believe that these methods can help many households by making it easier for them to complete the application or recertification process. However, technology and staffing challenges can hinder the use of these methods. Few federal or state evaluations have been conducted to identify how using alternative methods, such as on-line applications or call centers, affects access to the Food Stamp Program, the accuracy of caseworker decisions about eligibility and benefit amounts, or administrative costs. Few evaluations have been conducted in part because evaluating the effectiveness of alternative methods is challenging, given that limited data are available, states are using a combination of methods, and studies can be costly to conduct. FNS and ERS have funded studies related to improving Food Stamp Program access, but none of these previous studies provide a conclusive assessment of the effectiveness of alternative methods and the factors that contribute to their success (see app. I for a list of the studies we selected and reviewed). Although these studies aimed to evaluate local office practices, grants, and demonstration projects, the methodological limitations of this research prevent assessments about the effectiveness of these efforts. 
An evaluation of the Elderly Nutrition Demonstration projects used a pre-post comparison group design to estimate the impact of the projects and found that food stamp participation among the elderly can be increased. Two of the projects evaluated focused on making the application process easier by providing application assistance and simplifying the process, in part by waiving the interview requirement. However, one of the drawbacks of this study is that its findings are based on a small number of demonstrations, which affects the generalizability of the findings. Two related FNS-funded evaluations are also under way, but it is unlikely these studies will identify the effects of using alternative methods. An implementation study of Florida’s efforts to modernize its system using call centers and on-line services involves a descriptive case study to be published in late summer 2007, incorporating both qualitative and quantitative data. The objectives of the study are to: describe changes to food stamp policies and procedures that have been made in support of modernization; identify how technology is used to support the range of food stamp eligibility determination and case management functions; and describe the experiences of food stamp participants, eligible non-participants, state food stamp staff, vendors, and community partners. This study will describe Florida’s Food Stamp Program performance over time in comparison to the nation, other states in the region, and other large states. Performance data that will be reviewed includes program participation in general and by subgroup, timeliness of application processing, payment error rates, and administrative costs. However, the study will not isolate the effect of the modernization efforts on program performance. 
A national study of state efforts to enhance food stamp certification and modernize the food stamp program involves a state survey and case studies of 14 states and will result in a site visit report in late summer 2007, a comprehensive report in March 2009, and a public-use database systematically describing modernization efforts across all the states in May 2009. The national study will focus on four types of modernization efforts: policy changes to modernize Food Stamp Program application, case management, and recertification procedures; reengineering of administrative functions; increased or enhanced use of technology; and partnering arrangements with businesses and nonprofit organizations. The goals of the study include documenting outcomes associated with food stamp modernization and examining the effect of these modernization efforts on four types of outcomes: program access, administrative cost, program integrity, and customer services. This study will compare performance data from the case study states with data from similar states and the nation as a whole; however, this analysis will not determine whether certain modernization efforts caused changes in performance. USDA has also awarded $5 million in fiscal year 2006 to 5 grantees in Virginia, California, Georgia, and Alabama to help increase access to the program, but there is currently no plan to publish an evaluation of the outcomes of these projects. The participation grants focus on efforts to simplify the application process and eligibility systems, and each grantee plans to implement strategies to improve customer service by allowing Web-based applications and developing application sites outside the traditional social services office. Grantees are required to submit quarterly progress reports and final reports including a description of project activities and implementation issues. 
Although few evaluations have been conducted, FNS monitors state and local offices and tracks state implementation of alternative methods to improve program access. FNS also collects and monitors data from states, such as the number of participants, amount of benefits issued, participation rates overall and by subgroup, timeliness of application processing, payment errors, negative errors, and administrative costs. FNS regional offices conduct program access reviews of selected local offices in all states to determine whether state and/or local policies and procedures served to discourage households from applying for food stamps or whether local offices had adopted practices to improve customer service. FNS also monitors major changes to food stamp systems using a process in which FNS officials review and approve plans submitted by states related to system development and implementation, including major upgrades. States like Texas, Florida, and Indiana that have implemented major changes to their food stamp systems, such as moving from a local assistance office service delivery model to call centers and on-line services, have worked with FNS through this process. Figure 7 describes FNS’s monitoring of Indiana’s plan to implement alternative access methods. FNS has also encouraged states to share information with one another about their efforts to increase access, but states reported needing additional opportunities to share information. FNS has funded national and regional conferences and travel by state officials to visit other states to learn about their practices, and has provided states a guide to promising practices for improving program access. The guide contains information about the goal of each practice, the number of places where the practice is in use, and contact information for a person in these offices. 
However, this guide has not been updated since 2002 and, for the most part, does not include any evidence that these efforts were successful or any lessons that were learned from these or other efforts. In 2004, in response to recommendations from our prior report, FNS compiled and posted 19 practices from 11 states aimed at improving access. FNS also has a form available on its website where states can submit promising practices to improve access, but to date, practices from this effort have not been published. In our survey, 13 states (about 25 percent) reported needing additional conferences or meetings with other states to share information. States also report monitoring use of alternative methods in the Food Stamp Program, but have not conducted evaluations of their effectiveness. In our survey, states reported monitoring several aspects of the performance of on-line services. As shown in figure 8, states most commonly used the number of applications submitted, the number of applications terminated before completion, and customer satisfaction to monitor the performance of on-line services. For example, Pennsylvania state officials monitor performance of their on-line system and meet regularly with community partners that help households submit applications for benefits to obtain feedback on how they can improve the system. Florida state officials told us they use responses to on-line feedback surveys submitted at the end of the on-line application to assess customer satisfaction with the state’s on-line services. States also reported in our survey monitoring several aspects of the performance of their call centers. As shown in figure 9, most states with call centers reported monitoring the volume of transactions and calls to the center, customer satisfaction, the rate of abandoned calls, and the length of time callers are on hold before speaking with a caseworker. 
For example, Utah officials monitor several measures and added additional staff to the call center after observing increased hold times when they were implementing the call center to serve the Salt Lake City area. In addition, Washington state officials told us that they monitor call centers on an hourly basis, allowing call center managers to quickly increase the number of staff answering phones as call volumes increase. Despite these monitoring efforts, no states reported conducting an evaluation of the effectiveness of on-line services in our survey and only one state reported conducting such an evaluation of its call centers. The report Illinois provided on its call center described customer and worker feedback on the performance of the call center, but did not provide a conclusive assessment of its effectiveness. Seven states implementing Combined Application Projects (CAP) have submitted reports to FNS including data on the number of participants in the CAP project compared with when the project began, but do not use methods to isolate the effect of the project or determine whether participation by SSI recipients would have increased in the absence of the project. Two of the five states we reviewed said they planned to conduct reviews of their system. For example, Washington is conducting an internal quality improvement review of its call centers. It will compare call center operations with industry best practices and promising new technologies, and will identify the costs, services offered, and best practices used by the call centers. Few evaluations have been conducted, in part because evaluating the effectiveness of alternative methods is challenging. For example, states are limited in their ability to determine whether certain groups of households are able to use alternative methods because few states collect demographic information on households that use their on-line services and call centers. 
Only six states reported in our survey that they collect demographic information on the households that use on-line services and four states reported collecting demographic information on the households that use call centers. In addition, although FNS is requiring states with waivers to the face-to-face interview to track the payment accuracy of cases covered by these waivers, FNS has not yet assessed the effects of these methods on decision accuracy because it has not collected enough years of data to conduct reliable analyses of trends. Further, evaluations that isolate the effect of specific methods can be challenging because states implement methods at different times and are using a combination of methods. For example, Washington state implemented call centers in 2000, an on-line application and CAP in 2001, and document imaging and a waiver of the face-to-face interview at recertification in 2003. Sophisticated methodologies often are required to isolate the effects of certain practices or technologies. These studies can be costly to conduct because the data collection and analysis can take years to complete. For example, the two studies that we reviewed that aimed to isolate the effects of specific projects each cost over $1 million and were conducted over more than 3 years. Although evaluating the effects of alternative methods is challenging, FNS is collecting data from states through the waiver process that could be analyzed and previous ERS-funded studies have used methodologies that enable researchers to identify the effect of certain projects or practices on program access. Despite the limited information on the effects of alternative methods, federal and state officials report that alternative methods, such as the availability of telephone interviews, can help many types of households by making it easier for them to complete the food stamp application or recertification process. 
Some state and local officials and community partners noted, however, that certain types of households may have difficulty using some methods. Moreover, some officials also described how technology and staffing challenges can hinder the use of these methods. According to federal and state officials we interviewed, alternative methods can help households in several ways, such as increasing flexibility, making it easier to receive case information or report changes to household circumstances, or increasing efficiency of application processing. In addition, community partner representatives from some states we reviewed said that the availability of telephone interviews helps reduce the stigma of applying for food stamp benefits caused by visiting an assistance office. Increased flexibility. Federal officials from the seven FNS regional offices said that alternative methods help households by reducing the number of visits a household makes to an assistance office or by providing additional ways to comply with program requirements. Moreover, all of the states in our survey that currently have on-line services, and more than half of the states that currently operate call centers, reported reducing the number of visits an individual must make to an office as a reason for implementing the alternative methods. For example, in Florida a household may submit an application or recertification through any one of the following access points: on-line, mail, fax, a community partner site, or in person at the local assistance office. Additionally, in certain areas of Texas, it is possible for households to apply for food stamps without ever visiting a local assistance office because the state has made available phone interviews and on-line services. Reducing the number of required visits can be helpful for all households, according to state officials or community partner representatives in two of the states we reviewed. 
Easier access to case information and ability to report changes. According to officials in the five states we reviewed, alternative methods, such as call centers, automated voice response systems, or electronic case records, make it easier for households to access information about their benefits and report changes to household circumstances. For example, in Washington, a household may call the automated voice response system 24 hours a day, 7 days a week to immediately access case information, such as appointment times or whether their application has been received or is being processed. If the household has additional questions, they can call the call center where a call center agent can view their electronic case record and provide information on the status of their application, make decisions based on changes in household circumstances reported to them, inform them of what verification documents are needed or have been received, or perform other services. Increased efficiency. State or local officials from four of the states we reviewed said that implementation of document management/imaging systems improves application processing times, while local officials in two of the states said that call centers help caseworkers complete tasks more quickly. Furthermore, about half of the states in our survey that have call centers reported that increasing timeliness of application processing and reducing administrative costs were reasons for implementing them. State officials in Florida said that the document management/imaging system allows a caseworker to retrieve an electronic case record in seconds compared to retrieving paper case files that previously took up to 24 hours, allowing caseworkers to make eligibility decisions more quickly on a case. Additionally, a call center agent can process a change in household circumstances instantly while on the phone. 
Caseworkers in Pennsylvania said that implementation of a change reporting call center has reduced the number of calls to caseworkers at the local assistance office, which allows them to focus on interviewing households and processing applications more quickly. Officials from four states we reviewed also said that use of a document management/imaging system has resulted in fewer lost documents, which can reduce the burden on households of having to resubmit information. According to some of the state officials and community partners we interviewed, the availability of alternative methods can be especially beneficial for working families or the elderly because it reduces barriers related to transportation, child care, or work responsibilities. For example, state officials in Florida explained that a working individual can complete a phone interview during a lunch break without taking time off from work to wait in line at the assistance office. In addition, state officials from three of the states we reviewed that have implemented CAP projects told us that they had experienced an increase in participation among SSI recipients, and FNS and officials from two states said that households benefited from the simplified application process. In addition, state officials in Florida said that on-line services help elderly households that have designated representatives to complete the application on their behalf. For example, an elderly individual’s adult child who is the appointed designated representative but lives out-of-state can apply and recertify for food stamp benefits for the parent without traveling to Florida. However, some state and local officials and community partners we interviewed said certain types of households may have difficulty using certain alternative methods. 
For example, community partner representatives in two states that we reviewed said that individuals with limited English proficiency, the elderly, immigrants, or those with mental disabilities may have difficulty using on-line applications. Local officials from Philadelphia said that the elderly and households with very low incomes may have trouble accessing computers to use on-line services and may not have someone to help them. A community partner in Florida told us that sometimes the elderly, illiterate, or those with limited English proficiency need a staff person to help them complete the on-line application. In addition, individuals with limited English proficiency, the elderly, or those with mental disabilities may have difficulty navigating the call center phone system, according to officials from two states and community partners from another state that we reviewed. A community partner representative in Texas said that sometimes he calls the call center on behalf of the applicant because a household may have experienced difficulty or frustration in navigating the phone system. Although officials told us that alternative methods are helpful to many households, challenges from inadequate technology or staffing may limit the advantages of alternative methods. For example, state officials from Texas explained that on-line applications without electronic signature capability have limited benefit because households are still required to submit an actual signature through mail, fax, or in person after completing the on-line application. Texas state officials and community partner representatives told us that the lack of this capability limited its use and benefit to households. By contrast, Florida’s application has electronic signature capability and Florida officials reported that, as of December 2006, about 93 percent of their applications are submitted on-line. Call centers that do not have access to electronic records may not be as effective at answering callers’ questions. 
Officials from Washington state and federal officials from an FNS regional office view the use of a document management/imaging system as a vital part of the call center system. Florida advocates said that households have received wrong information from call center agents and attribute the complaints in part to call center agents not having access to real-time electronic case records. Florida recently expanded its document imaging system statewide, which they believe will help address these concerns. Further, while four of the five states we reviewed implemented alternative methods in part to better manage increasing numbers of participants with reduced numbers of staff, the staffing challenges certain states experienced also limited the advantages of alternative methods. For example, inadequate numbers of staff or unskilled call center staff may reduce the level of service provided and limit the advantages to households of having a call center available to them. Texas and Florida have experienced significant staff reductions at a time of increased participation, which has affected implementation of alternative methods (see figs. 10 and 11). While some states face challenges implementing alternative methods, Utah state officials said that they have successful call centers because they have implemented technology incrementally over time and because they use state caseworkers experienced in program rules. Utah state officials also reported having relatively low caseloads (180 per worker) compared with Texas (815 per worker, in 2005). To maintain program integrity while implementing alternative methods for applying and recertifying for food stamps, officials from the states we reviewed reported using a variety of strategies, some of which were in place long before implementation of the alternative access methods. 
Some states used finger imaging, electronic signatures, and special verification techniques to validate the identity of households using call centers or on-line services. In addition, states use databases to verify information provided by households and to follow up on discrepancies between information reported by the household and information obtained from other sources. Officials in the five states we reviewed did not believe that the use of alternative methods had increased fraud in the program. Further, despite concern that a lack of face-to-face interaction with caseworkers would lead to more households being denied benefits for procedural reasons, such as missing a scheduled interview, our limited analysis indicated no considerable fluctuations in the rate of procedural denials and officials from the states we reviewed reported taking actions to prevent them. Some states have taken several actions to prevent improper food stamp payments and fraud while implementing alternative methods. Nationally, states have systems in place to protect program integrity and the states we reviewed described how they prevent improper payments and fraud as they implement alternative access methods. Finger imaging. Nationwide, four states currently use finger imaging of food stamp applicants to prevent households from applying more than once for benefits. FNS officials commented that the agency has not concluded that finger imaging enhances program integrity and that it may have a negative effect on program access by deterring certain households from applying. Electronic signatures. FNS reported in October 2006 that nine states use electronic signatures to validate the identity of on-line users of their systems. For example, Florida’s on-line application asks applicants to click a button signifying that they are signing the application. Of the states we reviewed, Pennsylvania, Florida, and Washington have on-line services with electronic signatures. 
In-depth interview for high-risk cases. In Florida, a case that is considered to have a greater potential for error or fraud is flagged as a “red track” case, and it receives an in-depth interview to more fully explore eligibility factors. FNS officials commented that Florida uses an abbreviated interview with most households and that their in-depth interview for red track cases may be equivalent to the standard interview process used in other states. Special training for call center agents. Call center agents in the five states we reviewed are trained to verify callers’ identities by asking for specific personal information available in the file or in the states’ records. Pennsylvania has developed specialized interview training, including a video, for eligibility workers on conducting telephone interviews of households applying or recertifying for the Food Stamp Program. One element of the training is how to detect misinformation being provided by a household. For example, if records indicate that a household member is currently incarcerated and benefits are being claimed for that person, call center agents are trained to probe for additional information. Similarly, Utah trains telephone interviewers to request more information if needed to clarify discrepancies in the case, such as a household reporting rent payments too high to be covered by the household’s reported income. Data matching. States have used data matching systems for many years and all five states we reviewed used software either developed by the state or obtained through a third-party vendor to help with verification of household circumstances. For example, data matching software can match state food stamp caseloads against wage reporting systems and other databases to identify unreported household income and assets. 
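The core cross-check that data matching software performs can be sketched in a few lines. This is only an illustration of the general technique, not any state's actual system: the field names, the $100 tolerance, and the sample records below are all hypothetical.

```python
# Hypothetical sketch of a data-matching cross-check: compare income
# reported on food stamp cases against a wage-records database and
# flag discrepancies for caseworker follow-up. All names and values
# are illustrative, not drawn from any state's system.

TOLERANCE = 100  # hypothetical monthly-income discrepancy threshold, in dollars

def flag_discrepancies(caseload, wage_records, tolerance=TOLERANCE):
    """Return the case IDs whose reported income differs from matched
    wage-record income by more than the tolerance."""
    flagged = []
    for case in caseload:
        wage_income = wage_records.get(case["ssn"])
        if wage_income is None:
            continue  # no match in the wage database; nothing to compare
        if abs(case["reported_income"] - wage_income) > tolerance:
            flagged.append(case["case_id"])
    return flagged

caseload = [
    {"case_id": "A1", "ssn": "111", "reported_income": 900},
    {"case_id": "B2", "ssn": "222", "reported_income": 400},
]
wage_records = {"111": 950, "222": 1200}  # e.g., from a wage reporting system

print(flag_discrepancies(caseload, wage_records))  # -> ['B2']
```

In practice, as the report notes, such matches run against multiple databases (wages, assets, interstate rolls), and flagged cases generate a notice to the caseworker rather than an automatic benefit change, since discrepancies must be investigated before action is taken.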
Utah and Washington have developed software that automatically compares information provided by applicants and recipients with information contained in state databases, such as income and employment information. State officials told us that using this software greatly reduces the burden on caseworkers, who would otherwise have to search multiple databases one at a time. In addition to requiring case workers to access state and federal data sources to verify information, Texas contracts with a private data vendor to obtain financial and other background information on food stamp applicants and recipients. After a household has started receiving benefits, states conduct additional data matching, and their systems generate a notice to the caseworker if there is a conflict between what the household reported and information obtained from another source. The information in these notices is investigated to ensure that recipients receive the proper level of benefits. Finally, about half of all states participate in the voluntary quarterly matching of their food stamp rolls with those of other states to detect individuals receiving food stamp benefits in more than one state at a time. Food stamp officials in four of the states we reviewed said that they did not believe the use of alternative methods has increased the frequency of fraud and abuse in the program and officials in one state were unsure and collecting data to help determine whether the frequency of fraud had increased. Texas caseworkers, for example, told us they did not think telephone interviews increased fraud because they believed the verification conducted by caseworkers and the states’ data matching system was sufficient. 
However, we have previously reported on the risk of improper payments and fraud in the Food Stamp Program. Because some risk of fraud and improper payments always exists, particularly given the high volume of cases and the complexity of the program, it is important that states include additional controls when changing their processes and continually assess the adequacy of those controls for preventing fraud. Some program experts have expressed concern that households would be denied for procedural reasons more frequently if they had less face-to-face interaction with caseworkers, although data have not borne out these concerns and states are taking actions to limit procedural denials. During our site visits, some officials reported examples of procedural denials resulting from alternative methods. For example, community group representatives in Florida said that some households were denied benefits because they could not get through to a call center agent to provide required verification in time. However, they also acknowledged that procedural denials due to not providing verification were frequent prior to the state implementing these methods. In addition, Texas officials said that some households were denied benefits for missing scheduled interviews when the private contractor was late in mailing notices of the interview appointments. Our limited analysis of FNS data for the five states we reviewed found no considerable fluctuations in the rate of procedural denials between fiscal years 2000 and 2005. However, a household’s failure to provide verification documents was the most common procedural reason for denial, suspension, or termination of benefits in the five states we reviewed. States we visited described their efforts to help households use alternative methods and prevent procedural denials for households that are not seen in person by caseworkers. 
Examples of actions the states we reviewed took to prevent procedural denials include: reviewing actions taken for cases that are denied, training caseworkers on preventing improper denials, routinely correcting addresses from returned mail, and developing automated system changes to prevent caseworkers from prematurely denying a case. For example, Utah trains its caseworkers to inform households of all deadlines, and their application tracking software automatically generates a list of households that have not scheduled an interview. This list is used by caseworkers to send notices to the households. Washington uses its document imaging center staff to process case actions associated with returned mail, including quickly correcting addresses. Over the last several years and for a variety of reasons, many states have changed their food stamp certification and recertification processes to enable households to make fewer visits to the local assistance office. Given our findings, it is important for states to consider the needs of all types of households when developing alternative ways of accessing food stamp benefits. Despite making major changes in their systems, FNS and the states have little information on the effects of the alternative methods on the Food Stamp Program, including what factors contribute to successful implementation, whether these methods are improving access to benefits for target groups, and how best to ensure program integrity. Without up-to-date information about what methods states are using and the factors that contribute to successful implementation of alternative methods, states and the federal government most likely will continue to invest in large-scale changes to their certification and recertification processes without knowing what works and in what contexts. 
Although FNS is beginning to study state efforts in this regard, these studies are not designed to systematically evaluate whether specific methods contributed to achieving positive outcomes. In addition, FNS has not thoroughly analyzed the data received from states implementing waivers of the face-to-face interview to determine, for example, whether it should allow states to use telephone interviews in lieu of face-to-face interviews for all types of households without a waiver. Further, while FNS is using its Website to disseminate information about promising practices, the information available is not up-to-date, making it difficult to easily locate current information about specific practices. Enhancing the research, collection, and dissemination of promising practices could be an important resource for states that want to provide households effective alternatives to visiting local assistance offices to receive food stamp benefits. To improve USDA’s ability to assess the effectiveness of its funded efforts, we recommend that the Secretary of Agriculture take the following actions: direct FNS and the Economic Research Service to work together to enhance their research agendas to include projects that would complement ongoing research efforts and determine the effect of alternative methods on program access, decision accuracy, and administrative costs. Such projects would reliably identify the alternative methods that are effective and the factors that contribute to their success; and direct FNS to conduct analyses of data received from states implementing waivers or demonstration projects waiving the face-to-face interview and require states implementing waivers or demonstration projects to collect and report data that would facilitate such analyses. 
Such analyses would identify the effect of the waivers on outcomes such as payment accuracy and could help determine whether the use of the waiver should be further expanded or inform whether regulations should be changed to allow telephone interviews for all households without documenting hardship. In addition, we recommend that the Secretary of Agriculture help states implement alternative methods to provide access to the Food Stamp Program by directing FNS to disseminate and regularly update information on practices states are using to implement alternative access methods to the traditional application and recertification process. The information would not be merely a listing of practices attempted, but would include details on what factors or contexts seemed to make a particular practice successful and what factors may have reduced its effectiveness. We provided a draft of this report to the U.S. Department of Agriculture for review and comment. We met with FNS and ERS officials on April 16, 2007, to obtain their comments. In general, the officials agreed with our findings, conclusions, and recommendations. They discussed the complexity and variability of state modernization efforts and the related challenges of researching the effects of these efforts. For example, policy changes, organizational restructuring, and the engagement of community organizations in the application process may occur simultaneously with implementation of alternative methods and play a significant role in state and client experiences. Having multiple interrelated factors creates challenges for researching the effects of modernization efforts. Nonetheless, the officials highlighted steps the agency is taking to monitor and evaluate state implementation of alternative access methods. 
First, the officials commented that as modernization evolves, FNS is using its administrative reporting system to consistently and routinely track changes in state program performance in the areas of application timeliness, food stamp participation by subgroups, payment accuracy, and administrative costs. Second, they stated that the two related FNS-funded studies currently underway will be comparing performance data from the case study states with data from similar states; however, this analysis will not determine whether certain modernization efforts caused changes in performance. Third, they stated that FNS plans to analyze data they are collecting from states as part of the administrative waiver process to determine the effect of telephone interviews on payment accuracy. Finally, ERS officials noted that Food Stamp Program access is an area in which the agency continues to solicit research from the private sector as well as other government agencies and that ERS makes data available to support these research efforts. FNS and ERS also provided us with technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or nilsens@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
To understand what alternatives states are using to improve program access and what is known about the results of using these methods, we examined: (1) what alternative methods to the traditional application and recertification process states are using to increase program access; (2) what is known about the results of these methods, particularly on program access for target groups, decision accuracy, and administrative costs; and (3) what actions states have taken to maintain program integrity while implementing alternative methods. To address these issues, we surveyed food stamp administrators in the 50 states and the District of Columbia, conducted four state site visits (Florida, Texas, Utah, and Washington) and one set of semi-structured telephone interviews (Pennsylvania), analyzed data provided by the Food and Nutrition Service (FNS) and the selected states, reviewed relevant studies, and held discussions with program stakeholders, including officials at FNS headquarters and regional offices, and representatives of advocacy organizations. We performed our work from September 2006 to March 2007 in accordance with generally accepted government auditing standards. To learn about state-level use of alternative methods to help households access the Food Stamp Program, we conducted a Web-based survey of food stamp administrators in the 50 states and the District of Columbia. The survey was conducted between December 2006 and February 2007 with 100 percent of state food stamp administrators responding. The survey included questions about the use of alternative methods to provide access to the program, including mail-in procedures, call centers, on-line services, and other technologies that support program access. In addition, we asked about the reasons for implementing these methods, whether states had conducted evaluations of the methods, what measures states used to evaluate the performance of the methods, and additional assistance needed from FNS. 
Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pre-testing draft instruments and using a Web-based administration system. Specifically, during survey development, we pre-tested draft instruments with officials in Washington, Arizona, Utah, and Wisconsin in October and November 2006. In the pre-tests, we were generally interested in the clarity of the questions and the flow and layout of the survey. For example, we wanted to ensure definitions used in the surveys were clear and known to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section was appropriate. We also used in-depth interviewing techniques to evaluate the answers of pretest participants, and interviewers judged that all the respondents’ answers to the questions were based on reliable information. On the basis of the pre-tests, the Web instrument underwent some slight revisions. A second step we took to minimize nonsampling errors was using a Web-based survey. By allowing respondents to enter their responses directly into an electronic instrument, this method automatically created a record for each respondent in a data file and eliminated the need for and the errors (and costs) associated with a manual data entry process. To further minimize errors, programs used to analyze the survey data were independently verified to ensure the accuracy of this work. After the survey was closed, we made comparisons between select items from our survey data and other national-level data. We found our survey data were reasonably consistent with the other data set. 
On the basis of our comparisons, we believe our survey data are sufficient for the purposes of our work. We conducted four site visits (Florida, Texas, Utah, and Washington) and one set of semi-structured telephone interviews (Pennsylvania). We selected states that have at least one FNS-approved waiver of the face-to-face interview requirement and reflect some variation in state participation rates. We also considered recommendations from FNS officials, advocacy group representatives, or researchers. We conducted in-depth reviews for each state we selected. We interviewed state officials administering and developing policy for the Food Stamp Program, local officials in the assistance offices and call centers where services are provided, and representatives from community-based organizations that provide food assistance. To supplement the information gathered through our site visits and in-depth reviews, we analyzed data provided by FNS for the states we reviewed. These analyses allowed us to include state trends for specific measures (Program Access Index, monthly participation, payment accuracy, administrative costs, and reasons for benefit denials) in our interviews with officials. To review the reasons for benefit denials, we used FNS’s quality control (QC) system data of negative cases used in error rate calculations. Specifically, we looked at the number and percentage of cases denied, terminated, or suspended by the recorded reason for the action in the five states we reviewed for fiscal years 2000 through 2005. Though our data allowed us to examine patterns in these areas before and after a method was implemented, we did not intend to make any statements about the effectiveness of methods implemented in the states we visited and reviewed. Instead, we were interested in gaining some insight through our interviews on how alternative methods may have affected state trends. 
Based on discussions with and documentation obtained from FNS officials, and interviews with FNS staff during site visits, we determined that these data are sufficiently reliable for our limited review of state trends. In addition, we selected and reviewed several studies and reports that relate to the use of alternative methods to increase food stamp program access. These studies included food stamp participation outcome evaluations that were funded by FNS and the Economic Research Service (ERS) and focused on practices aimed at improving access to the Food Stamp Program. To identify the selected studies, we conducted library and Internet searches for research published on food stamp program access since 1990, interviewed agency officials to identify completed and ongoing studies on program access, and reviewed bibliographies that focused on program access concerns. For each selected study, we determined whether the study’s findings were generally reliable. Two GAO social science analysts evaluated the methodological soundness of the studies, and the validity of the results and conclusions that were drawn. The studies we selected and reviewed include: U.S. Department of Agriculture, Economic Research Service, Food Stamp Program Access Study: Final Report, by Bartlett, S., N. Burstein, and W. Hamilton, Abt Associates Inc. (Washington, D.C.: November 2004). U.S. Department of Agriculture, Economic Research Service, Evaluation of the USDA Elderly Nutrition Demonstrations, by Cody, S. and J. Ohls, Mathematica Policy Research, Inc. (Washington, D.C.: May 2005). U.S. Department of Agriculture, Food and Nutrition Service, Office of Analysis, Nutrition and Evaluation, Evaluation of Food Stamp Research Grants to Improve Access Through New Technology and Partnerships, by Sheila Zedlewski, David Wittenburg, Carolyn O’Brien, Robin Koralek, Sandra Nelson, and Gretchen Rowe. (Alexandria, Va.: September 2005). U.S. 
Department of Agriculture, Food and Consumer Service, Evaluation of SSI/FSP Joint Processing Alternatives Demonstration, by Carol Boussy, Russell H. Jackson, and Nancy Wemmerus. (Alexandria, Va.: January 2000). Combined Application Project Evaluations submitted to FNS by seven states: Florida, Massachusetts, Mississippi, North Carolina, South Carolina, Texas, and Washington. Heather McCallum Hahn, Assistant Director, Cathy Roark, Analyst-in-Charge, Kevin Jackson, Alison Martin, Daniel Schwimer, Gretchen Snoey, Rachael Valliere, and Jill Yost made significant contributions to this report. Food Stamp Program: FNS Could Improve Guidance and Monitoring to Help Ensure Appropriate Use of Noncash Categorical Eligibility. GAO-07-465. Washington, D.C.: March 28, 2007. Food Stamp Program: Payment Errors and Trafficking Have Declined despite Increased Program Participation. GAO-07-422T. Washington, D.C.: January 31, 2007. Food Stamp Trafficking: FNS Could Enhance Program Integrity by Better Targeting Stores Likely to Traffic and Increasing Penalties. GAO-07-53. Washington, D.C.: October 13, 2006. Improper Payments: Federal and State Coordination Needed to Report National Improper Payment Estimates on Federal Programs. GAO-06-347. Washington, D.C.: April 14, 2006. Food Stamp Program: States Have Made Progress Reducing Payment Errors, and Further Challenges Remain. GAO-05-245. Washington, D.C.: May 5, 2005. Food Stamp Program: Farm Bill Options Ease Administrative Burden, but Opportunities Exist to Streamline Participant Reporting Rules among Programs. GAO-04-916. Washington, D.C.: September 16, 2004. Food Stamp Program: Steps Have Been Taken to Increase Participation of Working Families, but Better Tracking of Efforts Is Needed. GAO-04-346. Washington, D.C.: March 5, 2004. Financial Management: Coordinated Approach Needed to Address the Government’s Improper Payments Problems. GAO-02-749. Washington, D.C.: August 9, 2002. 
Food Stamp Program: States’ Use of Options and Waivers to Improve Program Administration and Promote Access. GAO-02-409. Washington, D.C.: February 22, 2002. Executive Guide: Strategies to Manage Improper Payments: Learning from Public and Private Sector Organizations. GAO-02-69G. Washington, D.C.: October 2001. Food Stamp Program: States Seek to Reduce Payment Errors and Program Complexity. GAO-01-272. Washington, D.C.: January 19, 2001. Food Stamp Program: Better Use of Electronic Data Could Result in Disqualifying More Recipients Who Traffic Benefits. GAO/RCED-00-61. Washington, D.C.: March 7, 2000. Food Assistance: Reducing the Trafficking of Food Stamp Benefits. GAO/T-RCED-00-250. Washington, D.C.: July 19, 2000. Food Stamp Program: Information on Trafficking Food Stamp Benefits. GAO/RCED-98-77. Washington, D.C.: March 26, 1998.
One in 12 Americans participates in the federal Food Stamp Program, administered by the Food and Nutrition Service (FNS). States have begun offering individuals alternatives to visiting the local assistance office to apply for and maintain benefits, such as mail-in procedures, call centers, and on-line services. GAO was asked to examine: (1) what alternative methods states are using to increase program access; (2) what is known about the results of these methods, particularly on program access for target groups, decision accuracy, and administrative costs; and (3) what actions states have taken to maintain program integrity while implementing alternative methods. GAO surveyed state food stamp administrators, reviewed five states in depth, analyzed FNS data and reports, and interviewed program officials and stakeholders. All states use mail, and about half of states use or have begun developing on-line services and call centers to provide access to the Food Stamp Program. Almost all states allow households to submit applications, report changes, and submit recertifications through the mail, and 26 states have implemented or are developing systems for households to perform these tasks on-line. Almost half of the states are using or developing call centers, and states are also allowing households to complete telephone interviews instead of in-office interviews. States have taken a variety of actions to help households use on-line services and call centers, such as sending informational mailings, holding community meetings, and using community partners. Insufficient information is available to determine the results of using alternative methods. Few evaluations have been conducted identifying the effect of alternative methods on program access, decision accuracy, or administrative costs. 
Evaluating the effectiveness of alternative methods is challenging in part because limited data are available, states are using a combination of methods, and studies can be costly to conduct. Federal and state officials reported that while they believe alternative methods can help households in several ways, such as increasing flexibility and efficiency in the application process, certain types of households may have difficulty using or accessing alternative methods. In addition, technology and staffing challenges may hinder the use of alternative methods. To maintain program integrity while implementing alternative methods, the states GAO reviewed used a variety of strategies, such as using software to verify the information households submit, communicating with other states to detect fraud, or using finger imaging. Although there has been some concern that without frequent in-person interaction with caseworkers, households may not submit required documents on time and thus be denied benefits on procedural grounds ("procedural denials"), GAO's limited analysis of FNS data found no considerable fluctuations in the rate of procedural denials in the five states between fiscal years 2000 and 2005. The states GAO reviewed have instituted several approaches to prevent procedural denials.
NCI, within the National Institutes of Health (NIH), Public Health Service, Department of Health and Human Services, is the world’s largest sponsor of clinical trials in cancer treatment research. NCI spends about 20 percent of its research budget on clinical trials. Using its clinical trials network that includes cooperative groups, NCI funds therapeutic research that includes evaluating the safety and efficacy of investigational drugs in large multicenter clinical trials. NCI also sponsors therapeutic drug development through the submission of investigational new drug (IND) applications to FDA. FDA is responsible for ensuring the safety of the public in matters related to clinical research with investigational drugs. FDA regulations define the terms under which clinical research may proceed. Through the INDs, FDA reviews the experimental rationale for conducting clinical drug trials, including results of animal toxicology studies, manufacturing data, purity and stability information, and an initial plan for clinical investigation. The responsibility for monitoring the trials rests with the sponsor. Unexplained weight loss and physical deterioration of the body (cachexia) commonly accompany advanced cancer. Moreover, cachexia is associated with decreased survival. For example, data have shown that in patients with lung cancer, weight loss is associated with a 50-percent reduction in survival time. Joseph Gold, M.D., director of the Syracuse Cancer Research Institute in New York, proposed a theory to explain why cachexia commonly accompanies advanced cancer. After extensive research, Dr. Gold proposed the use of hydrazine sulfate, a chemical that interrupts the abnormal sugar metabolism associated with weight loss, to arrest and reverse cancer cachexia. In 1973, Dr. Gold reported on results of animal tests indicating that hydrazine sulfate inhibited the growth of various rodent tumors and enhanced the antitumor action of some chemotherapeutic drugs. In 1975, Dr. 
Gold reported the results of hydrazine sulfate’s use in cancer patients. Using reports from physicians whose advanced cancer patients were taking hydrazine sulfate, Dr. Gold noted several cases of tumor regression and subjective improvement in cancer patients treated with hydrazine sulfate. Additionally, Russian investigators have claimed some successes with hydrazine sulfate for more than 20 years. Although early clinical studies conducted in the United States found mixed results, later studies evaluating hydrazine sulfate as an anticachexia agent suggested that the drug benefited some cancer patients. In the 1980s, studies at the Harbor-University of California Los Angeles (UCLA) Medical Center indicated that adding hydrazine sulfate to a standard chemotherapy regimen improved the nutritional status and survival time of some cancer patients. Of particular interest was a randomized clinical trial—involving 65 patients with advanced, inoperable non-small-cell lung cancer—that compared chemotherapy and hydrazine sulfate with chemotherapy and placebo. Data from the study suggested that hydrazine sulfate may benefit some cancer patients. While overall survival differences between the two treatment groups were not significant, researchers found that hydrazine sulfate improved survival in a subset of study patients who began the trial in better overall condition. Given the inconclusiveness of the study, UCLA investigators believed that further trials of hydrazine sulfate were warranted to determine its effectiveness in improving survival. NCI sponsored three clinical trials that were designed to assess the effect of hydrazine sulfate on survival, weight gain, and quality of life. Two trials in patients with advanced lung cancer assessed the efficacy of hydrazine sulfate as an adjunct to chemotherapy. 
One of these trials, in patients with advanced lung cancer, was conducted by the Cancer and Leukemia Group B (CALGB) and led by a principal investigator at the Scripps Clinic and Research Foundation in San Diego, California. The other trial, in advanced lung cancer patients, was conducted by the North Central Cancer Treatment Group (NCCTG) and led by a principal investigator at the Mayo Clinic in Rochester, Minnesota. The third trial assessed the efficacy of hydrazine sulfate as the sole medical intervention in patients with advanced colon cancer. This trial was also conducted by NCCTG and led by the same principal investigator at the Mayo Clinic. Figure 1 shows highlights of activities surrounding NCCTG’s and CALGB’s clinical trials of hydrazine sulfate. The NCI-sponsored clinical trials did not find the survival advantage observed in the earlier UCLA study. Results from the three clinical trials were published in June 1994. The data showed that hydrazine sulfate therapy does not result in any significant benefit. Specifically, in two trials involving over 500 patients with inoperable non-small-cell lung cancer, the addition of hydrazine sulfate to a standard chemotherapy regimen resulted in somewhat worse quality of life, no effect on weight gain or loss, and a suggestion of decreased survival when compared with placebo. In the trial evaluating the use of hydrazine sulfate as the sole therapeutic intervention in 127 patients with metastatic colon cancer, survival time for patients receiving hydrazine sulfate was decreased compared with patients given placebo. Criticisms regarding the design of the three clinical trials arose in the media. Proponents of hydrazine sulfate therapy alleged that NCI compromised the trials by permitting study patients to ingest agents that they believed were incompatible with hydrazine sulfate. 
Some proponents believed that the concurrent use of tranquilizers, barbiturates, or alcohol with hydrazine sulfate would nullify the therapeutic effect of hydrazine sulfate and cause toxicity in patients. They based their beliefs on Russian and unpublished animal studies as well as some pharmacological data that they said suggested that hydrazine sulfate interacts with tranquilizing agents (particularly tranquilizing agents classified as benzodiazepines), barbiturates, and alcohol. NCI rejected concerns that the concurrent use of hydrazine sulfate with tranquilizers, barbiturates, or alcohol would nullify the therapeutic effect of hydrazine sulfate. NCI concluded that there was no objective evidence or published studies of humans addressing interactions between hydrazine sulfate and these alleged incompatible agents to support the concerns. NCI concluded that, at most, Russian animal data suggested that large doses of alcohol or barbiturate medications consumed with hydrazine sulfate can increase the total overall toxicity. NCI also concluded that unpublished animal data did not support the hypothesis that the short-term use of tranquilizing agents with hydrazine sulfate would increase toxicity or prevent clinical benefit. These conclusions were based on the assessment of NCI scientists in consultation with CALGB and NCCTG researchers. In addition, NCI officials told us that because the UCLA study did not specifically prohibit patients from taking barbiturates and tranquilizers or consuming alcohol, the NCI-sponsored confirmatory trials did not have to be any different in that regard. Nevertheless, in response to the issue of incompatibility, NCCTG investigators (at the Mayo Clinic) prohibited patients from taking barbiturates or consuming alcohol. Furthermore, patients were prohibited from taking tranquilizing agents, except for antiemetic purposes. 
The NCCTG principal investigator told us that he felt it would have been unethical to perform either of the Mayo clinical trials without allowing the use of antiemetic drugs, including drugs otherwise considered to be tranquilizers, by patients experiencing nausea or vomiting. CALGB investigators, however, decided that it was more important to replicate the UCLA trial than attempt to address concerns of incompatibility by prohibiting the use of tranquilizers, barbiturates, and alcohol. Because the NCI-sponsored trials were designed to confirm the improved survival observed in the UCLA trial, CALGB investigators believed they should use essentially similar methods to those used in the earlier UCLA trial. Published reports of the three trials did not disclose the extent of tranquilizer use among study patients. In examining research records, however, we found that patients in all three NCI-sponsored clinical trials of hydrazine sulfate were prescribed tranquilizers under varying conditions. In the NCCTG and CALGB clinical trials in non-small-cell lung cancer, virtually all patients received a variety of antiemetic drugs, particularly tranquilizing agents, for the short-term relief of chemotherapy-induced vomiting. Our review of over 50 percent of CALGB’s standard research forms revealed that patients with advanced lung cancer routinely received benzodiazepine and phenothiazine tranquilizing agents to relieve the nausea and vomiting associated with chemotherapy. Although CALGB investigators decided not to collect data on concurrent medications on their standardized research forms, some research associates voluntarily provided information on antiemetic usage. Data on the use of antiemetic medications for about half of the patients in our review were recorded on the research forms. Our analysis of research forms listing antiemetic medications revealed that 88 percent of the patients received benzodiazepine and 71 percent received phenothiazine tranquilizing agents. 
Generally, it appeared that most patients were prescribed tranquilizing medications for short-term emetic relief. In several instances, however, patients were prescribed tranquilizing agents on an “as needed” continual basis. We also found one instance where a patient was prescribed a barbiturate. At our request, NCCTG and CALGB research associates reviewed research forms, medical records, or both for patients enrolled in their lung cancer trials to collect data on the concurrent use of antiemetic and barbiturate medications. Table 1 shows the number of patients receiving hydrazine sulfate and various antiemetic medications. Investigators said it was necessary to prescribe antiemetic medications, including tranquilizing agents, to patients in all three clinical trials. In the two clinical trials involving patients with advanced lung cancer, patients received chemotherapy in addition to hydrazine sulfate or placebo. Because the chemotherapeutic regimen used to treat advanced lung cancer induces severe nausea and vomiting in almost all patients, NCCTG and CALGB investigators did not deem it feasible or ethical to administer chemotherapy without the concurrent use of antiemetic drugs. During the time when NCCTG and CALGB were conducting their trials, the most effective antiemetic regimens available involved the use of tranquilizing agents. Accordingly, many study patients in both clinical trials were prescribed phenothiazines and benzodiazepines. Although patients in NCCTG’s colon cancer trial were not undergoing chemotherapy, some patients with advanced colon cancer experience nausea and vomiting associated with their disease. NCCTG and CALGB investigators told us that they would not deny standard medical care to control nausea and vomiting in patients who were dying from cancer. Also, patients enrolled in the UCLA clinical trial reportedly received tranquilizing agents while taking hydrazine sulfate. 
Medical records for 40 study patients treated at Harbor-UCLA Medical Center indicated that 22 patients received hydrazine sulfate and chemotherapy. In addition, these patients received tranquilizing agents to control their chemotherapy-induced vomiting. Specifically, patients who received hydrazine sulfate also received a total of 16 doses of benzodiazepines and 20 doses of phenothiazines. Other possible uses of tranquilizing agents and barbiturates outside of chemotherapy treatment as well as possible alcohol use are not known. Analyses of data from NCI-sponsored clinical trials found no evidence of adverse effects on survival associated with hydrazine sulfate and the use of tranquilizing agents as antiemetics and barbiturates. Researchers at the Mayo and Scripps clinics retrospectively analyzed clinical trial data in an attempt to address the issue of incompatibility raised by hydrazine sulfate proponents. Their analyses suggested that the concurrent use of hydrazine sulfate with tranquilizing agents or barbiturates did not adversely affect the survival of lung cancer patients enrolled in the hydrazine sulfate trials. Also, their post-trial analyses did not change the conclusions originally drawn from the clinical trials: There was no benefit for patients who received hydrazine sulfate compared with those who received placebo. Because patients who entered later in NCCTG’s lung cancer trial did not receive benzodiazepine tranquilizing agents as antiemetics, NCCTG investigators were able to retrospectively compare the clinical outcomes of patients who received benzodiazepines with those of patients who did not. The data showed no statistically significant differences in survival time between patients who received hydrazine sulfate and a benzodiazepine tranquilizer as an antiemetic and patients who received hydrazine sulfate and new non-benzodiazepine antiemetics. 
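A subgroup survival comparison of this kind is conventionally done with a log-rank test. The sketch below implements the test from scratch in Python on invented data (the patient times shown are illustrative, not trial data), comparing hypothetical hydrazine sulfate patients who did and did not receive a benzodiazepine antiemetic:

```python
def logrank_chi2(times_a, events_a, times_b, events_b):
    """Two-sample log-rank chi-square statistic (1 degree of freedom).

    times_*  : follow-up time for each patient
    events_* : 1 if death was observed, 0 if the patient was censored
    """
    data = ([(t, e, "a") for t, e in zip(times_a, events_a)] +
            [(t, e, "b") for t, e in zip(times_b, events_b)])
    observed = expected = variance = 0.0
    for t in sorted({t for t, e, _ in data if e}):
        n_a = sum(1 for tt, _, g in data if g == "a" and tt >= t)  # at risk, group a
        n = sum(1 for tt, _, g in data if tt >= t)                 # at risk, total
        d_a = sum(1 for tt, e, g in data if g == "a" and e and tt == t)
        d = sum(1 for tt, e, g in data if e and tt == t)           # deaths at time t
        observed += d_a
        expected += d * n_a / n
        if n > 1:  # hypergeometric variance term is undefined when only 1 at risk
            variance += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    return (observed - expected) ** 2 / variance

# Hypothetical survival times (months); all deaths observed, none censored.
with_benzo = [2, 4, 6]
without_benzo = [3, 5, 7]
stat = logrank_chi2(with_benzo, [1, 1, 1], without_benzo, [1, 1, 1])
print(round(stat, 3))  # well below the 3.84 cutoff for significance at p = 0.05
```

A statistic below the chi-square critical value, as here, corresponds to the "no statistically significant difference in survival" conclusion the investigators reported for the benzodiazepine subgroups.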
Furthermore, analyses showed no statistically significant differences in terms of time to disease progression for patients who received hydrazine sulfate and a benzodiazepine tranquilizer compared with those who did not. CALGB researchers also looked retrospectively at this incompatibility issue. Beginning in January 1995, CALGB conducted a retrospective review of primary medical records and documented the medications that were used by patients enrolled in its clinical trial of hydrazine sulfate. On June 5, 1995, we received the results of CALGB’s examination of the effect of benzodiazepines, barbiturates, or phenothiazines on patient survival. The data showed no statistically significant differences in survival between patients who received hydrazine sulfate and barbiturates or benzodiazepines or phenothiazines and patients who received hydrazine sulfate but none of these allegedly incompatible agents. Furthermore, the data also showed no statistically significant differences in survival between patients who received hydrazine sulfate and barbiturates or benzodiazepines or phenothiazines and patients who received placebo and any of these agents. FDA handled the issue of possible incompatibility differently in approving the use of hydrazine sulfate by individual physicians than it did in approving the NCI-sponsored clinical trials. FDA recommended that NCI-sponsored investigators monitor study patients to detect possible interactions between hydrazine sulfate and possibly incompatible agents. However, while NCI was conducting its clinical trials, FDA was cautioning other physicians to avoid possibly incompatible agents when administering hydrazine sulfate. In reviewing NCI’s IND applications to conduct clinical trials of hydrazine sulfate, FDA raised safety concerns to NCI regarding hydrazine sulfate’s interactions with other drugs, including tranquilizing agents. 
In his review of NCI’s IND, the FDA medical officer stated, “The following drugs are interdicted, due to known interactions: ethanol, barbiturates, and tranquilizers.” This was followed by a recommendation that NCI outline all precautions to be taken by study investigators “to fully explore the neurotoxic potential of hydrazine.” NCI complied. FDA took a more conservative view of the use of possibly incompatible agents with hydrazine sulfate under its compassionate use program. Before completion of the NCI-sponsored clinical trials, FDA approved more than 70 applications permitting the compassionate use of hydrazine sulfate. Because of publicity given to hydrazine sulfate, FDA received many requests from individual physicians for approval to use hydrazine sulfate on a case-by-case “compassionate” basis on the chance that patients with no other available effective therapy might benefit. A central nervous system depressant effect associated with hydrazine sulfate consistently prompted FDA to caution patients regarding the use of hydrazine sulfate with any potential sedative agent. In its approvals, FDA staff requested that physicians caution their patients to avoid tranquilizers, barbiturates, and alcohol while taking hydrazine sulfate. FDA officials told us that the reason for this instruction was that these physicians were not trained clinical investigators and, under the circumstances, would be less likely to recognize adverse reactions from interactions between hydrazine sulfate and possibly incompatible agents. NCI contributed to the subsequent controversy surrounding these trials by not requiring better data collection and analysis of this issue. Although NCI officials were aware of the concerns surrounding the use of allegedly incompatible agents with hydrazine sulfate, they did not believe it was necessary to maintain research records during its trial regarding concurrent medications and possible alcohol use. 
NCI and CALGB documents, however, stated that all data, including concurrent medications taken by study patients, would be recorded on standardized research forms. NCI staff, for example, wrote to media representatives: “[A]ll concurrent medications were well documented in the Cancer and Leukemia Group B (CALGB) study (a routine component of clinical trials data collection) so that any differences in study outcomes could be reviewed from the perspective of these potential ‘incompatibles’.” Despite these assurances, CALGB did not uniformly collect data on the use of concurrent medications, including tranquilizing agents and barbiturates, and possible alcohol use. Furthermore, in a published article describing the results of the clinical trial, CALGB investigators incorrectly reported that data on the use of concurrent medications were recorded on standardized research forms. CALGB investigators should have accurately reported their data collection efforts. In addition, NCI should have ensured that CALGB investigators prospectively collected data on concurrent medications and alcohol use on research forms to permit investigators to analyze trial data to determine the possible effects of these agents on patients taking hydrazine sulfate. A paper presenting the final results of the CALGB clinical trial did not clearly describe the use of tranquilizing agents by study patients. Authored by the principal investigator for the trial, this scientific paper did not accurately reflect the widespread use of tranquilizing agents in the CALGB lung cancer trial. In the published paper, the investigator wrote that “no patients received barbiturates and virtually no patients received phenothiazine-type tranquilizers, with the exception of prochlorperazine . . ., which was used as a short-term antiemetic.” Data from the medical records, however, showed that phenothiazines, including prochlorperazine, were prescribed to 80 percent of study patients. In addition, over 88 percent of study patients were prescribed benzodiazepines. 
Medical records also showed that approximately 5 percent of study patients were treated with barbiturates. The principal investigator told us that he used data submitted by some research associates to form his “impressions” of concurrent medication usage. Because CALGB did not routinely collect data on concurrent medications, however, the data used to support his impressions are not an accurate and complete reflection of information contained in the medical records. In a letter to us dated February 27, 1995, the Chairman of the CALGB cooperative group said the principal investigator would prepare a letter to the Journal of Clinical Oncology correcting his statement regarding study patients’ use of barbiturates. The Chairman told us, however, that he believed the description of tranquilizer use was accurate. He based his assessment on, first, the fact that most medical records did not indicate that phenothiazines were prescribed for long-term use as tranquilizers. Second, the tranquilizing agents, phenothiazines and benzodiazepines, were interchangeable in the investigator’s description of their use as short-term antiemetics. Accordingly, he concluded that the principal investigator was justified in stating that “virtually no patients received phenothiazine-type tranquilizers.” We disagree with the Chairman in this regard. We believe the investigator erred in not reporting the widespread use of benzodiazepine tranquilizing agents. In June 1995, the Journal of Clinical Oncology published a letter to the editor from CALGB correcting and clarifying CALGB’s published results. The letter corrected information on the use of barbiturates during CALGB’s clinical trial. The letter also clarified that in addition to the use of a phenothiazine tranquilizing agent as an antiemetic, many patients received a benzodiazepine antiemetic. 
In three large, randomized, placebo-controlled clinical trials sponsored by NCI, hydrazine sulfate was ineffective in extending the survival time for certain cancer patients. The developer of hydrazine sulfate therapy has suggested that the trials were compromised because investigators permitted some study patients to take agents that are possibly incompatible with hydrazine sulfate. We confirmed that all three trials permitted some use of tranquilizing agents to varying degrees and one trial permitted the use of barbiturates and alcohol. Specifically, many patients received short-term dosages of tranquilizers for antiemetic purposes. Retrospective analyses, however, found no evidence that the use of allegedly incompatible agents adversely affected NCI’s clinical trial results. Although our work did not support the allegation that the studies were flawed, NCI should have made sure that complete and accurate records were kept during CALGB’s clinical trial regarding concurrent medications and possible alcohol use. Furthermore, this issue should have been analyzed on a more timely basis in the NCCTG and CALGB clinical trials, and the published results of CALGB’s trial should have been accurate with regard to tranquilizer use. In commenting on a draft of this report, the Public Health Service agreed with the report’s main conclusion that there is no evidence to support the allegation that the three trials sponsored by NCI were flawed. In addition, retrospective analyses suggested that the use of tranquilizers as antiemetic agents, barbiturates, or alcohol by patients receiving hydrazine sulfate did not produce greater toxicities or interfere with hydrazine sulfate’s alleged benefits. (See app. II for a copy of the Public Health Service’s comments.) The Public Health Service did not agree, however, that either NCI or the clinical investigators were remiss for not ensuring that concurrent medications were recorded on research forms. 
NCI and CALGB documents provided that data on concurrent medications would be recorded on research forms. As noted previously, NCI staff wrote to media representatives that all concurrent medications were well documented as a routine part of the trial’s data collection so that any differences in outcomes could be analyzed in terms of the allegedly incompatible agents. Although CALGB informed NCI that the use of concurrent medications would be captured on the patient research forms in accordance with the research plan, CALGB investigators did not uniformly record this information on the forms as originally intended. Under the terms of the cooperative agreement that provided funding for CALGB’s clinical trial, it was the responsibility of CALGB to record such data. NCI officials told us that the agency has specific expectations with respect to cooperative group performance and it is the grantee’s responsibility to successfully accomplish these. We believe, however, that NCI, as the funding agency, has the oversight responsibility for ensuring that their expectations are met. Furthermore, CALGB should have complied more completely with its proposed plan for data recording. The Public Health Service also does not believe that NCCTG and CALGB clinical investigators should be criticized for not having analyzed data on concurrent medications promptly. The issue of incompatibility was consistently part of the public controversy surrounding the NCI-sponsored clinical trials of hydrazine sulfate. Therefore, we believe that NCI was remiss as were NCCTG and CALGB investigators for not settling the controversy by promptly analyzing data on the impact of specific medications on the effects of hydrazine sulfate. The Public Health Service agreed that the initial published article describing the findings of CALGB’s study was not accurate with respect to the use of tranquilizing agents as antiemetics and barbiturates. 
NCI criticized this lapse and ensured that a letter from the CALGB investigator was published that provided more complete and accurate information. The Public Health Service also provided technical comments, which have been incorporated in our report where appropriate. We are sending copies of this report to the Secretary of Health and Human Services and the Director of NCI; the Commissioner of Food and Drugs; and interested congressional committees. Copies will also be made available to others upon request. Please call me at (202) 512-7119 if you or your staff have any questions. Other major contributors to this report include Barry D. Tice, Assistant Director, (202) 512-4552, and Gloria E. Taylor. To obtain information for this report, we reviewed NCI’s policy guidance for conducting clinical trials of investigational agents, agency memorandums documenting protocol development for the hydrazine sulfate clinical trials, and related correspondence. We also discussed the conduct of these trials with NCI officials, cooperative group representatives and investigators, FDA officials, officials in the Office of Research Integrity, and proponents of hydrazine sulfate to obtain their perspectives on the issues involved. We performed an extensive literature search on hydrazine sulfate as well as topics related to cancer research, the conduct of clinical trials, approaches to chemotherapy treatment, and drugs to control chemotherapy-induced vomiting. In addition, we discussed the issue of incompatibility with a leading Russian researcher and viewed several hours of a taped interview with senior Russian oncologists. We also discussed the interpretation of animal data with experts in pharmacology. In our examination of the extent to which barbiturates and tranquilizers were used during the clinical trials, we reviewed research records maintained by the data management and statistical centers for each cooperative group. 
For the CALGB clinical trial, we visited the cooperative group’s Data Management Center located at Duke University. We randomly selected research records for 137 of 291 study patients for review. For the NCCTG clinical trial, we visited the Data Management Center for the cooperative group at the Mayo Clinic. Before our arrival, NCCTG research associates had compiled a list of antiemetic medications administered to each study patient. We randomly selected 15 percent of 116 study patients’ research records to verify the accuracy of NCCTG’s data collection efforts. We conducted our work from July 1994 to April 1995 in accordance with generally accepted government auditing standards.
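The record-sampling approach described above (drawing a random subset of patient records, without replacement, to verify data collection) can be sketched in a few lines. The function name and seed below are illustrative only and are not drawn from the report:

```python
import random

def draw_verification_sample(record_ids, fraction, seed=None):
    """Select a random fraction of records, without replacement,
    for verification review (hypothetical helper; not from the report)."""
    rng = random.Random(seed)
    sample_size = max(1, round(len(record_ids) * fraction))
    return sorted(rng.sample(record_ids, sample_size))

# For example, 15 percent of 116 study records, as in the NCCTG review:
records = list(range(1, 117))
sample = draw_verification_sample(records, 0.15, seed=42)
print(len(sample))  # 17 records
```

Sampling without replacement ensures no record is verified twice, and a fixed seed makes the selection reproducible for later review.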
Pursuant to a congressional request, GAO provided information on the National Cancer Institute (NCI)-sponsored clinical trials of the anticancer drug hydrazine sulfate, focusing on: (1) NCI protocol design and data management procedures; (2) how NCI and the trials' investigators dealt with the drug's potential incompatibility with certain agents; (3) the extent to which patients received these incompatible agents; and (4) how the investigators reported the issue. GAO found that: (1) the three large NCI-sponsored clinical trials showed that the drug did not prolong cancer patients' survival; (2) controversy surrounding the trials focused on trial participants' use of tranquilizers, barbiturates, and alcohol, which were allegedly incompatible with the drug; (3) clinical trial records showed that participants used tranquilizers under varying circumstances, particularly for relief from vomiting; (4) the investigators believed that it was unethical to withhold antiemetic medications from patients undergoing chemotherapy; (5) subsequent analyses of patients' use of concurrent medications did not invalidate NCI conclusions that the drug was ineffective; (6) the Food and Drug Administration (FDA) may have contributed to the confusion surrounding the trials due to its more conservative position on how the drug should be administered to some patients; (7) although FDA approved more than 70 applications permitting the use of hydrazine sulfate, it cautioned physicians about their patients' use of tranquilizing agents while on the drug; (8) there were lapses in recordkeeping and reporting because NCI did not require complete and accurate research records on the patients' use of tranquilizing agents during the trials; and (9) NCI-sponsored investigators only recently analyzed this issue, since published results did not accurately describe the widespread use of tranquilizers during the trials.
The military services rely on major training exercises to assess their units’ strengths and weaknesses. These exercises generally take place at combat training centers that often enable units to train in an environment that closely parallels that of actual warfare. The primary centers used for conducting major exercises include (1) the Army’s combat training centers at Fort Irwin, California; Fort Polk, Louisiana; and Hohenfels, Germany; (2) the Marine Corps’ Air Ground Combat Center at Twentynine Palms, California; and (3) the Air Force’s Weapons and Tactics Center at Nellis Air Force Base, Nevada. The Navy conducts major training exercises at the Naval Strike Warfare Center in Fallon, Nevada, and during worldwide fleet operations. Joint military exercises are conducted at many worldwide locations, including Germany, South Korea, Egypt, and Central America.

The services use electronic instrumentation, observers, and subject matter experts to monitor and record the results of the exercises so that they can objectively document performance. Additional information on the services’ capabilities is obtained from the results of actual military operations, such as Operation Desert Storm. The services document the results of military training exercises and operations in after-action reports, which include lessons learned information. The units use such information in preparing for operations and environments associated with their assigned combat missions and in tailoring training for anticipated future missions and events. In addition, lessons learned information can help the services and the Joint Staff identify recurring weaknesses in key areas. The services and the Joint Staff can then publicize problem areas and deficiency trends, allowing others to benefit from their experiences, and institute corrective actions. 
According to senior military leaders, weaknesses can be addressed through changes to such areas as doctrine, training and education, tactics, leadership, and materiel. Our prior work has revealed that the Army has not effectively used lessons learned information to eliminate recurring deficiencies and change doctrine, revise tactics, or develop improved training strategies. In September 1993, we reported that, although the Army was doing a good job of identifying lessons learned, it was not achieving the maximum benefit from the lessons in terms of changed doctrine or revised training practices because it lacked procedures for prioritizing the lessons and for tracking necessary changes in training and doctrine. In July 1986, we reported that Army assessments of exercise results identified many recurring deficiencies, yet the Army had not developed a system to identify causes of and solutions to problem areas. Each service and the Joint Staff has its own program for incorporating lessons learned information into its operations. The Department of Defense (DOD) has no regulations that establish policies for or require uniformity among the services’ lessons learned programs. Since no overall guidance exists, the services have taken different approaches to developing and operating their programs. However, even though the programs differ, the services and the Joint Staff use after-action reports as the primary source of information for their programs. As an example, Army lessons learned program guidance states that lessons learned programs should (1) effectively gather, analyze, disseminate, and use lessons learned information so that actions can be taken to correct deficiencies and (2) have a means for testing or validating whether the corrective action actually resolves the deficiency. The Army’s lessons learned program started in 1986. It is run by the Center for Army Lessons Learned, which is operated by the Training and Doctrine Command. 
The Center has a staff of about 25 civilian and military analysts who collect observations from exercises and operations, develop trends of deficiencies, and publish the results of their analyses in bulletins and newsletters that receive widespread distribution throughout the Army.

The Marine Corps’ lessons learned program started in 1989 to centralize lessons learned information and address deficiencies identified in after-action reports. The program is managed by the Marine Corps Combat Development Command and staffed with four full-time personnel. The program collects, processes, and disseminates lessons learned information throughout the Marine Corps through the use of Compact Disc—Read Only Memory (CD-ROM) technology. The Marine Corps also prepares special lessons learned reports for users on specific subjects, such as the Hurricane Andrew disaster in Florida and Operation Restore Hope in Somalia.

The Navy’s lessons learned program was established in 1991 at its headquarters in Washington, D.C., and reorganized in 1993 under the direction of the newly established Naval Doctrine Command in Norfolk, Virginia. The program is run by two naval officers at the Doctrine Command and a civilian who manages the database. In addition, four of the Navy’s major commands serve as management sites for lessons learned information. These sites screen lessons learned information submitted by naval units and decide, within their respective area of authority, what information is appropriate for the Navy’s lessons learned database. The Navy’s program collects, evaluates, and disseminates lessons learned information on operational and tactical issues. Similar to the Marine Corps, the Navy distributes its lessons learned information to users through CD-ROM technology.

The Air Force’s lessons learned program is the only one of the services’ programs that is decentralized. 
As a result, each of the Air Force’s six major operational commands is responsible for developing and managing its own lessons learned program. Air Force regulations do not require that the commands’ programs be uniform, so each command can take different approaches to operating its program. In fact, the four major commands that we visited or contacted during our review all had different lessons learned programs. The programs were designed to account for, act on, and share lessons learned information throughout the command, but not throughout the Air Force. The staffing levels for the lessons learned programs varied from one to three individuals. The Air Force also operates a limited lessons learned program at its headquarters. This program, which is staffed by two people, addresses lessons learned information that results only from the Air Force’s participation in joint exercises or operations or that affects more than one of the major commands’ missions. The Joint Staff established the Joint Center for Lessons Learned to maintain and manage a centralized database on lessons learned information from joint military operations and exercises. This information, which includes ways to improve practices or overcome problems, is disseminated periodically among the services. The Center is staffed with two military analysts and one contractor representative who are assisted, when necessary, by representatives of each military service. We reviewed the lessons learned programs in the military services and the Joint Staff to determine their effectiveness in (1) collecting all significant lessons learned information, (2) analyzing lessons to identify recurring weaknesses, (3) disseminating lessons to all potential users, and (4) implementing corrective action and validating results. 
To do so, we reviewed the regulations and program guidance related to the lessons learned programs within each service and the Joint Staff and the policies and systems that implement the regulations and guidance. We determined how the services and the Joint Staff obtain, document, and input lessons learned data from participants in exercises and operations into their lessons learned programs. We examined the extent to which the services and the Joint Staff analyzed lessons learned information to develop trends that could highlight recurring deficiencies. Furthermore, we examined the methods and mediums the services and the Joint Staff used to provide lessons learned information to their units and analyze outputs provided from the systems. We reviewed individual lessons learned reports contained in service and Joint Staff databases that showed the results of exercises and operations. We used this information to identify recurring deficiencies, including those that could affect the success or outcome of an exercise or operation. We also examined whether the services and the Joint Staff had remedial action systems to address deficiencies. We determined whether remedial action systems had procedures for measuring the effectiveness of solutions that were developed for deficiencies. We interviewed service officials who managed the lessons learned programs to obtain their views on the programs. We also obtained the views of Army, Air Force, and Marine Corps officials at combat training centers as they repeatedly observed the performance of large numbers of units. In addition, we interviewed the leadership of selected units that participated in large-scale training exercises to determine how they used lessons learned information to improve their performance and how they generated lessons learned from their training or operational experiences. (See app. II for a list of the military organizations we visited or contacted during our work.) 
Information about the Army’s lessons learned program, however, is based primarily on issues developed in our September 1993 report and limited follow-up discussions with officials at the Center for Army Lessons Learned. We performed our review from December 1993 to December 1994 in accordance with generally accepted government auditing standards.

The effectiveness of the services’ lessons learned programs varies considerably. The Marine Corps, Air Force, and Navy programs provide only limited assurance that significant lessons documented in combat training center analyses, fleet exercises, and after-action reports are included in their databases. This information is extremely important since, in several instances, it discloses weaknesses displayed by many units during their most important training exercises—those conducted at the services’ combat training centers. Some of the weaknesses, if not corrected, could have serious consequences on a real battlefield. Until the services take steps to ensure that all lessons learned information is collected in their databases, units will continue to miss a significant opportunity to avoid repeating past mistakes.

The cornerstone of the Marine Corps’ ground unit training is the combined arms exercise conducted at the Air Ground Combat Center, which provides extensive ground training to units about once every 2 years. From this training, Marine Corps evaluators and senior leaders assigned to the Center prepare several reports containing lessons learned information that are not included in the Marine Corps lessons learned database. This information could be extremely valuable to commanders preparing for a future combat training center rotation, since it documents tasks that commanders did not perform well and provides examples of successful practices used by others to avoid similar problems. 
Moreover, the reports summarize performance trends over time and include independent observers’ assessments of the performance of weapon systems and the effectiveness of doctrine. However, these reports are not routinely included in the Marine Corps’ lessons learned database. Some of the weaknesses discussed in these reports, if not corrected, could have serious consequences on a real battlefield. For example, one report said that indirect fire was placed on or behind friendly forces. This happened because of improper coordination and a lack of situational awareness. Recurring weaknesses in breaching operations—a critical component of any large-scale maneuver operation—are other examples of significant lessons learned information that were not captured by the lessons learned database. For example, a 1993 report prepared by the Center stated that a breaching operation failed because it was not rehearsed and coordination between the engineers and maneuver units was poor. In 1994, the Center again reported that coordination problems contributed to the breaching force being committed before the support force was in place and before the enemy defending the obstacle could be suppressed. The report also said that several vehicles were destroyed during the breaching operation because they veered outside prescribed lanes and into minefields. In another 1994 lessons learned report, the Center noted a weakness in the handling of intelligence information that sometimes led to erroneous conclusions by commanders about enemy intentions and force composition. This weakness was attributed to intelligence information that was seldom analyzed and incorporated in commanders’ battle plans. As a result, commanders were often forced to react to enemy initiatives rather than be proactive in shaping the battlefield. 
After-action reports prepared by participating units upon completion of their combat training center exercises are another important source of lessons learned information that was not fully captured. Even though units are required to submit after-action reports that document their performance at the Center, officials who manage the lessons learned program told us that they had received only about half of the reports required. According to these officials, noncompliance with the requirement is primarily due to the lack of emphasis throughout the chain of command. To address this problem, the Commander of the Marine Corps Combat Development Command sent a message in October 1994 to all major combat and support commands that stressed the importance of the lessons learned program and the need to improve after-action reporting. A lessons learned official told us that his office also periodically sent messages to units encouraging them to send in lessons learned information, but normally they did not follow up to determine compliance.

The Air Force’s Weapons and Tactics Center is the Air Force’s premier training center for tactical fighter units. It provides aircrew and support training in a simulated combat environment. Active duty tactical fighter units rotate through the Center about every 18 months and reserve units are expected to rotate about every 2 years. Training Center observers who oversee exercises capture lessons learned information about unit performance and enter the lessons in the Center’s lessons learned database. The Center’s database includes lessons in a variety of functional areas such as command, control, and communications; planning; and friendly fire incidents or fratricide. 
The following are examples of lessons that could help other units avoid repeating mistakes: Airborne Warning and Control System controllers or escort fighter pilots identified friendly aircraft as enemy forces because they were not familiar with the entire group of friendly aircraft on an air strike mission. Also, because friendly forces did not respond to threat information, the controllers were unsure which aircraft had been given the threat information and therefore were not able to focus their attention on other pressing matters. Air strike missions were conducted under compressed time frames because of inadequate planning. Also, escort aircraft did not provide unrestricted airspace for aircraft delivering munitions because of a failure to communicate plans. These deficiencies resulted in the ineffective delivery of bombs and missiles on targets. Recurring incidents of fratricide resulted from multiple causes, including the aircraft’s fuel tank configuration and color, and pilots’ failure to check or select the proper modes or codes in their electronic identification equipment. Because the Air Force’s lessons learned program is decentralized, each major operational command manages its own lessons learned program. Similar to the Marine Corps, Air Force lessons learned managers at one major command we visited were not successful in obtaining after-action reports covering units’ participation in various exercises and operations. For example, the command collected only five lessons learned from the numerous exercises and operations conducted by its subordinate units during 1991. Command personnel told us that this happened because units did not have the proper software for collecting lessons learned information. Also, one of the command’s subordinate composite air wings did not know that the command had a lessons learned program or that it was supposed to submit lessons learned information to the command. 
The wing, however, maintained a lessons learned database that contained significant information related to the operations of its composite wing. This information might have been useful to other Air Force composite wings with similar operations. The database included the following information: Mission commanders should use F-15E and F-16 fighter aircraft to protect B-1 and B-52 bombers after they leave the target area, and the fighters should have sufficient fuel to cover slow-moving bombers. Radar systems identified friendly aircraft as hostile, thus causing other aircraft to assume that friendly aircraft were threats and disrupting the air strike mission of a group of friendly aircraft. KC-135 tanker aircraft spacing should allow sufficient time for the first tanker to become airborne before the second tanker releases its brakes for takeoff. This margin of time would allow the first tanker to abort its mission, if necessary, without causing the second aircraft to abort. Conversely, another Air Force major command had established a process to monitor the submission of after-action reports by subordinate units. Procedures for the submission of reports were specified in command regulations and were emphasized in operations orders prepared for each individual exercise. In addition, a control center within the command monitored subordinate units’ participation in exercises and operations and followed up to ensure that required reports were submitted. The Navy’s lessons learned program does not collect all of the significant lessons learned information that is recorded during fleet exercises. Units record observations about their performance during these exercises in lessons learned reports. Also, ship commanding officers, training instructors, and key exercise leaders discuss units’ performance in after-action debriefings. Lessons learned observations are submitted through the chain of command to the Navy’s lessons learned database. 
However, reports of after-action debriefings are not entered into the lessons learned database. Reports of after-action debriefings document units’ performance in areas such as air and surface warfare and weapons usage. Naval fleet personnel told us that because these are performance debriefings, the results are not entered into the lessons learned database. Debrief participants also can pass on the performance results to others under their command.

Army lessons learned information is collected from a variety of sources, including after-action reports and evaluations from independent observers. The information is widely distributed and is readily available to Army units. Additionally, information from training center exercises is published periodically by the Center for Army Lessons Learned to keep units informed of observations and trends. For example, in March 1994, the Center reported the following observations on tactical deployments: Units routinely deploy without necessary intelligence field manuals and adequate quantities of materials needed to secure tactical equipment during shipping. Leaders do not plan for medical treatment and evacuation in all phases of a deployment. Communications capability is not introduced early enough in the deployment to ensure mission success.

The regional commanders in chief do not use independent observers to document units’ performance during joint exercises. The units are required, however, to submit after-action reports to the Joint Staff upon completion of an exercise. To ensure that it receives after-action reports for all major exercises, the Joint Staff tracks the reports received against quarterly schedules of military exercises to identify any missing reports.

Until the services take steps to ensure that all significant lessons learned information is included in their databases, they will not realize the full potential of these assessments to make necessary changes in doctrine, tactics, training, or materiel. 
As a result, units are likely to continue to miss an opportunity to avoid repeating past mistakes, many of which could have serious consequences on a real battlefield. We recommend that the Secretary of Defense direct the Secretaries of the Navy and the Air Force to establish controls to ensure that all significant lessons learned information collected from combat training centers, fleet exercises, and other major training exercises is recorded in the services’ lessons learned databases.

DOD generally agreed with our recommendation. According to DOD, the Marine Corps plans to collect trend information on unit performance at its combat training center and will include this information in its lessons learned database. The Navy has taken steps through such means as fleet operational orders, awareness messages, and increased training to ensure that lessons learned from major naval exercises are recorded in its database. DOD said that the Air Force records most lessons learned from combat training center exercises in a lessons learned database that is maintained at the combat training center. Because this database is not currently available to other Air Force major commands, we believe that the Air Force is missing an important opportunity to share these lessons learned with system users throughout the Air Force.

The services and the regional commanders in chief continue to repeat mistakes during military operations and major training exercises. For example, a recent Air Force lessons learned report stated that almost every problem occurring during Operation Restore Hope had been documented in a lessons learned report on previous exercises or contingencies. However, the Army is the only service that analyzes lessons learned information to identify recurring weaknesses. As a result, the other services and the Joint Staff cannot be assured that significant problems are identified and receive top-level management attention. 
Two key steps must be completed if the Marine Corps, the Air Force, the Navy, and the Joint Staff are to identify and correct their most significant recurring deficiencies. The first step is to perform trend analyses of lessons learned information, which can highlight recurring weaknesses over a period of time. The second step is to rank the various problems on the basis of their significance. Completing these steps would allow the services and the Joint Staff to focus on correcting the highest priority problems. Over a number of years, lessons learned reports in each of the services and the Joint Staff have shown that many mistakes continue to be repeated in training exercises and military operations. These mistakes fall into different categories, including communications, fratricide, battlefield planning, reconnaissance, maneuver, combat engineering, chemical threat, fire support, and combat service support. In 1993, the Marine Corps published lessons learned information that summarized about 9 years of unit performance during combined arms exercises. This information disclosed recurring deficiencies, including (1) indirect fire being placed on or behind friendly forces; (2) inadequacies in several phases of obstacle breaching operations; (3) ineffective preparation of engagement areas, which is critical to stopping or slowing an enemy advance; and (4) units’ inability to integrate supporting arms and maneuver to destroy the enemy. A lessons learned report from February 1990 also documented numerous training deficiencies that had been cited in previous reports. Significant deficiencies included ones in the areas of surveillance, target acquisition, and reconnaissance; camouflage/concealment; nuclear, biological, and chemical defense; electronic countermeasures; and communications. Many of these deficiencies are still occurring today in combat training center exercises. 
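The two key steps described above, developing trends from lessons learned records and then ranking problems by significance, reduce to a simple tally-and-sort. A minimal sketch follows; the category names, years, and counts are invented purely for illustration:

```python
from collections import Counter

# Hypothetical lessons learned records: (year, deficiency category).
# These entries are illustrative, not data from the report.
observations = [
    (1990, "communications"), (1991, "communications"),
    (1991, "fratricide"),     (1992, "breaching"),
    (1992, "communications"), (1993, "fratricide"),
    (1993, "communications"), (1994, "breaching"),
]

def rank_recurring_deficiencies(obs):
    """Step 1: tally how often each deficiency category recurs.
    Step 2: rank categories by frequency so the most persistent
    problems surface first for corrective action."""
    return Counter(category for _, category in obs).most_common()

for category, count in rank_recurring_deficiencies(observations):
    print(f"{category}: reported {count} times")
```

Even this trivial tally makes the recurring problem areas stand out, which is the point of the two steps: until the counts are compiled and sorted, every deficiency looks equally urgent.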
A major recurring weakness that has been reported over a number of years is the inadequate communication of air tasking orders. These orders provide information to aircraft in the same or other military services and are necessary to coordinate a specific operational mission within an area of operation. According to lessons learned reports, inadequate communication of air tasking orders could result in serious consequences, including friendly fire losses. Problems relating to air tasking orders were most recently reported during 1994 operations in Haiti, yet these problems were identified 4 years earlier during the Gulf War. A 1990-91 lessons learned report found that air tasking orders were inadequately transmitted among the services during Operations Desert Shield and Desert Storm. Likewise, a 1992 report said that the Air Force used a different communications system than the other services and lacked a standardized format for air tasking orders.

Other recurring deficiencies were illustrated in a 1993 report on lessons learned from Operation Restore Hope in Somalia. For example, the Air Force deployed an airlift communications system, which was to assist in air mobility operations, without qualified operators and training guides. The lack of trained, qualified operators resulted in delays in communicating mission-essential information and hampered the use of an important piece of communications equipment in an area where communications were important but limited. The report stated that this deficiency had occurred in 1990 and 1991 during Operations Desert Shield and Desert Storm.

“Almost every problem occurring during Operation Restore Hope has already been documented in JULLS as a result of previous exercises and contingencies. There appears to be a continuing trend of failure to fix problems already known to exist. 
We end up paying again to achieve the same undesirable results.” The Navy does not enter lessons learned information into its database if the lessons are similar to those that were previously reported and recorded. Therefore, Navy officials told us that, although it is difficult to identify recurring deficiencies through the lessons learned database, such problems do exist. The database showed that, at least as far back as 1989, (1) friendly force identification codes were not used properly and (2) several different air tasking order problems were experienced, including orders that contained inaccuracies regarding the capabilities of carriers and airwings, demonstrated improper planning to carry out air strikes, and went to several organizations that did not have a need for the orders. Since the establishment of the Army’s combat training centers in the 1980s, the Center for Army Lessons Learned has documented a number of recurring deficiencies in units’ performance. For example, a 1992 lessons learned report stated that the following problems continued to be repeated: (1) direct fire was not synchronized effectively; (2) reconnaissance and surveillance plans were not well coordinated, managed, or focused; (3) communications with higher headquarters were not properly planned and executed; (4) fire support plans did not support the scheme of maneuver; and (5) operations in a chemical environment were not satisfactory. Many of these same problems continue today. Regional commanders in chief have reported recurring deficiencies during training exercises and operations, including Just Cause in Panama (1989), Desert Shield and Desert Storm, and Restore Hope. According to a 1991 lessons learned report, one recurring problem was the lack of adequate training on the joint transportation planning and management system. This system schedules and manages strategic air and sea movements during peacetime and wartime. 
Joint Staff officials said that this problem had come up during almost every exercise since the early 1980s. Another recurring problem has been inadequate training of personnel involved in the formation and operation of a Joint Task Force headquarters. For example, the task force headquarters for Operation Restore Hope, which included personnel from all services, was formed on an ad hoc basis after deployment. According to a lessons learned report from the operation, this situation resulted in inefficient planning, confusion, and a less-than-optimal deployment. Similar problems with this issue have been reported since the late 1980s. The Marine Corps, the Air Force, the Navy, and the Joint Staff do not analyze their lessons learned information to identify trends in performance weaknesses. Accordingly, it is difficult for them to differentiate the importance of correcting some deficiencies rather than others. On the other hand, the Army analyzes lessons learned information over time, which enables it to highlight the most pressing problem areas and focus on the highest priority areas. Lessons learned program guidance does not require the Marine Corps, the Air Force, the Navy, or the Joint Staff to perform trend analyses. However, service and Joint Staff officials told us they believed trend analyses would be useful to them. Marine Corps operations personnel in several units told us that trend analyses could highlight recurring deficiencies and that knowledge of these deficiencies would be especially useful in preparing for major training exercises because their units would have a better opportunity to overcome past mistakes. In 1993, the Marine Corps Combat Development Command developed a proposal to examine recurring operational and training deficiencies. Under the proposal, the Command’s Studies and Analysis Group would develop trends based on Marine Corps units’ performance over a number of exercises. 
The group could then identify recurring deficiencies and recommend corrective actions. The proposal was approved by the Commander of the Combat Development Command in November 1993 but has not been implemented. As of May 1995, a group analyst said that the delay in implementing the proposal was due to resource limitations and the inability to obtain more in-depth training data from the Air Ground Combat Center. Air Force regulations do not require the major commands to develop trend analyses of lessons learned information. Nevertheless, the lessons learned program director at Air Force headquarters told us that one of the program’s most noted weaknesses was the lack of assigning priorities to performance deficiencies. According to this official, since trend analyses and prioritization are not being accomplished at the Air Force’s major commands, it is difficult for decisionmakers to differentiate the significance of problem areas. The Navy’s lessons learned database does not contain the information necessary to perform trend analyses because the system screens out duplicate or similar deficiencies. Navy fleet operations personnel told us that they seldom used lessons learned information because of the high volume of unprioritized information in the database and the time constraints associated with their day-to-day operations. Even though Joint Staff program guidance does not require trend analyses of lessons learned information, program officials said that information was available in their database to perform such analyses. However, they said that a shortage of resources precluded them from routinely analyzing the information. 
Although Navy, Marine Corps, Air Force, and Joint Staff officials acknowledged that trend information was not routinely analyzed to highlight recurring deficiencies, they said that officials in leadership positions gained an awareness of the most significant problems through informal means such as conferences, meetings, and exercise planning discussions. In our view, the informal approach has not worked well, as recurring deficiencies have not been resolved. Moreover, reliance on an informal approach to problem solving does not provide for program continuity as military personnel are subject to periodic reassignment. The Center for Army Lessons Learned is responsible for identifying systemic training strengths and weaknesses of units that participate in major operations and exercises. After documenting lessons learned, the Center consolidates the information and analyzes trends and deficiencies. Under an ongoing Army proposal, these performance trends are expected to provide the basis for developing a priority issue list that ranks the importance of problems affecting war-fighting capabilities. According to an Army official, the priority issue list would enable Army leaders to establish clear priorities for those problems it deems most serious, identify the participants involved and establish accountability, and estimate the resources required to resolve problems. The Army expects to have this process in place by September 1995. The Army has recently made excellent use of trend analyses. For example, the Army analyzed the extent of friendly fire incidents at its National Training Center from 1990 to 1993 and developed a corrective action plan to address this serious deficiency. Recent data shows that friendly fire-related incidents at the Center have decreased over 50 percent since 1990. Military units continue to experience recurring deficiencies in exercises and operations, even though the services and the Joint Staff have lessons learned programs. 
This situation is unlikely to change markedly until the services and the Joint Staff begin to make better use of the wealth of lessons learned information contained in their databases. As it is now, the lessons are of limited value to military trainers because they provide no systematic insight into recurring deficiencies. We recommend that the Secretary of Defense direct the Secretaries of the Navy and the Air Force and the Chairman of the Joint Chiefs of Staff to (1) analyze lessons learned information so that trend data can be developed to identify recurring deficiencies and (2) prioritize these recurring deficiencies so that limited resources can be concentrated on the most pressing problems. To facilitate trend analyses in the Navy, we recommend that the Secretary of Defense direct the Secretary of the Navy to modify the Navy’s lessons learned program to retain all significant lessons learned from operations and exercises. DOD agreed with our recommendations as they applied to the Navy. It said that the Navy plans to implement a process, beginning in the first quarter of fiscal year 1996, to capture and retain all significant lessons learned from operations and exercises. Moreover, the Navy will analyze and identify trends in performance weaknesses through its newly established remedial action program. However, DOD said that trend analyses in the Air Force were unnecessary because the Air Force acted on deficiencies as they were identified. While this may be true for deficiencies recorded in the lessons learned database maintained at Air Force headquarters, DOD officials acknowledged this was not the case for the lessons learned that are recorded by the major commands. Until the Air Force undertakes trend analyses that systematically identify and highlight recurring deficiencies in the major commands, there is no assurance that significant problems will be addressed and corrected. 
DOD said the Joint Staff believes that trend analyses would be worthwhile, but that it is not sufficiently resourced to conduct such analyses at this time. Given the potential value of such analyses, for example, in identifying matters that can make the difference between success and defeat on the battlefield, we believe that this is a matter that the Chairman of the Joint Chiefs of Staff should carefully review. DOD did not agree with our conclusion that the Marine Corps does not analyze lessons learned information. DOD said that the Marine Corps analyzes lessons learned information through its remedial action and combat development processes. However, these processes address only those one-time deficiencies that the Marine Corps selects for remedial action. In the absence of a systematic process to analyze the lessons learned database to identify trends, the Marine Corps may be overlooking deficiencies of a recurring nature that warrant remedial action. The Air Force does not routinely distribute lessons learned information throughout the Air Force. As a result, information from the major commands’ lessons learned databases is not reaching all potential users. The Joint Staff, the Army, the Navy, and the Marine Corps routinely distribute lessons learned information, and their users can access the information as needed. However, most of the services use this information only on a limited basis. The primary reason for this situation is that users lack the training necessary to access the high volume of information in the databases. The Air Force does not disseminate lessons learned information to its units on a routine basis because it does not have a centralized lessons learned program. Also, Air Force units have access only to lessons learned information from their own major command. Therefore, the units cannot benefit from the experiences of other Air Force units. 
Unit personnel told us that Air Force-wide lessons learned information would be beneficial in planning future exercises and operations. Air Force units must specifically request lessons learned information from their major commands. If the information is available, it is sent to the units in the mail. However, units do not frequently request lessons learned information. For example, one major Air Force command maintained over 4,000 lessons learned reports in its database at command headquarters, yet in 1994 it received, on average, only 1 request for information per week from its subordinate units. Command officials told us that in 1993 they had received only about 30 requests for information. The official who managed this lessons learned program acknowledged that the dissemination of information was not very good and needed to be improved. Air Force personnel in one unit stated that their major command’s database was not very useful since it was not accessible to them. Air Force lessons learned officials recognized the limitations of a decentralized lessons learned program, and they were attempting to improve access to program information. As of June 1995, the Air Force was developing a computer network that would provide access to lessons learned information throughout the Air Force. Once this capability is achieved, units within major commands throughout the Air Force should have better access to lessons learned information. One of the major commands that we visited plans to achieve this capability later in 1995. Another major command is testing the network. However, until this network becomes fully operational throughout the Air Force and is proven effective, units will continue to have limited access to important lessons learned information. Navy lessons learned information is available to over 1,000 major and intermediate-level commands, specialized operational units, and individual ships. 
Until recently, Navy organizations had to request that they be included on the lessons learned distribution list to receive such information. As a result, all naval units may not have been receiving the information. In early 1995, the Navy took action to ensure that all commands, units, and ships were receiving lessons learned information. The remaining services and the Joint Staff also provide access to their information. The Marine Corps distributes lessons learned information to over 500 organizations, principally units down through the battalion level. The Army periodically publishes this information in bulletins and newsletters that are sent to each Army specialty school and most other organizations throughout the Army. The Joint Staff routinely distributes lessons learned information to its major command organizations and to the other services such as the Navy, which publishes the Joint Staff database on CD-ROM along with its own lessons learned information. Regardless of the availability and widespread distribution of lessons learned information, most services have used this information only on a limited basis. The principal reason for not making greater use of the information is the lack of training in how to easily access the databases. According to Marine Corps personnel, units do not use lessons learned information because users possess limited training and knowledge on how to access information in the system or how to process available information in a timely manner. For example, a unit representative told us that he had been in a headquarters organization for over 1 year, but knew of no one who had used the lessons learned database to obtain information. An officer from this unit attributed this fact to the users’ unfamiliarity with the information in the database and lack of training on how to use CD-ROM technology. 
Lessons learned officials from the Marine Corps Combat Development Command recognized that users had problems with the CD-ROM technology needed to access the database and took steps in 1994 to expand training in this area. Specifically, these lessons learned officials began to regularly schedule visits to units to provide unit personnel with hands-on training on the operation of the lessons learned database and information on its benefits. The Navy’s lessons learned database contains over 4,000 unprioritized reports. Accordingly, to use the system effectively, users must possess the skills needed to access the information and identify the most pressing problems. Some Navy fleet operations personnel told us that they seldom used lessons learned information because their operating tempo was extremely high and they had not been trained to use the system to quickly access specific lessons learned information. For example, several officers with submarine backgrounds said that they relied on other mechanisms for lessons learned to identify submarine-related lessons. Some Atlantic Fleet staff officers said that they seldom used the Navy’s lessons learned database and felt no need to do so. They relied instead on more ad hoc systems to obtain lessons learned information. They specifically cited Navy message traffic, newsletters, bulletins, and discussions with their counterparts on other ships as sources of information. They also said that the lack of knowledge about the system and how to quickly access information hindered them from using the lessons learned database. The manager of the Navy’s lessons learned database acknowledged that training for fleet personnel in the use of the system could be improved. He cited personnel turnover as a principal cause for some users’ unfamiliarity with the system. Further, he said that this situation was likely to continue until training became widespread. 
One Air Force unit that we visited did not use lessons learned information from its major command’s database because unit personnel did not know that a lessons learned database existed at the major command. It was for this reason that personnel at this unit told us they had never requested any lessons learned information from their major command. At another Air Force unit, personnel were aware that lessons learned information was maintained at their major command; nevertheless, they had used the database very little because they lacked knowledge of the database’s detailed information and because they had no quick, ready mechanism to access or obtain this information. Unit personnel had requested lessons learned information from their major command on several occasions, and it was provided to them through the mail. However, unit officials told us that requesting and obtaining information through the mail was time-consuming. Primary users of Army lessons learned information are the Training and Doctrine Command’s 18 schools, which develop training programs for Army personnel in their military specialties and tactical units. These schools are ultimately responsible for using lessons learned information to modify training and doctrine. Even though officials at several schools told us that they used lessons learned information to develop training plans and to update doctrine, they said that they did not keep track of how training and doctrine were modified based on this information. Likewise, the leadership of several Army units said that they used lessons learned information to prepare for major training events but did not keep track of how this information was used during training. It is clear that the services are not maximizing the potential benefits of lessons learned information. For the most part, the dissemination of lessons learned information by the Joint Staff, the Army, the Navy, and the Marine Corps is adequate. 
The Air Force’s ongoing effort to establish a computer network that will provide access to lessons learned information throughout the Air Force could solve its dissemination problem. However, dissemination of lessons learned information is only the first step necessary to facilitate units’ use of the information. To better facilitate the use of lessons learned information, Air Force and Navy personnel need the skills necessary to access lessons in their services’ databases. The Marine Corps’ ongoing effort to provide unit personnel with the skills needed to access its lessons learned database is a step in the right direction. We recommend that the Secretary of Defense direct the Secretaries of the Navy and the Air Force to provide training to key personnel in the use of lessons learned information and the technology for accessing and reviewing this information. DOD agreed with our recommendation. DOD said that the Navy had selected a more user-friendly computer program to make the Navy lessons learned database more accessible to personnel and was working to incorporate lessons learned system training into various officer and selected enlisted schools. Also, DOD said that the Air Force is planning steps to ensure that its major commands provide training in the use of the lessons learned system. Moreover, the Air Force expects to improve the distribution of lessons learned information by implementing a wide area network throughout its major commands by the end of fiscal year 1996. Effective follow-up and validation are important parts of a lessons learned program since they are the only means for ensuring that problems have been corrected and are brought to closure. However, the Navy only recently implemented a follow-up process, and the Army does not expect to have a process in place to address training and doctrinal deficiencies until September 1995. 
The Marine Corps, the Joint Staff, and one of the Air Force commands that we visited seemed to have visibility over the status of corrective actions. Even though most of the services and the Joint Staff have requirements to validate corrective actions, not all of them have fully implemented procedures for this purpose. An important part of a lessons learned program is a remedial action process to track and follow up on actions taken to address problems. The remedial action process generally involves identifying problems, assigning responsibility for the problems, and monitoring corrective actions taken. However, one of the services does not have a remedial action process in place to address training and doctrinal issues, and another service only recently established one. The other services’ processes vary in effectiveness. Although the Marine Corps’ lessons learned program was established in 1989, the remedial action element of the program did not become operational until 1991. A Marine Corps lessons learned program official said that corrective actions are monitored primarily through the combat development process, which is a formal process that identifies battlefield requirements and develops combat capabilities. On the basis of our review of a sample of remedial action items, we found that the Marine Corps was able to successfully track the status of corrective actions through the combat development process. The Air Force has directed each of its major commands to establish a remedial action element for its lessons learned program. However, the quality of remedial action processes in place at the major commands varies. For example, one of the commands we visited had only recently begun to systematically track corrective actions taken to address problem areas. A command official told us that before October 1994 the status of corrective actions could not be readily determined. 
According to the official, functional offices within the command were tasked to develop solutions to problems. However, the command had no systematic tracking system to determine the status of corrective actions. To correct this situation and improve its ability to track corrective actions, the command developed a spreadsheet to document the status of corrective actions. In contrast, another major command we visited had implemented procedures to assign responsibility for solutions and systematically track the status of corrective actions. The office responsible for solving a problem is required to provide periodic status reports to the major command. On the basis of our review of a sample of lessons learned reports, we found that the command had visibility over the development and implementation of corrective actions. The Joint Staff employs a similar remedial action process to that of the Air Force major command discussed previously. It assigns responsibility for developing solutions to problems of a joint nature to its own offices, or those within the services. These offices periodically report their progress to the Joint Staff, and the status of corrective actions is recorded as part of Joint Staff lessons learned reports. On the basis of our review of a sample of lessons learned reports, we found the Joint Staff had visibility over the status of corrective actions. The Navy did not establish a remedial action process for its lessons learned program until January 1995. Before that time, the lessons learned program was limited to providing information on operational issues for use by fleet personnel. As of May 1995, however, the Navy had not addressed any deficiencies through its remedial action process. In September 1993, the Army’s Training and Doctrine Command began developing a remedial action process that would address lessons learned pertaining to training and doctrine deficiencies that it deemed most critical. 
Under this process, the Army plans to establish accountability for problem resolution and monitor progress. The Army expects the process to be in place by September 1995. Validation of corrective actions (for example, testing the effectiveness of actions taken to correct deficiencies) can ensure that recurring deficiencies have been resolved and brought to closure. Validation can be accomplished by evaluating the effectiveness of potential solutions during a training exercise. However, the Navy does not require that its lessons learned program contain a validation element. The Army also does not formally validate solutions to deficiencies. However, the Army’s proposed enhancements to its lessons learned program would recognize the benefits of validation. Specifically, the Army plans to include a validation element in its remedial action process and test solutions to deficiencies through training exercises. As stated earlier, the Army expects the remedial action process to become operational by September 1995. In contrast, the Marine Corps and the Air Force require validation. The Marine Corps requires that corrective actions be validated through its combat development process. Air Force guidance requires major commands to incorporate a validation element in their lessons learned program. However, only two of the four major commands we contacted had done so. Joint Staff guidance states that validation is necessary to ensure the effectiveness of corrective actions taken to resolve problems. However, officials said that it is left to regional commanders in chief to determine whether corrective actions will be tested in training exercises. The Joint Staff permits open items to be closed by means other than testing, such as a determination by senior officials that all corrective actions were completed and that the actions taken had solved the problem. 
Joint Staff officials said that insufficient staffing was the principal reason for not taking a stronger oversight role in the validation process. Without adequate follow-up and validation in remedial action processes, lessons learned programs can only be used to identify and distribute information about problems rather than to track and validate that solutions work. Until the services and the Joint Staff establish effective follow-up and validation procedures in their lessons learned programs, there will be little assurance that problems have been brought to closure and the possibility for repeating past mistakes will remain. We recommend that the Secretary of Defense direct the Secretary of the Navy to incorporate a validation process into the Navy’s lessons learned program, the Secretary of the Air Force to take actions to ensure that each of the major commands complies with existing program guidance calling for the establishment of a validation process for their lessons learned programs, and the regional commanders in chief to ensure that solutions to deficiencies are tested in joint exercises or, if this is not appropriate, validated through alternative means. DOD agreed with our first two recommendations. DOD said that, as part of its lessons learned program, the Navy had established a remedial action program working group that will validate lessons learned. It also said that the Air Force would take action to ensure that the major commands establish a validation process for their lessons learned programs. Specifically, DOD said that Air Force headquarters will increase its oversight of lessons learned programs by monitoring the minutes of remedial action plan meetings conducted by the major commands and by assessing the commands’ compliance with program guidance. A draft of this report recommended that the regional commanders in chief establish formal procedures to ensure that solutions to deficiencies are tested and validated. 
DOD said that this recommendation was unnecessary because current program guidance contained formal procedures to test corrective actions through the Joint Staff’s remedial action program. Although we agree that formal procedures for testing already exist, we found that commanders in chief seldom tested whether prior problems had been corrected in their exercises because (1) they were not required to do so and (2) they had insufficient time to analyze past problems before planning future exercises. We believe that testing solutions to problem areas in exercises is a vital part of assessing the capabilities of the regional commanders in chief to support national security strategies. Further, the failure to conduct such testing, when appropriate, reduces the effectiveness of collecting data on problems and, in our opinion, is a major reason contributing to recurring problems. Accordingly, we modified our recommendation to stress the importance of testing remedial actions and to recognize that, in some instances, it may be appropriate to close remedial action projects if their effectiveness can be demonstrated through alternative means.
GAO reviewed the effectiveness of the military's lessons learned programs in: (1) collecting all significant lessons learned information; (2) analyzing the information to identify recurring weaknesses; (3) disseminating the information to all potential users; and (4) implementing corrective actions and validating results. GAO found that the Marine Corps, the Air Force, and the Navy do not: (1) include all significant information from training exercises and operations in their lessons learned programs; and (2) analyze their lessons learned information to identify trends in performance weaknesses. In addition, GAO found that: (1) Marine Corps lessons learned data continue to highlight recurring deficiencies during major combined arms exercises in such areas as maneuver, fire support, engineering, chemical threat, intelligence, communications, and electronic countermeasures; (2) although the dissemination of lessons learned information is adequate, the Air Force does not make its information readily available to all potential users; (3) regardless of the availability and widespread distribution of lessons learned information, most services use this information on a limited basis because they lack the training in how to access the databases; and (4) the Army has made excellent use of trend analysis to develop a corrective action plan to address the highest priority areas.
Federal tax information (FTI)—tax returns and return information (as defined in fig. 1 below)—is kept confidential under Section 6103 of the IRC except as specifically authorized by law. Information in a form that cannot be associated with or otherwise identify, directly or indirectly, a particular taxpayer is not FTI. Section 6103 specifies what FTI can be disclosed, to whom, and for what purpose. Prior to passage of Section 6103 amendments in the Tax Reform Act of 1976, the executive branch had discretion over decisions to share taxpayer information. The 1976 amendments were written to address concerns about the potential for excessive dissemination and misuse of tax information. In general, FTI is collected and developed to administer tax law. This information, however, can be useful for other purposes, such as to detect possible noncompliance with nontax criminal laws or administer other kinds of government programs. In 1976 and since, Congress has granted some statutory exceptions to confidentiality and repealed or modified certain existing exceptions. Congress also has considered but not enacted other proposals. Congress has generally attempted to balance the expectation of taxpayer privacy with the competing policy goals of efficient use of government resources, the public health and welfare, and law enforcement. Exceptions to the general rule of confidentiality can be very narrowly prescribed. For example, for the purpose of determining creditworthiness for a federal loan, the law authorizes the IRS to disclose to a federal agency whether or not a loan applicant has a delinquent tax debt, but no more information than that. The law allows broader disclosures of FTI to state tax agencies for the administration of state taxes. 
Additionally, Section 6103 authorizes disclosures for nontax administration purposes to specific entities such as certain congressional committees or the President or for specific purposes such as statistical use or determining eligibility for federal programs. Appendix I has more information on the current exceptions to FTI confidentiality, including safeguard and reporting requirements. The last comprehensive reviews of the Section 6103 framework were done more than a decade ago when the IRS Restructuring and Reform Act of 1998 required the Joint Committee on Taxation (JCT) and the Secretary of the Department of the Treasury (Treasury) to report to Congress on the scope and use of Section 6103 provisions regarding taxpayer confidentiality. In their reports, JCT and Treasury called for strict scrutiny of proposed exceptions and the circumstances when such exceptions should be made. To identify what criteria or other policy factors Congress might wish to consider when deciding whether to grant exceptions to the general rule of tax information confidentiality, we reviewed Treasury and JCT reports; congressional reports and hearing records dated from 1999 through April 2011 with references to Section 6103, including proposals enacted and not enacted; available research on the effect of tax-information confidentiality; Treasury and IRS criteria used in assessing proposals to disclose tax information for nontax purposes; the Fair Information Practices; and GAO internal guidance for recommendations on using tax data for nontax purposes. The Fair Information Practices were a key basis for this guide. First proposed in 1973 by a U.S. government advisory committee, the Fair Information Practices are now widely accepted and, with some variation, the basis of privacy laws and related policies in the United States and many countries. They are also reflected in a variety of federal-agency policy statements on information privacy. 
The Fair Information Practices are not precise legal requirements. Rather, they provide a framework of principles for balancing the need for privacy with other public policy interests, such as national security, law enforcement, and administrative efficiency. Included in the practices is the principle that the collection and use of data should be limited to specific purposes and that individuals should have ready means of learning about the collection and use of information about them. Appendix II has more information on the Fair Information Practices. (IRS’s 1994 memorandum, republished in appendix B of Treasury’s 2000 report, elaborates on the Treasury and IRS criteria noted above.) We also interviewed parties, including private sector business concerns; we selected them to get views from parties that represent different perspectives and with expertise in tax administration, privacy, and information issues. We provided a draft of this guide to officials from Treasury and IRS for comment regarding facts and incorporated their comments as appropriate. We conducted our work from December 2010 to December 2011 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions. This guide was developed to assist and inform decision making regarding proposed statutory exceptions to tax information confidentiality or modifications to existing exceptions. It is intended to help policymakers think about important factors and competing interests rather than to provide a single right answer or optimal decision. The guide consists of two sections of key questions for evaluating Section 6103 exception proposals, as shown in figure 2. 
The first section includes five threshold questions for screening proposals to address basic issues, such as whether they are adequately developed and tailored to minimize disclosure of confidential tax information. Under this framework, all of the threshold questions would be resolved with a "yes" answer before further consideration of the proposal. The second section includes six policy-factor questions that explore the proposal's expected benefits and costs, privacy effects and safeguards, and effects on the tax system. Generally, the policy-factor questions deal with issues of magnitude, or "how much." These questions help identify trade-offs to consider among policy factors as well as potential risks to mitigate. In summary, the questions and their answers are intended to support a determination of whether approving the confidentiality exception is the best alternative. The order of the threshold and policy-factor questions reflects a logical sequence of important issues that should all be given careful consideration. Appendix III is a full list of the threshold and policy-factor questions and subquestions discussed throughout the body of the guide. The guide is intended both for evaluating FTI disclosure proposals having an expected tax benefit or tax-specific use and for proposals with other types of benefits or uses not directly related to tax administration. Whether proposed uses, particularly disclosures for uses beyond tax administration, are seen as appropriate depends in part on how one views the role of tax information held by IRS. Some have advocated that FTI should be used only for tax administration and not disclosed outside of IRS. Others believe that FTI should be used as any other government information resource to achieve non-tax-related policy ends. In between these two positions is the view that FTI disclosures are acceptable in some cases but must be well justified.
Regardless of one's view, the questions in this guide are designed to support evaluating the taxpayer privacy rights, tax-system effects, and other key issues raised by proposals to disclose FTI. This guide may also assist Congress with oversight of existing exceptions to confidentiality under Section 6103. Evaluating a proposal to use tax information will likely involve considerable judgment because objective information about likely effects may not be available. Therefore, when developing and assessing an FTI disclosure proposal, it would be beneficial to build in mechanisms, such as performance reporting requirements, to assess the exception after it is implemented. It would, of course, be appropriate to weigh the cost of complying with any such reporting requirements against the expected value of the reports. Threshold Question 1: Does the proposal have a clear purpose and description of how the tax information will be used? The first step in answering the threshold and policy-factor questions is to have a clear description of the basic elements of the exception proposal and its intended purpose. The description should detail the following: What specific information will be disclosed? To whom will the information be disclosed? What category or categories of taxpayers will be affected? How will the information be used? What purpose will be achieved? Threshold Question 2: Does the proposal consider reasonable alternatives? Any disclosure of tax information comes with costs and creates risks. These are discussed in detail in the policy-factor questions below. Any consideration of a proposal, however, should be done in light of reasonable alternatives. Assessing a proposal's alternatives could involve a separate evaluation of the alternatives using this guide, though much of this framework is specific to FTI disclosures.
One alternative for some proposals may be to obtain the information through taxpayer consent, as permitted under Section 6103(c). For example, when applying for home loans, taxpayers often consent to have income information from their tax returns disclosed to a mortgage company for verification. These disclosures are not subject to the safeguards otherwise required by Section 6103 as a condition for entities to receive tax information from IRS. However, other restrictions on the information may apply, such as the Privacy Act in the case of federal agencies receiving tax information pursuant to taxpayer consent. The Section 6103 safeguard requirements are discussed later in the safeguards policy-factor question and in appendix I. Threshold Question 3: Is the tax information accurate, complete, and current enough for the stated purpose? Any proposal to disclose and use FTI depends on the information being accurate and complete enough to use and available when it is needed. One challenge related to disclosure proposals stems from the errors that tax information can contain. Information reported to IRS by tax filers and third parties is not always accurate. IRS may also introduce errors when processing the information, for example when transcribing tax information from paper returns. IRS finds and corrects some errors in the course of its return processing and examination processes, but not all of them. Consideration should also be given to whether tax information is sufficiently complete for the stated purpose. Certain groups are underrepresented in IRS tax data because not everyone is required to file income-tax returns. For tax year 2010, single people under the age of 65 earning less than $9,350, and married people both under the age of 65 jointly earning less than $18,700, were not required to file tax returns. Also, some individuals and businesses that are supposed to file returns fail to do so. 
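To make the completeness point concrete, the tax year 2010 filing thresholds cited above can be sketched as a simple check. This is a hypothetical illustration only; actual filing requirements depend on additional factors (age 65 or older, dependency status, self-employment income, and others) that this sketch deliberately ignores.

```python
# Illustrative check of the tax year 2010 filing thresholds described in the
# text: single filers under 65 earning less than $9,350 and married couples
# (both under 65) jointly earning less than $18,700 were not required to file.
# Function name and interface are assumptions made for this sketch.

def required_to_file_2010(gross_income: float, married_joint: bool) -> bool:
    """Assumes the single filer, or both spouses, are under age 65."""
    threshold = 18_700 if married_joint else 9_350
    return gross_income >= threshold

# People below the threshold can legally be absent from IRS filing data.
assert not required_to_file_2010(9_000, married_joint=False)
assert required_to_file_2010(9_350, married_joint=False)
assert not required_to_file_2010(18_000, married_joint=True)
```

The point of the sketch is simply that any nontax use of FTI covering low-income populations should account for the groups such thresholds leave out of the data.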
Additionally, tax information should be sufficiently current for the stated purpose when first used and for any later uses. Some FTI is only available for use outside of IRS after a lag time of more than a year. Taxpayers' tax returns and information from third parties are generally provided to the IRS after a tax year, and the information provided may be as much as 17 months old. IRS processing then takes more time, including time to go back to tax filers and third parties to correct errors that IRS catches during processing. Taxpayers can also obtain deadline extensions of up to 6 months. For some proposed uses of FTI, these time lags could make FTI insufficiently current. A proposal's limits on how long information can be used—to potentially include controls such as retention limits—may address any concerns about disclosed information becoming outdated for later use. Threshold Question 4: Is the tax information to be disclosed relevant and the minimum needed to achieve the stated purpose? A key consideration of any proposal is whether the information to be shared is clearly relevant to the proposed use. Only truly needed information should be disclosed, so a proposal should be tailored to ensure that each discrete element of information to be disclosed is relevant and necessary to accomplish the stated purpose. Minimizing the information disclosed to only that which is necessary can yield desirable results across several of the policy factors addressed elsewhere in this document, including reducing the costs of the disclosure (policy-factor question 2), preventing an unwarranted invasion of privacy (policy-factor question 3), and reducing the risk of unauthorized disclosure or use (policy-factor question 4). Such consideration is evident in IRS's implementation of the confidentiality exception that provides for public disclosure of certain information about payment agreements between IRS and persons settling a tax debt for less than the full amount.
While the name of the person entering into the agreement is disclosed, the Internal Revenue Manual requires that tax identification numbers, addresses, and certain other information be redacted. Threshold Question 5: Does the proposal address any other statutory, regulatory, or logistical issues necessary for its implementation? Will the proposal require additional legislation besides modifying section 6103 to accomplish the desired result? Does the proposal conflict with existing regulations, rules, or statutes other than Section 6103? How will any logistical or practical barriers or hindrances to implementing the proposed disclosure and use of information for the stated purpose be resolved? Changes to Section 6103 protections may not be the only changes needed to implement the proposed use of FTI. Other needed changes can be legal or logistical and may cause the proposal to be unnecessarily difficult, costly, or even impossible to implement. For example, other statutes, regulations, or practices may restrict or preclude the disclosure, access, and use of FTI for the stated purpose. Furthermore, agencies may not have systems or staffing in place to make use of the information. Also, agency practices might preclude the specified use. For example, if an agency uses contractors to do program work that would require handling of the disclosed tax information, then the proposal would have to address not only disclosure to agency employees but also to contractors. Identifying all needed statutory, work-process, and information-flow changes can help assure that all the steps necessary to implement the proposal are clearly spelled out. After an FTI disclosure proposal has been screened with the threshold questions (and each of the thresholds has been met), the proposal’s costs and benefits can be systematically assessed using the policy-factor questions below. Although they are discussed separately, policy questions may be interrelated. 
For example, a proposal may mean that IRS has to devote more resources to oversee FTI safeguards, which could potentially affect other aspects of tax administration (policy-factor question 6). Also, the significance of each issue discussed below likely will vary for any given FTI disclosure proposal. Policy-Factor Question 1: What are the expected benefits of the proposal to disclose tax information? What are the estimated financial benefits, if any, to be achieved by using tax data? What are the nonfinancial benefits that are expected, if any? A proposal could involve anticipated financial benefits such as saving money by reducing improper payments or making better decisions about government-backed loans. A proposal could also have nonfinancial benefits, such as improving the accuracy or reliability of government statistical information. Some proposals may include a combination of financial and nonfinancial benefits. For example, a previous FTI disclosure proposal involved disclosing FTI to the Department of Education to streamline the application process for federal financial aid for postsecondary education. This proposal was intended to reduce the Department of Education’s application-processing burden and improper payments related to inaccurate income reporting on the financial-aid application. The benefits were also expected to extend to families and schools that participated in the financial-aid programs by simplifying the application process. To the extent possible, a proposal should specify and quantify its expected benefits. Benefits may include positive effects on voluntary compliance or tax administration, which are discussed specifically in questions five and six below. Policy-Factor Question 2: What are the expected costs of obtaining and using the tax information to be disclosed? What are the estimated costs for IRS to provide the tax data? What are the estimated costs to the entity receiving the information? 
What are the expected costs for others affected by the tax-disclosure proposal? Policymakers need to be able to consider all of the expected financial costs associated with the proposed use of FTI. Such costs would likely include costs to IRS, such as those for compiling and transmitting the data, as well as for accounting for the disclosures and providing oversight to ensure that the recipient of the data has the necessary safeguards in place to protect taxpayer information, ensure confidentiality, and prevent misuse. Section 6103 outlines the recordkeeping, safeguarding, reporting, and IRS oversight requirements that are conditions for receiving returns and return information. Costs to IRS would also depend on whether IRS’s current processes and information systems are capable of providing the requested information or if changes are needed to implement the proposal. For the entity receiving the information, a key consideration is the cost of establishing and maintaining the previously mentioned safeguards—a condition for any entity to receive tax information. To the extent that any entities besides IRS and the recipient of the FTI are involved, costs to those entities should also be addressed. For example, if the proposal involves changes to third-party reporting by financial institutions, the cost of meeting the new requirements should be a part of the benefit/cost discussion. Other costs may include negative effects on voluntary compliance or tax administration, which are discussed specifically in questions five and six below. The benefits of privacy may be less tangible and immediate than the benefits of disclosing information to support some other purposes such as national security or law enforcement. As a result, in some cases, privacy may be at an inherent disadvantage when decision makers weigh privacy against other interests. Therefore, focused, systematic consideration of privacy is critical in assessing a proposal to disclose tax information. 
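The financial elements raised in policy-factor questions 1 and 2 can be tallied in a simple sketch before weighing less tangible factors such as privacy. The function and line items below are hypothetical illustrations for discussion, not figures or categories prescribed by the guide.

```python
# Hypothetical tally of estimated financial benefits (question 1) against the
# cost categories named in question 2: costs to IRS, to the receiving entity,
# and to affected third parties. All names and amounts are illustrative.

def net_financial_effect(benefits: dict, costs: dict) -> float:
    """Estimated financial benefits minus all estimated costs."""
    return sum(benefits.values()) - sum(costs.values())

benefits = {"reduced_improper_payments": 5_000_000}
costs = {
    "irs_extract_and_transmit": 400_000,   # compiling and transmitting data
    "irs_safeguard_oversight": 250_000,    # reviewing recipient safeguards
    "recipient_safeguards": 900_000,       # secure storage, access controls
    "third_party_reporting": 150_000,      # e.g., new reporting burdens
}
assert net_financial_effect(benefits, costs) == 3_300_000
```

A positive net figure would not by itself justify a disclosure; the nonfinancial benefits, privacy effects, and tax-system effects discussed in the remaining questions still need separate consideration.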
Safeguarding the disclosed FTI is an important aspect of protecting privacy. Policy-Factor Question 3: What is the potential effect on privacy? To what extent will the proposal adversely affect taxpayer privacy? Is the use of the information transparent and limited? Will sufficient notice and control be provided to individuals? A key consideration for any FTI disclosure proposal is how much it will adversely affect privacy. Decision makers should know how many taxpayers will be involved, how much and what type of information will be disclosed, how sensitive the information is, who outside of IRS is going to see that information, and the extent to which the disclosure and use of the information may adversely affect people's privacy or other interests. Making a determination about the extent of an adverse impact involves assessing how well a proposal conforms to the Fair Information Practices. As addressed in threshold questions 1 and 4, such an assessment is needed to ensure that the proposal has a clearly specified purpose, minimizes the amount of information to be disclosed, and limits use of the disclosed information to the specified purpose. The Fair Information Practices state that the public should be informed about privacy policies and practices and that individuals should have a ready means of learning about the use of their personal information. This is particularly important when the government is involved. The Fair Information Practices say it is important to ensure that information collected for one government function is not used indiscriminately for other, unrelated functions. Furthermore, the practices say that collection of personal information should be performed, where appropriate, with the knowledge or consent of the individual. Additionally, the practices say that it is important that there be transparency about how information is protected and that limits are placed on what information the government maintains.
Because tax information is generally recognized as being collected and developed to administer tax laws, its use for other purposes needs to be transparent. It should be clear what the government is doing with FTI and what limits will be placed on how the receiving entity may use the information. It is also important that taxpayers receive notice about the use of their information. Therefore, decision makers should determine if the proposal does these things adequately. On the basis of the practices, specific questions to consider, where appropriate, include the following: Will taxpayers be given sufficient, timely notice of the FTI disclosure and use? Will taxpayers be given the opportunity to disallow the disclosure? Will sufficient procedures be in place to give taxpayers access to the disclosed information, opportunities to correct any inaccurate information, and notification of access and correction procedures? Will sufficient notice be given to affected taxpayers regarding privacy policies and practices, such as the safeguards that will be in place for disclosed information, including security, retention, and disposal? As noted earlier, the practices are not precise legal requirements; rather, they provide a framework of principles for balancing the need for privacy with other public-policy interests. Therefore the above questions—such as whether to provide taxpayers notice of, and opportunity to disallow, a disclosure—will not necessarily lead to such procedures in all cases. The privacy benefits of giving taxpayers prior notice or control over the disclosures would need to be justified in light of possible program risks or costs of such procedures. For example, it may not be appropriate to inform taxpayers that FTI about them is being used in a criminal investigation. It also may not be useful to report all disclosures to federal agencies for statistical or audit and evaluation purposes. 
Means already exist to implement Fair Information Practices for uses of FTI. For example, the Section 6103 legal framework itself—by which only Congress can authorize use of tax information for other purposes—is part of the controls to prevent indiscriminate use of FTI for other purposes. Moreover, openness about how FTI is used and protected is addressed through publicly available information about Section 6103 and FTI safeguards, including the content of the law itself. In addition, publications specifically about IRS privacy practices and the safeguards afforded FTI are available from IRS. IRS also provides general notices about how the information provided by taxpayers can be used. For example, in the instructions for the widely used Form 1040, IRS refers to its authority to disclose tax returns and return information to others and provides several examples of such uses. Policy-Factor Question 4: What risks of improper use or unauthorized disclosure does the proposal create and how well does the proposal address those risks? Does the proposal adequately take into account risks of unauthorized use or redisclosure associated with the disclosure? Does the proposal provide adequate safeguards to mitigate those risks? Fair Information Practices call for reasonable security safeguards against risks such as loss or unauthorized access, destruction, use, modification, or disclosure. According to the internal control standards for the federal government, no matter how well designed and operated, no system of controls can provide absolute assurance that an objective will be met—in this case, that information is perfectly safe from improper use or unauthorized disclosure—so assessment and mitigation of risk is critical to providing reasonable assurance that the disclosed FTI will be protected. 
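One safeguard of this kind, which Section 6103 requires of entities receiving FTI, is a permanent system of standardized records of uses and disclosures. The minimal sketch below illustrates the idea of such an append-only accounting record; the class, field names, and structure are assumptions for illustration, not IRS specifications.

```python
# Illustrative sketch of a standardized, append-only record of FTI uses and
# disclosures: each entry captures who requested the information, why, and
# when. Field names are assumptions, not drawn from IRS Publication 1075.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class DisclosureLog:
    entries: list = field(default_factory=list)

    def record(self, requester: str, reason: str, when: date) -> None:
        # Append-only: entries are added but never modified or deleted,
        # supporting later accounting and oversight.
        self.entries.append({"requester": requester, "reason": reason,
                             "date": when.isoformat()})

log = DisclosureLog()
log.record("State tax agency", "state tax administration", date(2011, 3, 1))
assert len(log.entries) == 1
assert log.entries[0]["reason"] == "state tax administration"
```

Such a log supports the accounting and annual reporting on disclosure volumes discussed in appendix I, though a real system would also need the secure storage, access restrictions, and disposal controls the statute requires.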
Another important risk consideration involves data mining or other techniques that may allow someone to merge different sets of seemingly anonymous, aggregated data to identify specific individuals. Policymakers should consider how well the proposed use of FTI addresses the risks of improper use or unauthorized disclosure that could be created if the proposal is adopted. Section 6103 and IRS Publication 1075 specify the safeguard requirements that entities receiving tax information must have in place to prevent the information from being misused, that is, used for an unauthorized purpose or disclosed without authorization. Agencies that receive FTI are required to maintain a permanent system of standardized records on the use and disclosure of the information, store the information in a secure area, restrict access to the tax information, and properly dispose of the information after use. To further reduce the risk of misuse or unauthorized disclosure, a proposal may require that the tax information be returned to IRS or destroyed after it is no longer useful or after a set period. Safeguards are subject to periodic IRS review. The expected benefits and costs associated with an FTI disclosure proposal were discussed previously under the policy-factor questions concerning expected benefits and costs. The potential effects on voluntary compliance and tax administration, discussed here, may represent either benefits or costs. Because the continued willingness of taxpayers to provide their personal information is critical to voluntary compliance and tax administration, any proposed disclosure's effect on such willingness warrants specific consideration. The safeguard requirements are set out in Section 6103(p)(4). Criminal and civil sanctions that apply to the unauthorized disclosure and inspection of tax information are under separate sections of the Internal Revenue Code (IRC §§ 7213 and 7431, respectively) and are generally described in Internal Revenue Service Publication 1075.
Additional requirements include those of the Federal Information Security Management Act of 2002. Policy-Factor Question 5: What is the potential effect on voluntary taxpayer compliance? What is the potential effect on voluntary compliance by taxpayers whose tax information will be disclosed? What is the potential effect on general voluntary compliance for other taxpayers? Tax information is collected and developed for the primary purpose of tax administration. Some in the tax community are concerned that granting exceptions to confidentiality could compromise voluntary taxpayer compliance. Voluntary compliance occurs when a taxpayer files tax forms on time that accurately report tax liability and, if applicable, pays the amount of tax due on time, as required by the Internal Revenue Code. Disclosures of tax information can have a range of voluntary compliance effects, either positive or negative (as a benefit or a cost, respectively), so a proposal should be subject to a systematic consideration of possible voluntary compliance effects. Some proposals may enhance voluntary compliance. For example, if participation in a federal program is contingent on past compliance with the tax laws, people may be encouraged to stay compliant. Negative compliance effects are also possible. For example, people may think twice about filing a tax return or properly reporting all of their income if they know that the information may be used by an agency besides IRS in a way that they believe will disadvantage them. The positive or negative effects on voluntary compliance may be limited to the individuals whose tax information is disclosed by IRS, or they may extend more broadly to taxpayers in general. One possible general effect of increased use of FTI by entities other than IRS could be a sense among taxpayers that the information reported to IRS is not kept private, making taxpayers more reluctant to provide information to IRS.
For example, according to IRS research, nonfiling of tax returns increased after Treasury began to offset tax refunds to collect nontax debts owed to the government. On the other hand, when IRS contacts third parties for information regarding suspected hidden income or assets, it necessarily discloses that some sort of tax law enforcement action is in process. When taxpayers are aware that such disclosures may occur, they may be more likely to comply with the tax laws because they feel they have a greater chance of being caught if they do not comply. Limited research is available on the relationship between tax-information confidentiality and voluntary tax compliance. Voluntary compliance is complex, and it may not be possible to pinpoint the effects of existing disclosures or the future effects of proposed new ones. Moreover, each proposal is likely to have unique potential effects on voluntary compliance. Therefore, a proposal should include a careful assessment of potential effects on voluntary compliance for the directly affected taxpayers and for taxpayers in general. Such information could include, for example, the number of taxpayers affected; the number and types of tax returns that might not be filed or might be filed inaccurately, and the resulting types and amounts of information that might not be reported or might be reported inaccurately; and the revenue effect of such noncompliance. Because of uncertainty about a given proposal's effect, such quantitative data could be estimated as a range of potential effects. Policy-Factor Question 6: What is the potential effect on tax administration? How much will implementing the proposal affect current IRS activities or performance? How much will any related safeguard responsibilities add to IRS's current responsibilities? As with effects on voluntary compliance, disclosures of tax information can negatively or positively affect tax administration.
In most cases, it is likely that a proposal to provide information will place more responsibilities on IRS or otherwise increase the agency's workload. Even if IRS has the data on hand and transmitting them would be simple, IRS will incur some cost to provide the information. Costs will be greater in the likely event that IRS has to take special steps to extract the information and establish detailed transmittal protocols. If IRS needs to verify the information or correct errors before transmitting it, then the burden on IRS will be even greater. In addition, IRS will likely be responsible for ensuring that the receiving entity has adequate safeguards in place (as discussed in policy-factor question 4), and this will also require IRS staff time and other resources. All of these potential uses of IRS resources will mean either additional funding needs for IRS or the diversion of resources from other functions, perhaps resulting in adverse effects in other areas. Effects on IRS's operations are not necessarily negative. The expected benefits of some proposals may include improved service or enforcement on the part of IRS. For example, one existing disclosure exception authorizes IRS to publish in newspapers the names of taxpayers who are owed refunds but whom IRS cannot locate. In this case, the disclosure may help IRS fulfill part of its mission and save money because the agency does not have to take more-costly steps to locate these taxpayers. Under Internal Revenue Code (IRC) § 6103, tax returns and tax-return information are confidential and may not be disclosed unless specifically authorized. However, Congress has enacted some exceptions to confidentiality. Table 1, below, provides a high-level description of current exceptions and where they are found in IRC § 6103.
Of the 7 billion disclosures reported in the annual public-disclosure report for calendar year 2010, 99 percent were made under three provisions—about 60 percent (over 4 billion disclosures) were made to state government officials for tax-administration purposes; about 21 percent (nearly 1.5 billion disclosures) were made to congressional committees and their agents, including disclosures to GAO; and about 18 percent (almost 1.3 billion disclosures) were made to the Bureau of the Census. According to the Internal Revenue Service (IRS), some disclosure provisions are rarely or never exercised. To safeguard taxpayer privacy and ensure confidentiality, Section 6103 imposes the following requirements for government entities receiving returns and return information from IRS: establish and maintain a permanent system of standardized records of requests, including the reason for such requests, the date of requests, and any disclosures; establish and maintain a secure area for storage; restrict access to the information to only those whose duties or responsibilities require access and those to whom disclosures are permitted under Section 6103; establish and maintain any other safeguards IRS deems necessary or provide IRS a report describing the safeguard procedures; and dispose of the information in an appropriate manner after use. Some disclosure provisions are exempted from the safeguard requirements, including disclosures to the taxpayer of his or her tax information, to persons with a material interest, and to third parties to whom the taxpayer has consented to disclosure. Also exempted are public disclosures (for example, agreements between IRS and persons settling a tax debt for less than the full amount) and disclosures limited to the taxpayer's mailing address. IRC § 6103 also requires record keeping for accountability and oversight.
IRS is required to keep records of disclosures made under certain provisions and account for their volumes in a required annual report to the Joint Committee on Taxation (JCT), which in turn issues an annual report for public inspection. Disclosures exempted from record-keeping requirements include certain public disclosures, disclosures to the Department of the Treasury and the Department of Justice for tax-administration or litigation purposes, disclosures to persons with a material interest (for example, partners and trustees), disclosures to third parties through taxpayer consent, and disclosures to determine the eligibility for and amount of benefits for certain government programs. In response to growing concern about the harmful consequences that computerized data systems could have on the privacy of personal information, the Secretary of Health, Education, and Welfare commissioned an advisory committee in 1972 to examine the extent to which limitations should be placed on the application of computer technology to record keeping about people. The committee's final report proposed a set of principles for protecting the privacy and security of personal information, known as the Fair Information Practices. These practices were intended to address what the committee termed a poor level of protection afforded to privacy under existing law, and they underlie the major provisions of the Privacy Act, which was enacted the following year. A revised version of the Fair Information Practices, developed by the Organisation for Economic Co-operation and Development (OECD) in 1980, has been widely adopted and was endorsed by the U.S. Department of Commerce in 1981. This version of the principles was reaffirmed by OECD ministers in a 1998 declaration and further endorsed in a 2006 OECD report. The OECD version of the Fair Information Practices is shown in table 2 below. The following threshold and policy-factor questions and subquestions appear in figure 2 and throughout the body of the guide.
THRESHOLD QUESTIONS FOR SCREENING ANY SECTION 6103 EXCEPTION PROPOSAL

1. Does the proposal have a clear purpose and description of how the tax information will be used?
- What specific information will be disclosed?
- To whom will the information be disclosed?
- What category or categories of taxpayers will be affected?
- How will the information be used?
- What purpose will be achieved?

2. Does the proposal consider reasonable alternatives?

3. Is the tax information accurate, complete, and current enough for the stated purpose?

4. Is the tax information to be disclosed relevant and the minimum needed to achieve the stated purpose?

5. Does the proposal address any other statutory, regulatory, or logistical issues necessary for its implementation?
- Will the proposal require additional legislation besides modifying Section 6103 to accomplish the desired result?
- Does the proposal conflict with existing regulations, rules, or statutes other than Section 6103?
- How will any logistical or practical barriers or hindrances to implementing the proposed disclosure and use of information for the stated purpose be resolved?

POLICY-FACTOR QUESTIONS FOR FURTHER CONSIDERATION OF SECTION 6103 PROPOSALS

Expected benefits and costs

1. What are the expected benefits of the proposal to disclose tax information?
- What are the estimated financial benefits, if any, to be achieved by using tax data?
- What are the nonfinancial benefits that are expected, if any?

2. What are the expected costs of obtaining and using the tax information to be disclosed?
- What are the estimated costs for IRS to provide the tax data?
- What are the estimated costs to the entity receiving the information?
- What are the expected costs for others affected by the tax-disclosure proposal?

Privacy effects and safeguards

3. What is the potential effect on privacy?
- To what extent will the proposal adversely affect taxpayer privacy?
- Is the use of the information transparent and limited?
- Will sufficient notice and control be provided to individuals?

4. What risks of improper use or unauthorized disclosure does the proposal create, and how well does the proposal address those risks?
- Does the proposal adequately take into account risks of unauthorized use or redisclosure associated with the disclosure?
- Does the proposal provide adequate safeguards to mitigate those risks?

Effects on the tax system

5. What is the potential effect on voluntary taxpayer compliance?
- What is the potential effect on voluntary compliance by taxpayers whose tax information will be disclosed?
- What is the potential effect on general voluntary compliance for other taxpayers?

6. What is the potential effect on tax administration?
- How much will implementing the proposal affect current IRS activities or performance?
- How much will any related safeguard responsibilities add to IRS's current responsibilities?

In addition to the contact named above, David Lewis, Assistant Director; MaryLynn Sergent, Assistant Director; John de Ferrari, Assistant Director; Marisol Cruz; Ronald Fecso, Chief Statistician; Bertha Dong; Ronald W. Jones; Shirley Jones, Assistant General Counsel; Veronica Mayhand; Donna Miller; Ellen Rominger; Cynthia Saunders; Sabrina Streagle; and Gregory Wilshusen, Director, made key contributions to this guide.

Confidentiality: Preserving authorized restrictions on information access and disclosure, including means for protecting personally identifiable information.

Disclosure: Making a return or return information known to any person in any manner.

Fair Information Practices: A set of internationally recognized practices for addressing the privacy of information about individuals, which are the underlying policy for many national laws on privacy and data protection, including the U.S. Privacy Act of 1974.

Improper payment: Any payment that should not have been made or was made in an incorrect amount under statutory, contractual, administrative, or other legally applicable requirements.

Privacy Act of 1974: The 1974 act that regulates the collection, use, dissemination, and maintenance of personal information by federal agencies.
The act applies only to records about individuals that are maintained in a “system of records.” Under the Privacy Act, any item or collection of information about an individual that an agency maintains and that contains that individual’s name, identifying number, symbol, or other identifying particular, such as a fingerprint, voice print, or photograph. A tax or information return, declaration of estimated tax, or claim for refunds under the Internal Revenue Code, which is filed with the IRS by or on behalf of a person. Returns also include any amendments or supplements to a filed return. Return information is broadly defined in the Internal Revenue Code to include a taxpayer’s identity, the nature, source, or amount of income, payments, receipts, deductions, exemptions, credits, assets, liabilities, net worth, lax liability, tax withheld, deficiencies, overassessments, or tax payments; whether the taxpayer’s return was, is being, or will be examined or subject to other investigation or processing; any other data, received by, recorded by, prepared by, furnished to, or collected by the IRS with respect to a return or with respect to the determination of the existence, or possible existence, of liability (or the amount thereof) of any person for any tax, penalty, interest, fine, forfeiture, or other imposition or offense; any part of any written determination or any background file document relating to such written determination … that is not open to inspection under section 6110 for public inspection of written determinations; any advance pricing agreement between the taxpayer and IRS on the taxpayer’s international transactions, and related background information, entered into by a taxpayer; and any agreement between the taxpayer and IRS related to the taxpayer’s tax liability that conclusively closes the case, and any similar agreement along with any related background information. 
Under the Privacy Act, a group of records under the control of an agency from which information is retrieved by an individual’s name or some other identifier assigned to that individual. The Privacy Act only applies to records about individuals maintained in a system of records. Protective procedures required, as a condition for receiving tax information, for keeping the information confidential, including establishing and maintaining a permanent standardized system of records; storing the information in a secure area; restricting access to the information; returning or disposing of the information after usage is completed; providing other procedures IRS determines necessary or appropriate; and providing a report describing the established safeguard procedures to IRS upon request. Under Section 6103, disclosure of returns and return information to any person or persons designated by the taxpayer, subject to requirements and conditions of Department of the Treasury regulations. A system that relies in part on taxpayers reporting and paying their taxes as required with no direct enforcement and minimal interaction with the government. This bibliography contains selected items on Internal Revenue Code Section 6103 and tax-information confidentiality. Department of the Treasury. National Taxpayer Advocate: 2003 Annual Report to Congress. Washington, D.C.: December 2003. Department of the Treasury. Report to The Congress on Scope and Use of Taxpayer Confidentiality and Disclosure Provisions, vol. I, “Study of General Provisions.” Washington, D.C.: October 2, 2000. Internal Revenue Service. Tax Information Security Guidelines For Federal, State and Local Agencies, Publication 1075. Washington, D.C.: August 2010. Internal Revenue Service. Disclosure and Privacy Law Reference Guide, Publication 4639. Washington, D.C.: September 2011. Joint Committee on Taxation. 
Study of Present-Law Taxpayer Confidentiality and Disclosure Provisions as Required by Section 3802 of the Internal Revenue Service Restructuring and Reform Act of 1998, vol. 1, “Study of General Disclosure Provisions,” JCS-1-00. Washington, D.C.: January 28, 2000.
The Internal Revenue Service (IRS) receives a great deal of personal information about individuals and businesses. While taxpayers are required to provide this information to IRS under penalty of fine or imprisonment, confidentiality of information reported to IRS is widely held to be a critical element of taxpayers’ willingness to provide information to IRS and comply with the tax laws. As a general rule, anything reported to IRS is held in strict confidence—Internal Revenue Code (IRC) Section 6103 provides that federal tax information is confidential and to be used to administer federal tax laws except as otherwise specifically authorized by law. Although tax information is confidential, nondisclosure of such information is not absolute. Section 6103 contains some statutory exceptions, including instances where Congress determined that the value of using tax information for nontax purposes outweighs the general policy of confidentiality. Since making amendments to Section 6103 in 1976, Congress has expanded the statutory exceptions under which specified taxpayer information can be disclosed to specific parties for specific purposes. Today, Section 6103 exceptions enable law enforcement agencies to use relevant tax information to investigate and prosecute tax and nontax crimes and allow federal and state agencies to use it to verify eligibility for need-based programs and collect child support, among other uses. Periodically, new exceptions to the general confidentiality rule are proposed, and some in the tax community have expressed concern that allowing more disclosures would significantly erode privacy and could compromise taxpayer compliance. In evaluating such proposals, it is important that Congress consider both the benefits expected from a disclosure of federal tax information and the expected costs, including reduced taxpayer privacy, risk of inappropriate disclosure, and negative effects on tax compliance and tax-system administration. 
This guide is intended to facilitate consistent assessment of proposals to grant or modify Section 6103 exceptions. This guide consists of key questions that can help in (1) screening a proposal for basic facts and (2) identifying policy factors to consider.
In 1941, Congress enacted the Berry Amendment, a domestic source restriction that required certain items procured for defense purposes to be grown or produced in the United States. Specialty metals, including titanium and titanium alloys, were added to the Berry Amendment in the early 1970s, generally requiring DOD and its contractors to procure specialty metals produced or melted in the United States unless an exception applied allowing specialty metals from foreign countries. In 1978, the "qualifying country exception" was added to the specialty metals domestic source restriction, waiving the requirement to procure specialty metals produced in the United States when the purchase relates to agreements the United States has with foreign governments, known as "qualifying countries." Under this exception, aircraft component manufacturers in 23 qualifying countries currently are exempt from the specialty metals domestic source restriction and are permitted to use non-domestically produced titanium to manufacture DOD aircraft components. Under the current version of the restriction, the specialty metals domestic source restriction does not apply to aircraft or aircraft components manufactured in a qualifying country or to aircraft or aircraft components containing specialty metals produced or melted in a qualifying country. Table 1 lists the 23 qualifying countries. Under the qualifying country exception, manufacturers in the listed countries have greater flexibility when procuring specialty metals for DOD procurements than U.S. manufacturers. Specifically, they can procure specialty metals from any source—including non-qualifying countries—while a component manufacturer in the United States must procure specialty metals from a source in the United States or a qualifying country, as shown in figure 1. 
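The sourcing rule just described can be sketched as a small decision function. This is an illustrative encoding only, not DFARS text: the country names are an assumed subset of the 23 qualifying countries, and the other exceptions (such as domestic non-availability) discussed below are ignored.

```python
# Illustrative subset; the actual DFARS qualifying-country list has 23 entries.
QUALIFYING_COUNTRIES = {"United Kingdom", "Canada", "Germany"}

def titanium_source_allowed(manufacturer_country, titanium_source_country):
    """Whether a component manufacturer in manufacturer_country may use
    titanium produced or melted in titanium_source_country for a DOD
    aircraft component, under the basic restriction plus the qualifying
    country exception (other exceptions are not modeled here)."""
    if manufacturer_country in QUALIFYING_COUNTRIES:
        # Qualifying-country manufacturers may source titanium from anywhere.
        return True
    if manufacturer_country == "United States":
        # U.S. manufacturers must use U.S. or qualifying-country titanium.
        return (titanium_source_country == "United States"
                or titanium_source_country in QUALIFYING_COUNTRIES)
    # Other cases would require a separate exception.
    return False

titanium_source_allowed("United States", "Russia")  # False
titanium_source_allowed("Canada", "Russia")         # True
```

The asymmetry shown in figure 1 falls out directly: the qualifying-country branch returns True unconditionally, while the U.S. branch is constrained to U.S. or qualifying-country sources.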
In addition, there are other exceptions to the specialty metals domestic source restriction that allow DOD to procure items containing specialty metals, including titanium in aircraft or aircraft components, from manufacturers in qualifying and non-qualifying countries. For example, one such exception, known as the domestic non-availability exception, waives the specialty metals domestic source restriction when DOD makes a determination that specialty metals, including titanium in aircraft or aircraft components, are not available in the United States in the required form and at a reasonable price. Other exceptions waive the specialty metals domestic source restriction for purchases outside the United States in support of combat operations or purchases in support of contingency operations. The commercial aerospace industry is the largest consumer of titanium metals in the world. DOD estimates that the aerospace industry accounts for 60 to 75 percent of the U.S. market, with military and DOD business accounting for up to 15 percent of the aerospace industry. Titanium metals are important metals in the aircraft industry, in part because they are lightweight, strong, and corrosion resistant, making them common for use in structural airframe and jet engine components. In an airframe, titanium may be used in bulkheads, tail sections, landing gears, wing supports, and fasteners. In engines, titanium may be used in blades, rotating discs, rings, and casings. To manufacture a titanium aircraft component, there are multiple steps in the supply chain, from titanium production through the manufacturing of the finished component. Figure 2 provides an overview of the DOD titanium production and aircraft component manufacturing processes. There are a limited number of titanium producers in the world, and market shares are concentrated in a small number of large producers. 
Currently, there are four major worldwide producers of high-quality titanium for aerospace: one in Russia (Verkhnaya Salda Metallurgical Production Association) and three in the United States. These three major U.S. titanium metal producers—Allegheny Technologies Incorporated (ATI); RTI International Metals, Inc. (RTI); and Titanium Metals Corporation (TIMET)—account for 94 percent of the U.S. production capacity. Due in part to the limits of worldwide production capacity, titanium products require a long lead time to produce, and manufacturers may order titanium metal years before it is expected to be delivered in a finished product to the customer. For tactical aircraft and engines, DOD generally contracts with six prime contractors—Boeing, Lockheed Martin, Northrop Grumman, General Electric, Pratt & Whitney, and Rolls-Royce—the latter three for engines. These prime contractors generally rely on aircraft component subcontractors to produce titanium aircraft components. Prime contractors or aircraft component manufacturers generally purchase titanium from one of the four major titanium producers. As described above, when selling components to DOD, the specialty metals domestic source restriction limits the U.S. prime contractors’ and aircraft component manufacturers’ purchase of titanium to one of the U.S. or other qualifying country sources. Qualifying country aircraft component manufacturers that sell to DOD have the flexibility to source titanium from any producer, including a non-qualifying country source. Census data from calendar years 2003 to 2012 show variations in U.S. and foreign produced titanium prices for ingot, bar, billet, and sheet. Data for ingot—the titanium form used to produce mill shapes—show that U.S. export and import prices have varied over the last 10 years. The import price was $3.93 less per kilogram than the export price in 2004, while the export price was $7.84 lower per kilogram than the import price in 2007. 
In 2011 and 2012, export and import prices converged, as seen in figure 3. Census data also show that import and export price differences for mill shapes—the titanium shapes made from ingot and used to manufacture aircraft components—have also varied over the past 8 years. Specifically, the import prices for billet—used to make rotating disk engine components—have, with the exception of 2011, remained less than export prices over the past 8 years; however, the price difference has been reduced from $17.82 per kilogram in 2005 to $2.77 per kilogram in 2012. Price differences for sheet—used to make wing components—have also varied over the last 8 years, with the import price exceeding the export price from 2009 to 2012. The import price of bar—used to make engine blade components—has consistently remained significantly lower than the export price over the last 8 years. Figure 4 shows the historical import and export prices of titanium billet, sheet, and bar. Relevant reports and government and titanium industry officials we interviewed attribute overall price variations to changes in global demand and the supply capacity of titanium producers to meet demand. Industry officials also told us that price differences between the U.S. and foreign produced titanium products can be driven in part by differences in operating costs and production capabilities between U.S. and foreign producers. In addition, officials told us that price differences between titanium products, such as bar and billet, can partly be due to the increased number of steps needed to produce one over the other. While price differences between U.S. and foreign titanium can be large— for example, in 2005, Census bar import price was $25.02 per kilogram and export price more than twice that at $50.37 per kilogram—officials from prime contractors and aircraft component manufacturers told us that price differences have not been large enough to have a significant impact on the cost of a DOD aircraft. 
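The per-kilogram figures above come from dividing trade value by trade volume in the Census data. A minimal sketch of that derivation, using hypothetical trade totals (only the $17.82/kg billet gap for 2005 is taken from the text; the dollar and kilogram totals are invented for illustration):

```python
def unit_price(total_value_usd, total_kg):
    """Unit price in dollars per kilogram from trade value and volume."""
    return total_value_usd / total_kg

def price_gap(export_price, import_price):
    """Positive when the export (U.S.-produced) price exceeds the import
    (foreign-produced) price."""
    return export_price - import_price

# Hypothetical 2005 billet trade totals, chosen so the resulting gap
# matches the $17.82/kg difference reported in the text.
export_2005 = unit_price(45_000_000, 1_000_000)   # $45.00/kg
import_2005 = unit_price(27_180_000, 1_000_000)   # $27.18/kg
gap = price_gap(export_2005, import_2005)          # about 17.82
```

Note that because the import values include duties paid, a positive gap understates any underlying production-cost difference between U.S. and foreign producers.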
For example, data in one DOD study show that a 50 percent increase in the price of titanium would result in about a 1 percent increase in the cost of a DOD aircraft, because titanium cost is generally a small percentage of the overall aircraft cost. Furthermore, prices are typically negotiated through private agreements and can depend on the specific terms of the agreement between the customer and producer. In addition, industry officials noted that U.S. produced titanium has been competitively priced relative to foreign produced titanium. DOD can either directly contract for aircraft components or contract with prime contractors that in turn buy them from component manufacturers. DOD awarded the majority of aircraft component contracts to U.S. manufacturers from fiscal years 2008 through 2012. Specifically, FPDS-NG data over the past 5 years show that DOD directly obligated a total of $209.6 billion for aircraft component contracts. This includes all contracts for aircraft components, some of which may not contain titanium. Of the $209.6 billion, DOD obligated $205.3 billion, or 98 percent, of these purchases to U.S. manufacturers. Additionally, DOD obligated $2.7 billion, or 1.3 percent of the total obligations, to manufacturers in qualifying countries. While obligations to manufacturers in qualifying countries increased from $335 million in 2008 to $661 million in 2012, their market share of DOD obligations remained between approximately 1 and 2 percent each year. Through other authorities available to DOD, the department obligated $1 billion to manufacturers in non-qualifying countries. U.S. manufacturers have consistently been awarded the majority of DOD aircraft component obligations each year from fiscal years 2008 through 2012. Industry officials told us that prime contractors' long term agreements, prime contractors' approval of titanium producers, and industry consolidation—rather than titanium price—are major factors affecting the ability of U.S. 
aircraft component manufacturers to compete for DOD contracts. Prime contractors generally manage titanium sourcing decisions for their DOD component manufacturers through long term agreements for titanium that include pre-negotiated prices. Additionally, DOD prime contractors can also require their component manufacturers to purchase titanium from producers that they have approved. Prime contractors’ use of these methods to manage titanium sourcing may reduce potential pricing advantages from the titanium sourcing flexibilities that are available to manufacturers in qualifying countries. In addition, many officials from aircraft component manufacturers identified industry consolidation as a factor that could affect their ability to compete. However, they did not identify competition for DOD contracts from manufacturers in qualifying countries with potential pricing advantages from titanium sourcing flexibilities as a major factor. According to industry officials, DOD aircraft and engine prime contractors leverage their buying power by arranging long term agreements with titanium producers to ensure titanium availability and pre-negotiated prices. These arrangements usually specify titanium product, price, quantity, and delivery schedule. In turn, prime contractors can then direct their titanium aircraft component manufacturers in the United States or in qualifying countries to purchase titanium under these agreements. For example, with rotating components which are strictly controlled, the prime contractors require component manufacturers to use the titanium from the agreement to ensure quality. As such, potential price differences between U.S. produced and foreign produced titanium would not impact the ability of U.S. component manufacturers to compete with manufacturers in qualifying countries if all manufacturers buy titanium from the same agreement. For example, the prime contractor for the F-35 Lightning II has negotiated a long term agreement with a U.S. 
titanium producer to supply titanium at a pre-negotiated price for the airframe of the F-35 Lightning II. Given this, industry and government officials told us that aircraft component manufacturers working for the prime contractor on the F-35 Lightning II airframe buy titanium from the U.S. producer at the prime contractor’s pre-negotiated price. Industry officials also told us that DOD aircraft and engine prime contractors often direct DOD aircraft component manufacturers to specific titanium producers that they have approved. These officials noted that as a part of the approval process prime contractors typically require titanium producers to undergo a certification process that can be costly and take over a year to ensure titanium quality. Prime contractors then direct their aircraft component manufacturers to use titanium only from an approved producer regardless of whether the aircraft component manufacturer is located in the United States or a qualifying country. For example, officials from one prime contractor told us that their company has only approved U.S. titanium producers for DOD aircraft components and therefore they are certain all titanium for their components are sourced from the United States, even if the component manufacturer is located in a qualifying country. Additionally, officials told us that in some cases prime contractors require titanium to be produced by specific processes for DOD products. For example, one prime contractor told us that it requires its titanium to be produced by cold hearth melting for any components that rotate on its DOD aircraft. According to this prime contractor, currently only two U.S. titanium producers can meet this requirement. Consequently, aircraft component manufacturers in the United States and qualifying countries producing DOD rotating components for this prime contractor must use titanium from one of these U.S. producers to meet the prime contractor’s requirement. 
Lastly, many of the officials from DOD titanium aircraft component manufacturers that we spoke with identified consolidation between the titanium production and aircraft component manufacturing industries as affecting their ability to compete more than competition from manufacturers in qualifying countries that may have titanium pricing advantages. According to these officials, before consolidation, companies generally performed one step in the processes of producing titanium, manufacturing titanium aircraft components, or assembling a final product for DOD. Thus, component manufacturers had relatively equal access to titanium producers. In recent years, two of the three major U.S. titanium producers have consolidated with aircraft component manufacturers. These consolidations enable one company to perform multiple steps such as producing titanium and manufacturing aircraft components. Officials from manufacturers that have not consolidated told us that they are concerned about their access to titanium from producers that have consolidated with competing aircraft component manufacturers. However, they have not yet seen the impact of industry consolidation on their companies. Moreover, one official from a non-consolidated titanium aircraft component company told us that prime contractors’ sourcing decisions would most likely continue to guarantee access to titanium for his company. We provided a draft of this report to DOD, Commerce, Interior, and Labor for their review and comment. Interior provided technical comments that we incorporated, as appropriate. DOD, Commerce, and Labor did not provide comments. To help ensure accuracy, we also provided pertinent sections of the draft to companies with which we spoke and received clarifying comments which we incorporated as appropriate. We are sending copies of this report to interested congressional committees, as well as the Secretaries of Defense, Commerce, and the Interior, and Acting Secretary of Labor. 
In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or martinb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. To evaluate available data on U.S. and foreign produced titanium prices, we reviewed multiple sources of titanium price data including American Metal Market and Global Insight. These sources do not distinguish titanium prices by the titanium producer or the place of production; therefore these data do not allow for a comparison of U.S. and foreign produced titanium prices. We determined that U.S. Census Foreign Trade Statistics titanium export and import values were the best available proxy for U.S. and foreign produced titanium prices. We identified four harmonized system commodity codes in the Census data that identify titanium products that can be used to produce titanium aircraft components: (1) ingots; (2) billets; (3) bars, rods, profiles, and wire; and (4) blooms, sheet bars, and slabs. For the purposes of this report, we refer to “bars, rods, profiles, and wire” as bar and “blooms, sheet bars, and slabs” as sheet. We used the Census values from calendar years 2003 to 2012 for titanium ingot and from 2005 to 2012 for titanium billet, sheet, and bar. The harmonized system codes that identified products as bar, billet, and sheet in the Census data changed in 2004. To maintain a consistent data set, we limited our analyses to calendar years 2005 through 2012 for these products. The export values represent the selling price of U.S. produced titanium metal. The import values represent the price a U.S. manufacturer would pay for titanium from a foreign titanium producer, and therefore includes duties paid in addition to the value of the titanium product. 
To verify the appropriateness of these data as proxies, we compared the Census values to other available industry price information, obtained concurrence from knowledgeable government officials that these data were the best available proxy, and determined that these data were sufficiently reliable for our purposes. To identify qualifying country manufacturers’ market share of Department of Defense (DOD) aircraft component contracts, we analyzed available data from the Federal Procurement Data System-Next Generation (FPDS-NG) from fiscal years 2008, the year DOD started collecting data on the use of the qualifying country exception, through 2012. For the purposes of this report, we identified aircraft related contracts as those designated by DOD claimant codes, A1A Airframe and Spares, A1B Aircraft Engines and Spares, and A1C Other Aircraft Equipment, some of which may not include titanium, because FPDS-NG does not specifically identify components by their titanium content. These claimant codes include components such as complete aircraft, airframe assemblies, wing assemblies, landing gears, aircraft engine and parts, aircraft instruments and parts, electrical equipment, and other accessories and parts readily identifiable for aircraft use. We excluded contracts for services and items that were not identified as manufactured end products. We also excluded indefinite delivery contracts, because they do not specify the place of origin in the contract. However, we included orders issued off of those contracts, because they do specify the place of origin in the orders. Countries listed in the Defense Federal Acquisition Regulation Supplement (DFARS) § 225.003(10) were considered qualifying countries for this analysis. Overall market shares are based on the place of origin and place of manufacture fields in FPDS-NG. 
We compared the FPDS-NG data to DOD reports on supplies manufactured outside the United States and determined the data were sufficiently reliable for our purposes. To determine the market share for component subcontracts awarded by DOD prime contractors, we collected information from selected DOD program offices, aircraft and engine manufacturers, and manufacturers of titanium aircraft components for DOD products, such as engine blades and rotating discs. We also reviewed relevant studies on the titanium and aircraft component industries. To identify the factors that affect the ability of U.S. aircraft component manufacturers to compete for DOD contracts, we reviewed relevant industry studies and interviewed government and industry officials. We interviewed officials from DOD offices, including Acquisition, Technology and Logistics; Manufacturing and Industrial Base Policy; the Defense Logistics Agency; and the Defense Contract Management Agency Industrial Analysis Center, as well as officials from the Department of the Interior's U.S. Geological Survey and the Department of Labor's Bureau of Labor Statistics. We also interviewed a broad range of relevant industry officials from the four major U.S. and foreign titanium producers, five of the six major DOD aircraft and engine prime contractors, relevant industry associations, and nine titanium aircraft component manufacturers. These aircraft component manufacturers were identified by prime contractors. We conducted this performance audit from September 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, John Neumann, Acting Director; James Kim; Beth Reed Fritts; Tana Davis; Keo Vongvanith; Julia Kennon; Marie Ahearn; Roxanna Sun; Danielle Greene; Namita Bhatia Sabharwal; and Amy Abramowitz made key contributions to the report.
Titanium is used in airframe components and jet engines, in part because it provides greater strength at lower weight than other metals. It is produced in a number of shapes, including bars, billets, and sheets. By law, U.S. manufacturers are generally required to use U.S. produced titanium for DOD aircraft components, unless an exception applies. One exception allows companies in 23 "qualifying countries" to use foreign produced titanium when manufacturing aircraft components for DOD. There is concern that U.S. manufacturers are losing market share to qualifying country manufacturers that are able to use foreign produced titanium. The House Armed Services Committee report accompanying the National Defense Authorization Act for Fiscal Year 2013 mandated that GAO assess the ability of U.S. aircraft component manufacturers to compete for DOD contracts. In this report, GAO assessed (1) available data on titanium prices, (2) available data on U.S. and foreign manufacturers' market share of DOD aircraft component contracts, and (3) the factors that affect the ability of U.S. aircraft component manufacturers to compete for DOD contracts. GAO reviewed Census foreign trade data, the best proxy for titanium prices; federal procurement data; and relevant industry studies; and interviewed a broad range of government and industry officials. Census data show that U.S. and foreign produced titanium prices varied from 2003 through 2012 depending on the product. For example, in 2012, the export price (the proxy for the U.S. price) for titanium bar--used to make engine blades--was higher than the import price (the proxy for the foreign price), while the export price for titanium sheet--used to make wing components--was less than the import price. 
Industry officials noted that these differences may be due to varying operating costs and titanium production capabilities in different countries and to titanium producers' negotiated agreements with prime contractors or aircraft component manufacturers. U.S. aircraft component manufacturers receive the majority of Department of Defense (DOD) business, whether through direct purchases by the department or through purchases made by its prime contractors. Based on obligated procurement dollars, 98 percent of DOD's purchases of aircraft components went to U.S. manufacturers from fiscal years 2008 to 2012. The remainder went to foreign manufacturers, primarily from qualifying countries. DOD prime contractors reported that over the past 10 years they have bought 70 to 100 percent of DOD titanium aircraft components from U.S. manufacturers. Industry officials identified management of titanium sourcing and industry consolidation, rather than titanium price, as factors affecting competition between aircraft component manufacturers for DOD business. Prime contractors generally manage titanium sourcing decisions for their DOD component manufacturers through long term agreements and an approval process that often directs competing manufacturers to the same titanium source, thereby potentially reducing pricing advantages available to aircraft component manufacturers in qualifying countries. Many officials from aircraft component manufacturers also identified industry consolidation of the titanium producers and component manufacturers as a factor that could affect their access to titanium for DOD contracts, although they have not yet seen any adverse impacts. GAO is not making recommendations in this report. Agencies and third parties reviewed GAO's draft report and technical comments received were incorporated as appropriate.
As the primary federal agency that is responsible for protecting and securing GSA facilities and federal employees across the country, FPS has the authority to enforce federal laws and regulations aimed at protecting federally owned and leased properties and the persons on such property, and, among other things, to conduct investigations related to offenses against the property and persons on the property. To protect the over one million federal employees and about 9,000 GSA facilities from the risk of terrorist and criminal attacks, in fiscal year 2007, FPS had about 1,100 employees, of which 541, or almost 50 percent, were inspectors. FPS inspectors are primarily responsible for responding to incidents and demonstrations, overseeing contract guards, completing BSAs for numerous buildings, and participating in tenant agencies’ BSC meetings. About 215, or 19 percent, of FPS’s employees are police officers who are primarily responsible for patrolling GSA facilities, responding to criminal incidents, assisting in the monitoring of contract guards, responding to demonstrations at GSA facilities, and conducting basic criminal investigations. About 104, or 9 percent, of FPS’s 1,100 employees are special agents who are the lead entity within FPS for gathering intelligence for criminal and anti-terrorist activities, and planning and conducting investigations relating to alleged or suspected violations of criminal laws against GSA facilities and their occupants. FPS also has about 15,000 contract guards that are used primarily to monitor facilities through fixed post assignments and access control. According to FPS policy documents, contract guards may detain individuals who are being seriously disruptive, violent, or suspected of committing a crime at a GSA facility, but do not have arrest authority. 
The level of law enforcement and physical protection services FPS provides at each of the approximately 9,000 GSA facilities varies depending on the facility's security level. To determine a facility's security level, FPS uses the Department of Justice's (DOJ) Vulnerability Assessment Guidelines, which are summarized below. A level I facility has 10 or fewer federal employees; 2,500 or fewer square feet of office space; and a low volume of public contact or contact with only a small segment of the population. A typical level I facility is a small storefront-type operation, such as a military recruiting office. A level II facility has between 11 and 150 federal employees; more than 2,500 to 80,000 square feet; a moderate volume of public contact; and federal activities that are routine in nature, similar to commercial activities. A level III facility has between 151 and 450 federal employees; more than 80,000 to 150,000 square feet; and a moderate to high volume of public contact. A level IV facility has over 450 federal employees; more than 150,000 square feet; a high volume of public contact; and tenant agencies that may include high-risk law enforcement and intelligence agencies, courts, judicial offices, and highly sensitive government records. A level V facility is similar to a level IV facility in terms of the number of employees and square footage, but contains mission functions critical to national security. FPS does not have responsibility for protecting any level V buildings. FPS is a reimbursable organization and is funded by collecting security fees from tenant agencies, referred to as a fee-based system. To fund its operations, FPS charges each tenant agency a basic security fee per square foot of space occupied in a GSA facility.
For fiscal year 2008, the basic security fee is 62 cents per square foot and covers services such as patrol; monitoring of building perimeter alarms and dispatching of law enforcement response through FPS's control centers; criminal investigations; and BSAs. FPS also collects an administrative fee it charges tenant agencies for building-specific security services, such as access control at facilities' entrances and exits; employee and visitor checks; and the purchase, installation, and maintenance of security equipment, including cameras, alarms, magnetometers, and x-ray machines. In addition to these security services, FPS provides agencies with additional services upon request, which are funded through reimbursable Security Work Authorizations (SWA), for which FPS charges an administrative fee. For example, agencies may request additional magnetometers or more advanced perimeter surveillance capabilities. FPS faces several operational challenges, including decreasing staff levels, which have led to reductions in the law enforcement services that FPS provides. FPS also faces challenges in overseeing its contract guards, completing its BSAs in a timely manner, and maintaining security countermeasures. While FPS has taken steps to address these challenges, it has not fully resolved them. Providing law enforcement and physical security services to GSA facilities is inherently labor intensive and requires effective management of available staffing resources. However, since transferring from GSA to DHS, FPS's staff has declined, and the agency has managed its staffing resources in a manner that has reduced security at GSA facilities and may increase the risk of crime or terrorist attacks at many GSA facilities. Specifically, FPS's staff has decreased by about 20 percent, from almost 1,400 employees at the end of fiscal year 2004 to about 1,100 employees at the end of fiscal year 2007, as shown in figure 1. In fiscal year 2008, FPS initially planned to reduce its staff further.
However, a provision in the 2008 Consolidated Appropriations Act requires FPS to increase its staff to 1,200 by July 31, 2008. In fiscal year 2010, FPS plans to increase its staff to 1,450, according to its Director. From fiscal year 2004 to 2007, the number of employees in each position also decreased, with the largest decrease occurring in the police officer position. For example, the number of police officers decreased from 359 in fiscal year 2004 to 215 in fiscal year 2007, and the number of inspectors decreased from 600 in fiscal year 2004 to 541 at the end of fiscal year 2007, as shown in figure 2. At many facilities, FPS has eliminated the proactive patrols of GSA facilities that are intended to prevent or detect criminal violations. The FPS Policy Handbook states that patrol should be used to prevent crime and terrorist attacks. The elimination of proactive patrol has a negative effect on security at GSA facilities because law enforcement personnel cannot effectively monitor individuals who might be surveilling federal buildings, inspect suspicious vehicles (including potential vehicles for bombing federal buildings), and detect and deter criminal activity in and around federal buildings. The number of contract guards employed at GSA facilities will not decrease, and according to an FPS policy document the guards are authorized to detain individuals; however, most guards are stationed at fixed posts that they are not permitted to leave, and they do not have arrest authority. According to some regional officials, some contract guards do not exercise their detention authority because of liability concerns. According to several inspectors and police officers in one FPS region, proactive patrol is important in their region because, in the span of one year, there were 72 homicides within 3 blocks of a major federal office building and because most of the crime in their area takes place after hours, when there are no FPS personnel on duty.
In addition, FPS officials at several regions we visited said that proactive patrol has, in the past, allowed its police officers and inspectors to identify and apprehend individuals who were surveilling GSA facilities. In contrast, when FPS is not able to patrol federal buildings, there is increased potential for illegal entry and other criminal activity. For example, in one city we visited, a deceased individual had been found in a vacant GSA facility that was not regularly patrolled by FPS. FPS officials stated that the deceased individual had been inside the building for approximately three months. More recently, at this same facility, two individuals who fled into the facility after being pursued by the local police department for an armed robbery were subsequently apprehended and arrested by that department. While the local police department contacted FPS for assistance with responding to the incident at the federal facility, FPS inspectors were advised by senior FPS supervisors not to assist the local police in their search for the suspects because GSA had not paid the security fee for the facility. In addition to eliminating proactive patrol, many FPS regions have reduced their hours of operation for providing law enforcement services in multiple locations, which has resulted in a lack of coverage when most federal employees are either entering or leaving federal buildings or on weekends, when some facilities remain open to the public. Moreover, FPS police officers and inspectors in two cities explained that this lack of coverage has left some federal day care facilities vulnerable to loitering by homeless individuals and drug users. The decrease in FPS's duty hours has also jeopardized police officer and inspector safety, as well as building security.
Some FPS police officers and inspectors said that they are frequently in dangerous situations without any FPS backup because many FPS regions have reduced their hours of operation and overtime. Contract guard inspections are important for several reasons, including ensuring that guards comply with contract requirements, have up-to-date certifications for required training (such as firearms or cardiopulmonary resuscitation), and perform their assigned duties. While FPS policy does not specify how frequently guard posts should be inspected, we found that some posts are inspected less than once per year, in part because contract guards are often posted in buildings hours or days away from the nearest FPS inspector. For example, one area supervisor reported guard posts that had not been inspected in 18 months, while another reported posts that had not been inspected in over one year. In another region, FPS inspectors and police officers reported that managers told them to complete guard inspections over the telephone instead of in person. In addition, when inspectors do perform guard inspections, they do not visit the post during each shift; consequently, some guard shifts may never be inspected by an FPS official. As a result, some guards may be supervised exclusively by a representative of the contract guard company. Moreover, in one area we visited with a large FPS presence, officials reported difficulty in getting to every post within that region's required one-month period. We obtained a copy of a contract guard inspection schedule in one metropolitan city that showed 20 of 68 post inspections were completed for the month. Some tenant agencies have also noticed a decline in the level of guard oversight in recent years and believe this has led to poor performance on the part of some contract guards.
For example, according to Federal Bureau of Investigation (FBI) and GSA officials in one of the regions we visited, contract guards failed to report the theft of an FBI surveillance trailer worth over $500,000, even though security cameras captured the trailer being stolen while guards were on duty. The FBI did not realize the trailer was missing until three days later, and only after the FBI started making inquiries did the guards report the theft to FPS and the FBI. During another incident, FPS officials reported that armed contract guards took no action as a shirtless suspect wearing handcuffs on one arm ran through the lobby of a major federal building while being chased by an FPS inspector. In addition, one official reported that during an off-hours alarm call to a federal building, the official arrived to find the front guard post empty while the guard's loaded firearm was left unattended in the unlocked post. We also personally witnessed an incident in which an individual attempted to enter a level IV facility with illegal weapons. According to FPS policies, contract guards are required to confiscate illegal weapons, detain and question the individual, and notify FPS. In this instance, the weapons were not confiscated, the individual was not detained or questioned, FPS was not notified, and the individual was allowed to leave with the weapons. We will shortly begin a comprehensive review of FPS's contract guard program for this Subcommittee and other congressional committees. Building security assessments, which are completed by both inspectors and physical security specialists, are the core component of FPS's physical security mission. However, ensuring their quality and timeliness is an area in which FPS continues to face challenges. The majority of inspectors in the seven regions we visited stated that they are not provided sufficient time to complete BSAs.
For example, while FPS officials have stated that BSAs for level IV facilities should take two to four weeks to complete, several inspectors reported having only one or two days to complete assessments for their buildings. They reported that this was due to pressure from supervisors to complete BSAs as quickly as possible. For example, one region is attempting to complete more than 100 BSAs by June 30, 2008, three months earlier than required, because staff will be needed to assist with a large political event in the region. In addition, one inspector in this region reported having one day to complete site work for six BSAs in a rural state in the region. Some regional supervisors have also found problems with the accuracy of BSAs. One regional supervisor reported that an inspector was repeatedly counseled and required to redo BSAs when supervisors found he was copying and pasting from previous BSAs. Similarly, one regional supervisor stated that, in the course of reviewing a BSA for an address he had personally visited, he realized that the inspector completing the BSA had falsified information and had not actually visited the site, because the inspector referred to a large building when the actual site was a vacant plot of land owned by GSA. In December 2007, the Director of FPS issued a memorandum emphasizing the importance of conducting BSAs in an ethical manner. FPS's ability to ensure the quality and timeliness of BSAs is also complicated by challenges with the current risk assessment tool it uses to conduct BSAs, the Federal Security Risk Manager (FSRM) system. We have previously reported that there are three primary concerns with this system. First, it does not allow FPS to compare risks from building to building so that security improvements to buildings can be prioritized. Second, current risk assessments need to be categorized more precisely.
According to FPS, too many BSAs are categorized as high or low, which does not allow for a refined prioritization of security improvements. Third, the system does not allow for tracking the implementation status of security recommendations based on assessments. According to FPS, GSA, and tenant agency officials in the regions we visited, some of the security countermeasures, such as security cameras, magnetometers, and X-ray machines at some facilities, as well as some FPS radios and BSA equipment, have been broken for months or years and are poorly maintained. At one level IV facility, FPS and GSA officials stated that 11 of 150 security cameras were fully functional and able to record images. Similarly, at another level IV facility, a large camera project designed to expand and enhance an existing camera system was put on hold because FPS did not have the funds to complete the project. FPS officials stated that broken cameras and other security equipment can negate the deterrent effect of these countermeasures as well as eliminate their usefulness as an investigative tool. For example, according to FPS, it has investigated significant crimes at multiple level IV facilities, but some of the security cameras installed in those buildings were not working properly, preventing FPS investigators from identifying the suspects. Complicating this issue, FPS officials, GSA officials, and tenant representatives stated that additional countermeasures are difficult to implement because they require approval from BSCs, which are composed of representatives from each tenant agency who generally are not security professionals. In some of the buildings that we visited, security countermeasures were not implemented because BSC members cannot agree on what countermeasures to implement or are unable to obtain funding from their agencies. 
For example, an FPS official in a major metropolitan city stated that over the last 4 years inspectors have recommended 24-hour contract guard coverage multiple times at one high-risk building located in a high-crime area; however, the BSC has not been able to obtain approval from all its members. In addition, several FPS inspectors stated that their regional managers have instructed them not to recommend security countermeasures in BSAs if FPS would be responsible for funding the measures, because there is not sufficient money in regional budgets to purchase and maintain the security equipment. According to FPS, it has a number of ongoing efforts that are designed to address some of its longstanding challenges. For example, in 2007, FPS decided to adopt an inspector-based workforce approach to protect GSA facilities. Under this approach, the composition of FPS's workforce will change from a combination of inspectors and police officers to mainly inspectors. The inspectors will be required to complete law enforcement activities, such as patrolling and responding to incidents at GSA facilities, concurrently with their physical security activities. FPS will also place more emphasis on physical security, such as BSAs, and less emphasis on the law enforcement part of its mission; contract guards will continue to be the front-line defense for protection at GSA facilities; and there will be a continued reliance on local law enforcement. According to FPS, an inspector-based workforce will help it to achieve its strategic goals, such as ensuring that its staff has the right mix of technical skills and training needed to accomplish its mission and building effective relationships with its stakeholders. However, the inspector-based workforce approach presents some additional challenges for FPS. For example, the approach does not emphasize law enforcement responsibilities, such as proactive patrol.
Reports issued by multiple government entities acknowledge the importance of proactive patrol in detecting and deterring terrorist surveillance teams, which use information such as the placement of armed guards and proximity to law enforcement agency stations when choosing targets and planning attacks. Active law enforcement patrols in and around federal facilities can potentially disrupt these sophisticated surveillance and research techniques. In addition, having inspectors perform both law enforcement and physical security duties simultaneously may prevent some inspectors from responding to criminal incidents in a timely manner and patrolling federal buildings. FPS stated that entering into memorandums of agreement with local law enforcement agencies was an integral part of the inspector-based workforce approach because it would ensure law enforcement response capabilities at facilities when needed. According to FPS’s Director, the agency recently decided not to pursue memorandums of agreement with local law enforcement agencies, in part, because of reluctance on the part of local law enforcement officials to sign such memorandums. In addition, FPS believes that the agreements are not necessary because 96 percent of the properties in its inventory are listed as concurrent jurisdiction facilities where both federal and state governments have jurisdiction over the property. Nevertheless, the agreements would clarify roles and responsibilities of local law enforcement agencies when responding to crime or other incidents. However, FPS also provides facility protection to approximately 400 properties where the federal government maintains exclusive federal jurisdiction. Under exclusive federal jurisdiction, the federal government has all of the legislative authority within the land area in question and the state has no residual police powers. 
Furthermore, state and local law enforcement officials are not authorized to enforce state and local laws or federal laws and regulations at exclusive federal jurisdiction facilities. According to ICE's legal counsel, if the Secretary of Homeland Security utilized the facilities and services of state and local law enforcement agencies, state and local law enforcement officials would only be able to assist FPS in functions such as crowd and traffic control, monitoring law enforcement communications and dispatch, and training. Memorandums of agreement between FPS and local law enforcement agencies would help address the jurisdictional issues that prevent local law enforcement agencies from providing assistance at facilities with exclusive federal jurisdiction. As an alternative to memorandums of agreement, according to FPS's Director, the agency will rely on the informal relationships that exist between local law enforcement agencies and FPS. However, whether this type of relationship will provide FPS with the type of assistance it will need under the inspector-based workforce is unknown. Officials from five of the eight local law enforcement agencies we interviewed stated that their agencies did not have the capacity to take on the additional job of responding to incidents at federal buildings and that their departments were already strained for resources. FPS and local law enforcement officials in the regions we visited also stated that jurisdictional authority would pose a significant barrier to gaining the assistance of local law enforcement agencies. Representatives of local law enforcement agencies also expressed concerns about being prohibited from entering GSA facilities with service weapons, especially courthouses. Similarly, local law enforcement officials in a major city stated that they cannot make an arrest or initiate a complaint on federal property, so they have to wait until an FPS officer or inspector arrives.
FPS has also begun recruiting an additional 150 inspectors to address its operational challenges and reach the staffing level mandated in the fiscal year 2008 Consolidated Appropriations Act. According to the Director of FPS, the addition of 150 inspectors to its current workforce will allow FPS to resume providing proactive patrol and 24-hour presence based on risk and threat levels at some facilities. However, these additional 150 inspectors will be assigned to eight of FPS's 11 regions and thus will not have an impact on the three regions that will not receive them. In addition, while this increase will help FPS to achieve its mission, this staffing level is still below the 1,279 employees that FPS had at the end of fiscal year 2006, when, according to FPS officials, tenant agencies experienced a decrease in service. FPS's Risk Management Division is also in the process of developing a new tool, referred to as the Risk Assessment Management Program (RAMP), to replace its current system (FSRM) for completing BSAs. According to FPS, a pilot version of RAMP is expected to be rolled out in fiscal year 2009. RAMP will be accessible to inspectors via a secure wireless connection anywhere in the United States and will guide them through the process of completing a BSA to ensure that standardized information is collected on all GSA facilities. According to FPS, once implemented, RAMP will allow inspectors to obtain information from one source, generate reports automatically, enable the agency to track selected countermeasures throughout their lifecycle, address some issues with the subjectivity of BSAs, and reduce the amount of time spent on administrative work by inspectors and managers. FPS funds its operations through the collection of security fees charged to tenant agencies for security services. However, until recently these fees have not been sufficient to cover its projected operational costs. FPS has addressed this gap in a variety of ways.
When FPS was located in GSA, it received additional funding from the Federal Buildings Fund to cover the gap between collections and costs. Since transferring to DHS, FPS has instituted a number of cost-saving measures to make up for the projected shortfalls, to ensure that security at GSA facilities would not be jeopardized, and to avoid a potential Antideficiency Act violation in fiscal year 2005; these measures included restricted hiring and travel, limited training and overtime, and no employee performance awards. In addition, in fiscal year 2006, DHS had to transfer $29 million in emergency supplemental funding to FPS. FPS also increased the basic security fee charged to tenant agencies from 35 cents per square foot in fiscal year 2005 to 62 cents per square foot in fiscal year 2008. Because of these actions, fiscal year 2007 was the first year in which FPS's collections were sufficient to cover its costs. FPS also projects that collections will cover its costs in fiscal year 2008. In fiscal year 2009, FPS's basic security fee will increase to 66 cents per square foot, the fourth time FPS has increased the basic security fee since transferring to DHS. However, according to FPS, its cost-saving measures have had adverse implications, including low morale among staff, increased attrition and the loss of institutional knowledge, as well as difficulties in recruiting new staff. In addition, several FPS police officers and inspectors said that overwhelming workloads, uncertainty surrounding their job security, and a lack of equipment have diminished morale within the agency. These working conditions could potentially affect the performance and safety of FPS personnel. FPS officials said the agency has lost many of its most experienced law enforcement staff in recent years, and several police officers and inspectors said they were actively looking for new jobs outside FPS.
For example, FPS reports that 73 inspectors, police officers, and physical security specialists left the agency in fiscal year 2006, representing about 65 percent of the total attrition in the agency for that year. Attrition rates steadily increased from fiscal years 2004 through 2007, as shown in figure 3. For example, FPS's overall attrition rate increased from about 2 percent in fiscal year 2004 to about 14 percent in fiscal year 2007. The attrition rate for the inspector position has increased, despite FPS's plan to move to an inspector-based workforce. FPS officials said its cost-saving measures have helped the agency address projected revenue shortfalls, and the measures were eliminated in fiscal year 2008. In addition, according to FPS, these measures will not be necessary in fiscal year 2009 because the basic security fee was increased and staffing has decreased.

FPS's Basic Security Fee Does Not Account for Risk and Raises Questions about Equity

FPS's primary means of funding its operations is the fee it charges tenant agencies for basic security services, as shown in figure 4. Some of the basic security services covered by this fee include law enforcement activities at GSA facilities, preliminary investigations, the capture and detention of suspects, and BSAs, among other services. The basic security fee does not include contract guard services. However, this fee does not fully account for the risk faced by particular buildings or the varying levels of basic security services provided, and does not reflect the actual cost of providing services. In fiscal year 2008, FPS charged 62 cents per square foot for basic security and has been authorized to increase the rate to 66 cents per square foot in fiscal year 2009. FPS charges federal agencies the same basic security fee regardless of the perceived threat to that particular building or agency.
Although FPS categorizes buildings into security levels based on its assessment of the building’s risk and size, this categorization does not affect the security fee charged by FPS. For example, level I facilities typically face less risk because they are generally small storefront-type operations with a low level of public contact, such as a small post office or Social Security Administration office. However, these facilities are charged the same basic security fee of 62 cents per square foot as a level IV facility that has a high volume of public contact and may contain high-risk law enforcement and intelligence agencies and highly sensitive government records. In addition, FPS’s basic security rate has raised questions about equity because federal agencies are required to pay the fee regardless of the level of service FPS provides or the cost of providing the service. For instance, in some of the regions we visited, FPS officials described situations in which staff is stationed hundreds of miles from buildings under its responsibility. Many of these buildings rarely receive services from FPS staff and rely mostly on local police for law enforcement services. However, FPS charges these tenant agencies the same basic security fees as those buildings in major metropolitan areas in which numerous FPS police officers and inspectors are stationed and are available to provide security services. FPS’s cost of providing services is not reflected in its basic security charges. For instance, a June 2006 FPS workload study estimating the amount of time spent on various security services showed differences in the amount of resources dedicated to buildings at various security levels. The study said that FPS staff spend approximately six times more hours providing security services to higher-risk buildings (levels III and IV buildings) compared to lower-risk buildings (levels I and II buildings). 
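To make the equity point concrete, the flat fee can be contrasted with the service hours a facility actually consumes. The following is an illustrative sketch only, not FPS's actual billing system: the square footages and service-hour figures are hypothetical, and only the 62-cent rate and the roughly six-fold difference in service hours between higher- and lower-risk buildings come from the figures cited above.

```python
BASIC_FEE_PER_SQ_FT = 0.62  # fiscal year 2008 basic security rate cited above


def basic_security_fee(sq_ft: float) -> float:
    """Flat per-square-foot charge -- the building's risk level plays no role."""
    return sq_ft * BASIC_FEE_PER_SQ_FT


# Hypothetical tenants: the higher-risk building draws about six times
# the FPS service hours (per the June 2006 workload study cited above)
# but is billed on square footage alone.
tenants = [
    ("lower-risk (levels I-II)", {"sq_ft": 50_000, "service_hours": 100}),
    ("higher-risk (levels III-IV)", {"sq_ft": 100_000, "service_hours": 600}),
]

for label, t in tenants:
    fee = basic_security_fee(t["sq_ft"])
    per_hour = fee / t["service_hours"]
    print(f"{label}: fee ${fee:,.0f}, ${per_hour:,.0f} per service hour")
```

Under these assumed numbers, the higher-risk tenant pays twice the fee while drawing six times the service hours, so its effective charge per hour of service is about a third of the lower-risk tenant's, which illustrates the mismatch between the fee and the cost of the service provided.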
In addition, a 2007 Booz Allen Hamilton report on FPS's operational costs found that FPS does not link the actual cost of providing basic security services with the security fees it charges tenant agencies. The report recommends incorporating a security fee that takes into account the complexity or the level of effort of the service being performed for the higher-level security facilities. The report states that FPS's failure to consider the costs of protecting buildings at varying risk levels could result in some tenants being overcharged. We also have reported that basing government fees on the cost of providing a service promotes equity, especially when the cost of providing the service differs significantly among different users, as is the case with FPS. Several stakeholders have raised questions about whether FPS has an accurate understanding of the cost of providing security at GSA facilities. An ICE Chief Financial Office official said FPS has experienced difficulty in estimating its costs because of inaccurate cost data. In addition, OMB officials said they have asked FPS to develop a better cost accounting system in past years. The 2007 Booz Allen Hamilton report found that FPS does not have a methodology to assign costs to its different security activities and that it should begin capturing the cost of providing various security services to better plan, manage, and budget its resources. We have also previously cited problems with ICE's and FPS's financial systems, including problems associated with tracking expenditures. We have also previously reported on the importance of having accurate cost information for budgetary purposes and to set fees and prices for services. We have found that without accurate cost information, it is difficult for agencies to determine if fees need to be increased or decreased, accurately measure performance, and improve efficiency.
To determine how well it is accomplishing its mission to protect GSA facilities, FPS has identified some output measures, such as whether security countermeasures have been deployed and are fully operational, the amount of time it takes to respond to an incident, and the percentage of BSAs completed on time. Output measures assess activities, not the results of those activities. However, FPS has not developed outcome measures to evaluate the results and the net effect of its efforts to protect GSA facilities. While output measures are helpful, outcome measures are also important because they can provide FPS with broader information on program results, such as the extent to which its decision to move to an inspector-based workforce will enhance security at GSA facilities, or help identify the security gaps that remain at GSA facilities and determine what action may be needed to address them. The Government Performance and Results Act requires federal agencies to, among other things, measure agency performance in achieving outcome-oriented goals. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their performance. In addition, we and other federal agencies have maintained that adequate and reliable performance measures are a necessary component of effective management. We have also found that performance measures should provide agency managers with timely, action-oriented information in a format conducive to helping them make decisions that improve program performance, including decisions to adjust policies and priorities. FPS is also limited in its ability to assess the effectiveness of its efforts to protect GSA facilities, in part because it does not have a data management system that can provide complete and accurate information on its security program.
Without a reliable data management system, it is difficult for FPS and others to determine the effectiveness of its efforts to protect GSA facilities or for FPS to accurately track and monitor incident response times, the effectiveness of security countermeasures, and whether BSAs are completed on time. Currently, FPS primarily uses the Web Records Management System (WebRMS) and the Security Tracking System to track and monitor output measures. However, FPS acknowledged that there are weaknesses in these systems that make it difficult to accurately track and monitor its performance. In addition, according to many FPS officials at the seven regions we visited, the data maintained in WebRMS may not be a reliable and accurate indicator of crimes and other incidents because FPS does not write an incident report for every incident, not all incidents are entered into WebRMS, and the types and definitions of items prohibited in buildings vary not only region by region but also building by building. For example, a can of pepper spray may be prohibited in one building but allowed in another building in the same region. According to FPS, having fewer police officers has also decreased the total number of crime and incident reports entered in WebRMS because less time is spent on law enforcement activities. Officials in one FPS region we visited stated that two years ago 25,000 reports were filed through WebRMS; however, this year they are projecting about 10,000 reports because there are fewer FPS police officers to respond to an incident and write a report if necessary. In conclusion, Mr. Chairman, our work shows that FPS has faced and continues to face multiple challenges in ensuring that GSA facilities, their occupants, and visitors are protected from crime and the risk of terrorist attack.
In the report we issued last week, we recommended that the Secretary of Homeland Security direct the Director of FPS to develop and implement a strategic approach to manage its staffing resources; clarify the roles and responsibilities of local law enforcement agencies in responding to incidents at GSA facilities; improve FPS’s use of the fee-based system by developing a method to accurately account for the cost of providing security services to tenant agencies; assess whether FPS’s current use of a fee-based system or an alternative funding mechanism is the most appropriate manner to fund the agency; and develop and implement specific guidelines and standards for measuring its performance, including the collection and analysis of data. DHS concurred with these recommendations, and we are encouraged that FPS is in the process of addressing them. This concludes our testimony. We are pleased to answer any questions you might have. For further information on this testimony, please contact Mark Goldstein at 202-512-2834 or by email at goldsteinm@gao.gov. Individuals making key contributions to this testimony include Daniel Cain, Tammy Conquest, Colin Fallon, Katie Hamer, Daniel Hoy, and Susan Michal-Smith. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Federal Protective Service (FPS) is responsible for providing physical security and law enforcement services to about 9,000 General Services Administration (GSA) facilities. To accomplish its mission of protecting GSA facilities, FPS currently has an annual budget of about $1 billion, about 1,100 employees, and 15,000 contract guards located throughout the country. GAO was asked to provide information and analysis on challenges FPS faces, including ensuring that it has sufficient staffing and funding resources to protect GSA facilities and the more than one million federal employees and members of the public who work in and visit them each year. GAO discusses (1) FPS's operational challenges and actions it has taken to address them, (2) funding challenges, and (3) how FPS measures the effectiveness of its efforts to protect GSA facilities. This testimony is based on our recently issued report (GAO-08-683) to this Subcommittee. FPS faces several operational challenges that hamper its ability to accomplish its mission, and the actions it has taken may not fully resolve these challenges. FPS's staff decreased by about 20 percent from fiscal years 2004 through 2007. FPS has also decreased or eliminated law enforcement services, such as proactive patrol, in many FPS locations. Moreover, FPS has not resolved longstanding challenges, such as improving the oversight of its contract guard program, maintaining security countermeasures, and ensuring the quality and timeliness of building security assessments (BSA). For example, one regional supervisor stated that while reviewing a BSA for an address he had personally visited, he realized that the inspector completing the BSA had falsified the information because the inspector referred to a large building when the actual site was a vacant plot of land owned by GSA.
To address some of these operational challenges, FPS is currently changing to an inspector-based workforce, which seeks to eliminate the police officer position and rely primarily on FPS inspectors for both law enforcement and physical security activities. FPS is also hiring an additional 150 inspectors. However, these actions may not fully resolve the challenges FPS faces, in part because the approach does not emphasize law enforcement responsibilities. Until recently, the security fees FPS charged to agencies were not sufficient to cover its costs, and the actions it has taken to address the shortfalls have had adverse implications. For example, the Department of Homeland Security (DHS) transferred emergency supplemental funding to FPS, and FPS restricted hiring and limited training and overtime. According to FPS officials, these measures have had a negative effect on staff morale and are partially responsible for FPS's high attrition rates. FPS has been authorized to increase the basic security fee four times since it transferred to DHS in 2003 and currently charges tenant agencies 62 cents per square foot for basic security services. Because of these actions, FPS's collections in fiscal year 2007 were sufficient to cover costs, and FPS projects that collections will also cover costs in fiscal year 2008. However, FPS's primary means of funding its operations--the basic security fee--does not account for the risk faced by buildings, the level of service provided, or the cost of providing services, raising questions about equity. Stakeholders expressed concern about whether FPS has an accurate understanding of its security costs. FPS has developed output measures but lacks outcome measures to assess the effectiveness of its efforts to protect federal facilities. Its output measures include determining whether security countermeasures have been deployed and are fully operational.
However, FPS does not have measures to evaluate its efforts to protect federal facilities that could provide FPS with broader information on program outcomes and results. FPS also lacks a reliable data management system for accurately tracking performance measures. Without such a system, it is difficult for FPS to evaluate and improve the effectiveness of its efforts, allocate its limited resources, or make informed risk management decisions.
The just compensation clause of the Fifth Amendment provides that the government may not take private property for public use without just compensation. Initially, this clause applied to the government’s exercise of its power of eminent domain. In eminent domain cases, the government invokes its eminent domain power by filing a condemnation action in court against a property owner to establish that the taking is for a public use or purpose, such as the construction of a road or school, and to allow the court to determine the amount of just compensation due the property owner. In such cases, the government takes title to the property, providing the owner just compensation based on the fair market value of the property at the time of the taking. Supreme Court decisions later established that regulatory takings are also subject to the just compensation clause. In contrast to the direct taking associated with eminent domain, regulatory takings arise from the consequences of government regulatory actions that affect private property. In these cases, the government does not take action to condemn the property or offer compensation, but rather effectively takes the property by denying or limiting the owner’s planned use of the property, referred to as an inverse taking. An owner claiming that a government action has effected a taking and that compensation is owed must initiate suit against the government to obtain any compensation due. The court awards just compensation to the owner upon concluding that a taking has occurred. In 1987, concerned with the number of pending regulatory takings lawsuits and with court decisions seen as increasing the exposure of the federal government to liability for such takings, the President’s Task Force on Regulatory Relief began drafting an executive order to direct executive branch agencies to more carefully consider the takings implications of their proposed regulations or other actions. The President issued this EO on March 15, 1988. 
According to the EO, actions subject to its provisions include regulations, proposed regulations, proposed legislation, comments on proposed legislation, or other policy statements that, if implemented or enacted, could cause a taking of private property. Such actions may include rules and regulations that propose or implement licensing, permitting, or other conditions, requirements or limitations on private property use. The EO also enumerates agency actions that are not subject to the order, including the exercise of the power of eminent domain and law enforcement actions involving seizure, for violations of law, of property for forfeiture, or as evidence in criminal proceedings. The EO also requires the U.S. Attorney General to issue general guidelines to help agencies evaluate the takings implications of their proposed actions, and, as necessary, update these guidelines to reflect fundamental changes in takings case law resulting from Supreme Court decisions. The guidelines provide that agencies should assess takings implications of their proposed actions to determine their potential for a compensable taking and that decision makers should consider other viable alternatives, when available, to meet statutorily required objectives while minimizing the potential impact on the public treasury. In cases where alternatives are not available, the potential takings implications are to be noted, such as in a notice of proposed rulemaking. The guidelines also include an appendix that provides detailed information regarding some of the case law surrounding considerations of whether a taking has occurred and the extent of any potential just compensation claim. For example, the appendix discusses the Penn Central Transportation Co. v. 
City of New York case in which the Supreme Court set out a list of three “influential factors” for determining whether an alleged regulatory taking should be compensated: (1) the economic impact of the government action, (2) the extent to which the government action interfered with reasonable investment-backed expectations, and (3) the “character” of the government action. However, the appendix provides a caveat that it is not intended to be an exhaustive account of relevant case law, adding that the consideration of the potential takings of an action as well as the applicable case law will normally require close consultation between agency program personnel and agency counsel. Agency officials and other experts differ on the need to update the Attorney General’s guidelines to reflect changes in regulatory takings case law since 1988. Justice officials said that the guidelines had not been updated since 1988 because there had been no fundamental changes in regulatory takings case law, which is the EO’s criterion for an update. They said that the guidelines, as written, are still sufficient to determine the risk of a regulatory taking and that subsequent Supreme Court decisions have not substantially changed this analysis. For example, officials said the three-factor test outlined in the 1978 Penn Central case remains the most important guidance for analyzing the potential for a taking that is subject to just compensation. Justice officials also emphasized that the guidelines address only a general framework for agencies’ evaluations of the takings implications of their proposed actions and thus are not intended to be an up-to-date, comprehensive primer on all possible considerations. The guidelines state that the individual agencies must still conduct their own evaluations, including necessary legal research, when assessing the takings potential of a proposed regulation or action. The four agencies were divided on the need to update the guidelines. 
Corps and EPA officials supported Justice’s position that the guidelines do not need to be updated. Corps staff indicated that, based on their review of relevant Supreme Court decisions since 1988, no fundamental change in the criteria for assessing potential takings had occurred and thus no update to the Attorney General’s guidelines was necessary. Similarly, EPA staff said that some of the takings cases decided since 1988 gave the appearance that the Court was changing the three-pronged test set out in the Penn Central decision. However, these officials noted that more recent cases have returned to the Penn Central test, thereby removing the need for updating the Attorney General’s guidelines. In contrast, officials at Interior and Agriculture said that it would be helpful if Justice updated the summary of key takings cases contained in an appendix to the guidelines to reflect significant developments in this case law over the past 15 years. Other legal experts said that the Attorney General’s guidelines should be updated, noting that regulatory takings case law had not remained static over the past 15 years. For example, legal experts concerned with the protection of private property rights said that there had been significant developments in regulatory takings case law since 1988. These experts said that the mere passage of time and the sheer number of regulatory takings cases concluded since 1988 argued for updating the guidelines. In addition, a law professor who has written and lectured on the issue of regulatory takings said that the level of specificity with which Justice prepared the original guidelines sets a precedent that calls for updating these guidelines to reflect the many important changes in regulatory takings case law since 1988. The Attorney General has issued supplemental guidelines required by the EO for three of the four agencies—the Corps, EPA, and Interior.
The EO directed the Attorney General, in consultation with each executive branch agency, to issue supplemental guidelines for each agency as appropriate to the specific obligations of that agency. The Attorney General’s guidelines state that the supplement should prescribe implementing procedures that will aid the agency in administering its specific programs under the analytical and procedural framework presented in the EO and the Attorney General’s guidelines, including the preparation of takings implication assessments. In general, the three agencies’ supplemental guidelines include specific categorical exclusions from the EO’s provisions for certain agency actions. The Attorney General has not issued supplemental guidelines for Agriculture because Justice and Agriculture could not agree on how to assess the potential takings implications of the latter agency’s actions related to grazing and special use permits covering applicants’ use of public lands. Agriculture argued that such permit actions should be exempt from the EO’s requirements or, if not, that the agency should be allowed to do a generic takings implication assessment that would apply to multiple permits. Agriculture officials indicated that Justice officials did not agree with these suggestions, and the matter was never resolved. Despite the lack of supplemental guidelines, Agriculture officials said that their implementation of the EO and the Attorney General’s guidelines has not been encumbered. Justice officials agreed with this assessment. Although the EO’s requirements have not been amended or revoked since 1988, the four agencies’ implementation of some of its key provisions has changed over time in response to subsequent OMB guidance. For example, the agencies no longer prepare annual compilations of just compensation awards or account for these awards in their budget documents because OMB guidance issued in 1994 advised agencies that such information was no longer required.
According to OMB, this information is not needed because the number and amount of these awards are small and the awards are paid not from the agencies’ appropriations but from the Department of the Treasury’s Judgment Fund. In addition, because the number and dollar amounts of just compensation awards and settlements paid by the federal government annually are relatively small, OMB officials said the overall budget implications for the government are small. Hence, in their view, information on just compensation awards in agency annual budget submissions was also unnecessary. OMB and Justice officials said that the relative lack of regulatory takings cases and associated just compensation awards each year is an indication that the EO has succeeded in raising agencies’ awareness of the need to carefully consider the potential takings implications of their actions. Although OMB no longer requires agencies to comply with these EO provisions, the provisions remain in the EO. However, OMB and Justice officials noted that because executive orders are not the equivalent of statutory requirements, noncompliance with these provisions does not have the same implications. Instead, executive orders are policy tools for the executive branch and are subject to changing interpretation and emphasis with each new administration. Other provisions of the EO have been implemented. For example, each of the four agencies has designated an official to be responsible for ensuring that the agency’s actions comply with the EO’s requirements. In general, the responsible official at each agency is the agency’s senior legal official. EPA’s and Interior’s supplemental guidelines specifically identify the designated official by title. Agency officials could not provide us with any documentary evidence of this designation for Agriculture and the Corps, but agency officials assured us that their senior legal official fulfilled this role.
Officials at each of the four agencies said that they fully consider the potential takings implications of their planned regulatory actions, but again provided us with limited documentary evidence to support this claim. Agencies provided us a few written examples of takings implication assessments. Agency officials said that these assessments are not always documented in writing, and, with the passage of time, any assessments that were put in writing may no longer be on file. They also noted that these assessments are internal, predecisional documents that generally are not subject to the Freedom of Information Act or judicial review. As a result, they said, the assessments are not typically retained in a central file for a rulemaking or other decision and are therefore difficult to locate. For example, the Corps’ internal guidance states that takings implication assessments should be removed from the related administrative file once the agency has concluded a decision on a permit. In addition, agency officials noted that they do not maintain a master file of all takings implication assessments. In many cases, attorneys assigned to field offices conduct these assessments, and agency officials said that headquarters staff might not have copies in such cases. Nevertheless, with the exception of EPA, each agency provided us with some examples of written takings implication assessments. These assessments varied in form and the level of detail included. To determine if and how the four agencies documented their compliance with the EO when issuing regulatory actions, we reviewed information contained in Federal Register notices on takings implication assessments related to their proposed and final rulemakings, but had limited success. Specifically, 375 notices mentioned the EO in 1989, 1997, and 2002, but relatively few provided an indication as to whether a takings implication assessment was done.
Most of these rules included only a simple statement that the EO was considered and, in general, that there were no significant takings implications. In contrast, 50 specified that an assessment of the rule’s potential for takings implications was prepared, and of these, 10 noted that the rule had the potential for “significant” takings implications. Given the limited amount of information available from the agencies or in the Federal Register notices we reviewed, we could not fully assess the extent to which agencies considered the EO’s requirements. According to Justice data, 44 regulatory takings cases against the four agencies were concluded during fiscal years 2000 through 2002. Fourteen of these 44 cases resulted in government payments. In 2 of these 14 cases, the U.S. Court of Federal Claims decided in favor of the plaintiff, resulting in awards of just compensation totaling about $4.2 million. The Justice Department settled the 12 other cases, providing total payments of about $32.3 million. Of these combined 14 cases with awards or settlement payments, 10 related to actions of Interior, 3 to actions of the Corps, and 1 to an action of Agriculture. In general, the settled cases were concluded with compromise agreements, including stipulated dismissals or settlement agreements, reached among the litigants and approved by the applicable court. In these cases, the document usually stated that the parties had agreed to end the case with a payment to the plaintiff, but with no finding that a taking occurred. For example, in one case concluded in 2001 that alleged a taking of an oil and gas lease on federal land managed by Interior’s Bureau of Land Management, the litigants negotiated a stipulated dismissal that provided that a payment of $3 million be made to the plaintiffs to cover all claims.
However, the stipulated dismissal also provided that the final outcome should not be construed as an admission of liability by the United States government for a regulatory taking. In addition, the dismissal required that the plaintiffs surrender their interests in a portion of the lease. In the two cases with award payments, the court concluded that a taking had occurred and thus it awarded just compensation. Of the 14 cases with awards or settlement payments, the 10 Interior cases generally dealt with permits related to mining claims on federal lands managed by that agency or matters related to granting access on public lands. For example, one case involving mining claims resulted in the plaintiff receiving a settlement of almost $4 million. In another case, involving the denial of preferred access to a lake on land managed by the agency, the plaintiff received a settlement of $100,000. The Corps’ three cases generally related to a denial or issuance, with conditions, of wetlands permits for private property. One of these cases, concerning the filling of a wetland in Florida, resulted in a settlement payment of $21 million, accounting for more than half of the total compensation awards and settlement payments related to the 14 cases. The Agriculture case concerned the title to mineral rights in a national forest managed by the agency. The plaintiff received an award of $353,000 in this case. (Appendix I provides further information on just compensation awards or settlement payments, by agency, for cases concluded during fiscal years 2000 through 2002.) In addition to the cases concluded during fiscal years 2000 through 2002, Justice reported that an additional 54 regulatory takings cases involving the four agencies were still pending resolution at the end of fiscal year 2002. Of the 54 pending cases, 30 involved Interior, 14 involved the Corps, 7 involved Agriculture, and 3 involved EPA. 
The EO’s requirements for assessing the takings implications of planned regulatory actions applied to only 3 of these 14 cases. For the other 11 cases, the associated regulatory action either predated the EO’s issuance or the matter at hand was otherwise excluded from the EO’s provisions. Based on evidence made available to us, the relevant agency assessed the takings potential of its action in only one of the three cases subject to the EO’s requirements. In that case, the Corps denied a wetlands permit sought by the plaintiff to fill wetlands on the plaintiff’s property in order to develop a commercial medical center. The plaintiff brought suit against the agency, alleging that a compensable taking had occurred. In its takings implication assessment, the Corps had concluded that the permit denial did not constitute a taking because the applicant was still free to use the property for other purposes that did not involve filling the wetland. Therefore, the Corps concluded that the permit denial did not deprive the plaintiff of all viable economic use of the property. However, the case ended with a stipulated dismissal and a payment of $880,000 to the plaintiff. In the two other cases, based on information Interior provided to us, it appears that the EO would apply. Interior stated that, in hindsight, it appears that the EO may have applied in the first case, which involved a denial of applications to drill for oil and gas on federal land. Although a formal takings implication assessment was not prepared in this case, Interior stated there was a “good faith” discussion of its takings implications within the department. The case concluded with a settlement of $380,000 to the plaintiff for attorney fees.
In the second case, concerning anticipated and actual denial of oil and gas drilling permits for federal land, Interior was not certain whether the EO actually applied to the case in the first place, but believed that a takings assessment had been done and documented in a related environmental impact statement. However, Interior was unable to provide us a copy of this document. We believe that the EO applied and, lacking documentation, that no formal assessment was done. This case concluded with a settlement of $3 million for the plaintiff. Mr. Chairman, this completes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841. Doreen Feldman, Jim Jones, Ken McDowell, Jonathan McMurray, and John Scott made key contributions to this statement.
Each year federal agencies issue numerous proposed or final rules or take other regulatory actions that may potentially affect the use of private property. Some of these actions may result in the property owner being owed just compensation under the Fifth Amendment. In 1988 the President issued Executive Order 12630 on property rights to ensure that government actions affecting the use of private property are undertaken on a well-reasoned basis with due regard for the potential financial impacts imposed on the government. This testimony is based on our recent report on the compliance of the Department of Justice and four agencies--the Department of Agriculture, the Army Corps of Engineers, the Environmental Protection Agency, and the Department of the Interior--with the executive order (Regulatory Takings: Implementation of Executive Order on Government Actions Affecting Private Property Use, GAO-03-1015, Sept. 19, 2003). Specifically, GAO examined the extent to which (1) Justice has updated its guidelines for the order to reflect changes in case law and issued supplemental guidelines for the four agencies, (2) the four agencies have complied with the specific provisions of the executive order, and (3) just compensation awards have been assessed against the four agencies in recent years. Justice has not updated the guidelines that it issued in 1988 pursuant to the executive order, but has issued supplemental guidelines for three of the four agencies. The executive order provides that Justice should update the guidelines, as necessary, to reflect fundamental changes in takings case law resulting from Supreme Court decisions. While Justice and some other agency officials said that the changes in the case law since 1988 have not been significant enough to warrant a revision, other agency officials and some legal experts said that significant changes have occurred and that it would be helpful if a case law summary in an appendix to the guidelines was updated.
Justice issued supplemental guidelines for three agencies, but not for Agriculture, because the two agencies were unable to resolve issues such as how to assess the takings implications of denying or limiting permits that allow ranchers to graze livestock on federal lands managed by Agriculture. Although the executive order's requirements have not been amended or revoked since 1988, the four agencies' implementation of some of these requirements has changed over time as a result of subsequent guidance provided by the Office of Management and Budget (OMB). For example, the agencies no longer prepare annual compilations of just compensation awards or account for these awards in their budget documents because OMB issued guidance in 1994 advising agencies that this information was no longer required. According to OMB, this information is not needed because the number and amount of these awards are small and the awards are paid from the Department of the Treasury's Judgment Fund, rather than from the agencies' appropriations. Regarding other requirements, agency officials said that they fully consider the potential takings implications of their regulatory actions, but provided us with limited documentary evidence to support this claim. The agencies provided us with a few examples of takings implication assessments, stating that such assessments were not always documented in writing or retained on file. In addition, our review of the agencies' rulemakings for selected years that made reference to the executive order revealed that relatively few specified that an assessment was done and few anticipated significant takings implications. According to Justice, property owners or others brought 44 regulatory takings lawsuits against the four agencies that were concluded during fiscal years 2000 through 2002, and of these, 14 cases resulted in just compensation awards or settlement payments totaling about $36.5 million.
The executive order's requirement for assessing the takings implications of planned actions applied to only three of these cases. The actions associated with the other 11 cases either predated the order's issuance or were otherwise excluded from the order's provisions. The relevant agency assessed the takings potential of its action in only one of the three cases subject to the order's requirements. According to Justice, at the end of fiscal year 2002, 54 additional lawsuits involving the four agencies were pending resolution.
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. New and valuable information on the plans, goals, and strategies of federal agencies has been provided since federal agencies began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued soon after transmittal of the president’s budget, provide a direct linkage between an agency’s longer term goals and mission and day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. HUD encourages homeownership by providing mortgage insurance through its Federal Housing Administration (FHA) for about 7 million homeowners who otherwise might not have qualified for loans, as well as by managing about $508 billion in insured mortgages and $570 billion in guarantees of mortgage-backed securities. It also makes housing affordable for about 4 million low-income households by insuring loans for multifamily rental housing and providing rental assistance. In addition, it has helped to revitalize over 4,000 localities through community development programs. 
To accomplish these missions, HUD relies on the performance and integrity of thousands of mortgage lenders, contractors, property owners, public housing agencies, communities, and others to administer its programs. This section discusses our analysis of HUD’s performance in achieving the selected key outcomes and the strategies the agency has in place, particularly in regard to strategic human capital planning and information technology, for accomplishing these outcomes. In discussing these outcomes, we have also provided information drawn from our prior work on the extent to which the agency provided assurance that the performance information it is reporting is credible. HUD’s performance report shows that progress was made toward the outcome of increasing homeownership. For example, HUD reports that it exceeded its goals in fiscal year 2000 for increasing the national homeownership rate to 67.7 percent (compared with the target of 67.5 percent), homeownership in central cities to 51.9 percent (compared with the target of 51.0 percent), and the homeownership rate among families with incomes below the area median to 52.2 percent (compared with the target of 52 percent). However, HUD’s contribution to the achievement of those goals is not clear because (1) HUD’s discussion of external factors that affect homeownership in its annual performance and strategic plans said that HUD has limited control of homeownership rates, and (2) HUD did not achieve its goals for some of its programs that support homeownership. The report does not explain why HUD believes it contributed significantly to the overall increase in homeownership, even though it did not meet its programmatic goals. In discussing the external factors, the report noted that the record homeownership rate depended in large part on the overall economy, including low interest rates. 
This statement is consistent with others HUD has made that it is not a dominant player in the homeownership market and has limited impact on whether some national goals are met, including the homeownership goal. Nevertheless, the report also states that HUD's programs contributed significantly to the achievement of the increases in homeownership; how this contribution is distinct from the significant external factors that affect achievement of this goal remains unexplained. HUD did not achieve some of its specific programmatic goals. For example, the report cites the FHA, Government National Mortgage Association (Ginnie Mae), Community Development Block Grant (CDBG), and HOME Investment Partnership Grant programs as supporting HUD's homeownership objectives, but HUD did not achieve its goals related to those programs. For example, HUD reported that FHA did not meet its planned goal of processing 1.26 million single-family mortgage endorsements; actual performance was 921,283. Furthermore, Ginnie Mae did not achieve its goal to securitize 95 percent of single-family FHA and Veterans Administration loans; actual performance was 86.2 percent of eligible loans. The report does not attempt to explain the connection between these results, which were less than expected, and the reported increase in the overall homeownership rates. HUD's reasons for not achieving some of its goals are inadequate. For example, the reasons given for not meeting some goals appear to conflict with those cited for achieving other homeownership goals. That is, HUD states that part of the reason for not achieving the projected number of endorsements is higher interest rates that reduced demand for FHA-insured loans; lower interest rates were given as a reason for meeting the national homeownership goals stated above.
Although this apparent discrepancy may be related to the timing of the analyses or methodological issues, the presentation of the information appears inconsistent, and this inconsistency is not explained. Other data necessary to evaluate HUD’s contributions toward achieving the outcome are not currently part of the report. For example, the report says that the homeownership goal was met, but it does not mention the specific numeric target of 2.8 million new homeowners since 1998 that the fiscal year 2000 performance plan said was needed to achieve an increase in the national homeownership goal or how HUD’s 2.2 million mortgage endorsements processed since fiscal year 1998 relate to that target. The report also does not present information on factors that negatively affect homeownership, such as defaults or foreclosures, that would help evaluate whether home purchasers are able to retain the homes they buy using HUD’s programs and therefore the extent to which HUD’s programs contribute to increasing homeownership. The report did not make an overall assessment of the impact of its fiscal year 2000 performance on fiscal year 2001 performance; however, it shows that HUD revised some goals for fiscal year 2001 on the basis of its fiscal year 2000 performance. In the performance report, HUD discusses the programs that support its overall homeownership objective but does not discuss its strategies, plans, actions, or time frames for achieving its unmet goals for fiscal year 2000. The fiscal year 2002 performance plan discusses strategies to increase homeownership that are clear and reasonable and generally describe the intended result. The performance plan also includes strategies that HUD will pursue to help ensure home retention and encourage responsible homeownership, which are important to sustaining homeownership levels, although no specific goals or measures were established related to those initiatives. 
Neither the performance report nor the performance plan discussed strategic human capital management or information technology issues as part of HUD's strategies to address the specific outcome. The performance plan lists various evaluations and research related to the strategic goal to increase the availability of affordable housing, but it does not specifically discuss how those evaluations will be used to identify or improve strategies in the future. This outcome is related to one of HUD's management challenges, and additional information is discussed under the section on management challenges in this report and in appendix I. HUD's performance report indicates that it made some progress toward the outcome of increasing affordable, decent, and safe rental housing. However, the report shows that HUD was less successful in demonstrating that it was able to increase the supply of affordable housing relative to the number of people who need it most. The report shows that in fiscal year 2000 HUD was most successful in achieving the output goals that show the units inspected or properties developed. For example, HUD met its goals to (1) increase the share of units that meet physical and financial standards by 1 percent (83 and 85 percent for public housing and multifamily development, respectively, exceeding the fiscal year 1999 levels of 62.5 and 77.3 percent); (2) develop properties for elderly and disabled households (completing initial closings for 278 properties, exceeding the goal of 226); and (3) process multifamily mortgages (endorsing 579 mortgages, exceeding the goal of 400). HUD was less successful in demonstrating results that show how its actions increased the overall quantity of affordable and decent housing compared with the number of low-income households.
For example, HUD has five outcome measures related to improving the ratio of affordable units to low-income people, but HUD was either unsuccessful in meeting the targets or does not yet have the data to show the results for the measures. HUD said in the report that the physical quality of rental housing has improved greatly, but it also said that housing has become less affordable overall, particularly for poor households. HUD states that for extremely low-income households, the need for affordable rental housing has actually increased. The explanations given for not meeting the goals varied but were generally reasonable given the significant external factors that also affect achievement of this outcome. For example, two factors that HUD cited were grantees' changing priorities and rent increases exceeding inflation due to the strong economy. The reliability of HUD's data is generally not discussed for any outcome in the performance report; a footnote indicates that data issues are discussed in the annual performance plan. In the introduction to the report, HUD acknowledges that the performance data collection systems and controls over data quality remain areas that need attention. While we recognize that performance data quality is an area that HUD is addressing, this information does not yet increase our confidence regarding the reliability of the performance data, as discussed in previous reports. We also identified one example where the performance data cited in the report might not be accurate, based on our work in the rental assistance area. The preliminary results of our ongoing work with the Office of Multifamily Housing Assistance Restructuring (OMHAR) indicate that HUD may not have achieved the results it reported for the mark-to-market program in fiscal year 2000.
The mark-to-market program, administered by OMHAR, seeks to retain affordable rental housing and reduce Section 8 assistance costs by reducing excessive rental subsidies paid to assisted properties and restructuring their mortgages where appropriate. Under a measure that “seventy-five percent of multifamily mortgages restructured under the Mark to Market program are closed within 12 months of PAE acceptance for restructuring,” HUD reports that during fiscal year 2000 OMHAR exceeded the target by completing deals on 494 properties, or 82.7 percent of the 597 properties eligible for restructuring under the program. However, our analysis of data provided by HUD indicates that OMHAR completed only four mortgage restructurings during fiscal year 2000. Two of these mortgage restructurings were completed within 12 months of acceptance by the participating administrative entities for restructuring. OMHAR may be including rent restructurings (deals in which it reduces property rents to market levels but does not carry out a mortgage restructuring) and rent comparability reviews (activities in which OMHAR determines whether current property rents are above or below market) as part of the totals it is reporting. The performance report includes an overall discussion of the general strategies, programs, and external factors that affect HUD's efforts to ensure that affordable rental housing is available for low-income households, but it does not discuss strategies, plans, or time frames for achieving the specific unmet goals for fiscal year 2000. The report discusses evaluations that HUD has done to improve utilization of Section 8 housing vouchers, accuracy of rent determinations for assisted households, and accuracy of subsidy amounts paid for assisted households. However, the information would have been more useful if HUD specifically discussed how the results of the evaluations would be used to identify or improve strategies for achieving the goals in the future.
HUD reports that the studies indicate that (1) further attention by management is necessary to better utilize Section 8 housing vouchers; and (2) HUD continues to pay excess rental subsidies, partly because tenant income is underreported and partly because of errors made by public housing agencies, owners, and agents responsible for program administration. The fiscal year 2002 annual performance plan discusses HUD’s strategy to strengthen its existing programs and improve usage of Section 8 housing vouchers and public housing capital funds. The specific strategies HUD outlines in the plan support these objectives and the measures related to the outcome, although they do not discuss specific tools or risk assessments that HUD will use to implement the strategies and achieve the objectives. The performance plan refers to HUD’s coordination with other federal agencies to increase affordable rental housing. However, HUD does not discuss specific coordination activities or other agencies’ specific contributions to HUD’s goals. Neither the performance report nor the fiscal year 2002 performance plan discusses human capital and information technology as strategies to achieve this outcome. This outcome is related to one of HUD’s management challenges, and additional information is provided under the section on management challenges in this report and in appendix I. HUD’s performance report indicates that the Department made some progress toward achieving this outcome. For example, HUD reports that it met its goals of increasing jobs and reducing poverty in cities, increasing capital in underserved neighborhoods, reclaiming brownfields (contaminated commercial and industrial land), reducing crime, and targeting grant funds to the most needy. HUD also noted a decrease in the number of “doubly burdened” cities during fiscal year 2000. The number of cities that were experiencing both a population loss and a high poverty rate declined from 1 in 7 in 1999 to 1 in 8 in 2000. 
However, the report also shows problems with the performance measures and data collected for specific programs that make it difficult for the reader to determine what HUD contributed to the achievement of the goals. HUD reports it does not have data for one performance measure and thus could not report results; two performance measures were related to programs that were never authorized, and HUD did not report results. Even for one measure that HUD achieved, the report notes that the measure was not a reliable indicator of progress because it fluctuates from year to year. HUD reports that half of the selected measures related to this outcome were revised or deleted in future performance plans. Some of the reasons given include that the measures were revised or deleted to eliminate comparisons between cities and suburbs or to improve performance measurement, such as to track performance over a 3-year period to obtain a more reliable indicator of progress. Additionally, HUD's discussion of external factors notes the significant challenges it faces with this outcome. For example, the economic issues militating against this outcome, such as mismatches in the location of jobs and available workers, concentrations of poverty, and grantees' discretionary use of block grants (they may choose to use the funds in other ways that may not contribute to specific HUD objectives), may affect the results that HUD is able to achieve. The report does not discuss how those problems will be mitigated. For this outcome, the discussion of the measures more clearly acknowledges data reliability issues and shows HUD's efforts to address them. This may be a result of a report from the OIG in March 2001 that identified several weaknesses in performance data from HUD's community development programs, among others, which support much of HUD's effort to improve community economic vitality and quality of life.
For example, the OIG reported that projections and estimates were used to formulate program performance measurements rather than actual grantee accomplishments. In discussing the strategies that support this outcome, the performance report discusses the contributions of some of HUD’s major programs, such as CDBG, Empowerment Zones/Enterprise Communities, FHA, and the Public Housing Drug Elimination Grant Program. As previously discussed, however, it does not address whether and how unmet goals will be achieved. As noted, this may be because about half of the measures will be significantly revised or deleted in future years. The fiscal year 2002 performance plan discusses various strategies to promote relationships with communities, other federal agencies, industry groups, and nonprofit organizations to achieve this outcome, but it does not describe specific tools or means of doing this. For example, the plan includes some strategies that state HUD will encourage communities to address economic issues. The plan does not describe how HUD will do this, even though it notes that HUD’s direct impact on specific and measurable results is somewhat limited. Neither the performance report nor the fiscal year 2002 performance plan discusses human capital and information technology as strategies to achieve this outcome. The plan does not discuss the use of evaluations to identify or improve strategies for achieving future goals. As we reported last year, HUD’s strategic and annual performance plans do not contain strategic goals or objectives for reducing fraud, waste, and error in HUD programs. However, HUD has a strategic goal to “ensure public trust” to improve its operations and address its management challenges. Under this strategic goal, HUD includes performance measures to achieve more accurate subsidy payments, better data, and better quality housing, thereby reducing fraud, waste, and error. 
HUD’s progress toward meeting the outcome of less fraud, waste, and error during fiscal year 2000 is not clear based on the results for these measures. The performance report shows that HUD met its targets for some significant measures, such as increasing the share of assisted housing units that met HUD housing standards, the number of grantees reviewed, and the reporting of tenant information in a data system. However, the report also shows that HUD did not achieve its goals for other measures, such as setting baselines for reducing the share of public housing and Section 8 units managed by troubled housing authorities, reviewing single-family appraisals, and developing a data quality plan. This information may result in a somewhat more negative view of HUD’s accomplishments in reducing fraud, waste, and error than is warranted because the 1-year assessment does not consider the progress that HUD has made in recent years to overcome some of its long-term management deficiencies, and its plans for future improvements. The performance report generally provides reasonable explanations of why some goals were not met and how they would be revised in the future. For example, although we report that HUD did not meet several performance measures based on the targets set in the fiscal year 2000 performance plan, HUD reports that it considers the targets “substantially met” because the performance was very close to the target. HUD reported that it “substantially” met the targets for determining the number of public housing units managed by troubled housing authorities, determining the number of multifamily properties with substandard financial management, and matching tenant income data to tax and Social Security records. The report states that these were one-time measures to track the implementation of specific processes designed to improve HUD operations. Because implementation was successful, HUD states that these measures would not be reported on in the future.
Additionally, the plan states that some of the measures were dependent upon two of HUD’s assessment activities that were delayed, one of which was delayed at the request of the Congress. The report noted that over half of the performance measures used in fiscal year 2000 would be significantly revised or deleted in 2001 or 2002. The performance report discusses strategies to achieve HUD’s goal of ensuring public trust and links some of those strategies to the achievement of specific performance measures related to this outcome. It does not, however, specifically address plans and time frames for achieving unmet goals. As stated above, several of the unmet measures were one-time measures to track implementation of systems or processes. A substantial number of others will be revised in future years on the basis of performance or data issues, so there would be no need to discuss strategies for achieving these goals. The performance report includes some human capital and information technology strategies under its discussion of external factors that affect achievement of its efforts to restore public trust. The report notes that HUD will continue implementing a resource estimation and allocation process to aid in managing workload, improve risk-based monitoring techniques, continue to push for simplification of rental subsidy program requirements, and work with HUD managers to ensure that systems requirements and data quality controls are properly established. The strategies discussed in the 2002 performance plan for this outcome are somewhat better than those for the other outcomes because they include more specific steps to be taken. They also include some actions the Department plans to take if public housing authorities, lenders, or others do not comply with policy.
However, we noted that the plan could be improved by including strategies, goals, or measures that would specifically address the prevention and detection of fraud, waste, and errors in HUD programs. This would also include a risk assessment process to identify the most vulnerable programs and institute internal controls to prevent and detect occurrences. The performance plan’s strategies also address some of HUD’s human capital issues, such as training employees, completing its resources estimation and allocation process, developing a long-term staffing strategy to deal with expected retirements, and continuing to develop a performance-based appraisal process. However, the plan does not include measures to determine the effectiveness of those strategies. The strategies also include efforts to improve HUD’s information technology, such as improving equipment for higher productivity, improving data quality, and increasing citizen access to information through electronic means such as HUD’s Web site. Both we and the OIG have raised concerns about HUD’s strategic human capital management and information technology issues, and additional information is provided under the section on management challenges in this report and in appendix I. The annual performance plan lists a number of significant evaluations planned or currently under way pertaining to the goal of ensuring public trust, such as an evaluation of aspects of HUD’s 2020 management reform and efforts to improve the quality of assisted housing. The studies will be useful for evaluating HUD’s overall progress in improving its operations and customer service. For the selected key outcomes, this section describes major improvements or remaining weaknesses in HUD’s (1) fiscal year 2000 performance report in comparison with its fiscal year 1999 report, and (2) fiscal year 2002 performance plan in comparison with its fiscal year 2001 plan. 
It also discusses the degree to which the agency’s fiscal year 2000 report and fiscal year 2002 plan address concerns and recommendations by the Congress, GAO, the OIG, and others. HUD revised its performance report for fiscal year 2000 consistent with the new format developed for its annual performance plans beginning in fiscal year 2000, which included revised goals, objectives, and many new measures. This improved the linkage to the annual performance plan and, therefore, the readability and clarity in presentation of the fiscal year 2000 performance report, compared with the fiscal year 1999 report. HUD combined the performance report with its fiscal year 2000 annual accountability report, which consolidated a substantial amount of important information on HUD’s operations and activities into one document. The report summarizes HUD accomplishments for each strategic goal and includes discussions of the major programs and the most significant performance measures, general strategies, and external factors that affect the achievement of the strategic goals. The report also includes a summary of the management problems identified by the OIG and HUD’s response to each of the remaining concerns; however, the report includes only a limited discussion of the current management challenges we identified. In our opinion, the performance report could be further improved if it included a discussion of resource issues, an overall assessment of HUD’s progress toward achieving its strategic goals for the fiscal year, and a more specific assessment of the completeness and reliability of performance data. The performance report did not provide an overall assessment of HUD’s progress toward achieving its strategic goals and objectives for fiscal year 2000. The report shows that HUD did not achieve many of the results expected, but no assessment was made on the significance of not achieving the desired results to the overall achievement of the strategic goals. 
As discussed above, where targets were not met, usually no explanation was provided of what or whether anything could be done to meet those goals in the future. For example, even in cases where the rise of interest rates or other economic factors were given as reasons for not achieving the desired results, a more comprehensive assessment would have been useful to either identify any other underlying problems or demonstrate that the result was reasonable given the influence of external factors. In general, some insight into what kinds of problems were experienced would be useful to the Congress and other decisionmakers as they consider HUD’s budget and other requests. Although the performance report made some additional improvements in discussing data issues, more needs to be done to improve the usefulness of the performance information. The Reports Consolidation Act of 2000 requires that the GPRA performance report contain an assessment of the completeness and reliability of performance data. In the introduction to the performance report, HUD states that it needs to focus on improving performance data collection systems and controls over data quality, particularly in the formula and discretionary grant programs. However, the report does not provide an overall assessment of the performance data. The HUD OIG reviewed a sample of the performance data reported in 1999 and found problems with its reliability. We noted examples where the fiscal year 2000 performance report seemed to address the concerns raised by the OIG. For example, the report articulates performance measures for which the results are “snapshots” rather than performance for the complete fiscal year, and are estimates rather than actual performance. However, we identified discrepancies in other information in the report, suggesting that more work remains to be done. For example, as in the OMHAR example discussed above, the data in the report do not agree with other data HUD provided to us. 
We identified other performance measures in the report where the methodology for charting progress changed or a baseline that was supposed to have been set was not, but no information was provided on why such changes were made or why they were significant. The fiscal year 2002 performance plan continues to improve over the 2001 performance plan. HUD revised its fiscal year 2002 performance plan to reflect the updated strategic plan for fiscal years 2000 to 2006, adding one new strategic objective and revising several others to cover the Department’s activities more fully. The plan also includes additional information on the Department’s resources, expands the discussion of evaluations, creates a new type of measure that will be used for monitoring purposes only, and includes information on actions to address HUD’s human capital challenges. For example, the plan refers to research being done on the outcomes in areas of concentrated CDBG investment that will be used to shape the performance measures. The plan also adds a measure that will track implementation of HUD’s resource estimation and allocation process. However, we noted that the discussion of how resources were linked to the achievement of performance goals, the credibility of performance data, and the coordination strategies could be improved. Additionally, goals or measures for HUD’s management challenges would be useful for tracking HUD’s progress toward resolving those specific issues. In its performance plan, HUD includes a table for each strategic goal that shows the programs, budget authority, and staff levels that HUD estimates generally support the achievement of that goal. Although this table is a step toward showing how budgetary and human resources relate to achieving goals, the information becomes somewhat confusing because HUD also includes a second table for the underlying strategic objectives that conflicts with the first table. 
For example, one table shows that HUD allocated $1.585 billion of CDBG funds to the overall strategic goal of increasing the availability of decent, safe, and affordable housing in fiscal year 2002. A table for an underlying objective to increase homeownership shows that an estimated $4.802 billion of CDBG funds is allocated to achieve the homeownership objective in fiscal year 2002. Hence, it seems that the underlying objective requires a larger portion of the budget than the primary strategic goal that it supports. A footnote for this table indicates that allocations at the underlying objective level are not currently available, but the table would have been clearer if HUD either used the estimated allocations by program that were in the first table or left out the information completely until better estimates can be developed. As discussed above, HUD has made it a priority to address the reliability of its performance data. Although the plan continues to improve the discussion of limitations to the performance data and discusses HUD’s plans to improve its data quality, data credibility is an area we will continue to monitor. HUD also notes that it has plans to review concerns about performance measure data in response to a recommendation by the National Academy of Public Administration. In a July 1999 study of HUD’s compliance with GPRA, the Academy recommended that HUD develop a plan that outlines a clear, departmentwide data quality goal with minimally acceptable data quality standards for key elements, such as timeliness, reliability, and accuracy. The Academy noted that HUD’s quality assurance approach did not have data quality standards, a plan for verifying data quality, or assigned roles and responsibilities for those involved in the quality work. We have previously discussed with HUD the usefulness of including goals and performance measures in the performance planning documents that show HUD’s progress toward resolving its management challenges.
The 2002 annual performance plan takes steps toward this by adding measures and strategies that address some aspects of the management challenges, but goals or measures that focus on the specific challenges that we and the OIG have identified would be useful. For example, we have identified HUD’s single-family insurance program and rental assistance programs as high risk. Measures designed to address HUD’s progress on the issues related to the challenges, such as monitoring lenders, overseeing appraisers, managing single-family properties, or reducing excess subsidy payments, would be useful in assessing HUD’s progress toward resolving the management challenges. The one area in which HUD appears to have weakened since the 2001 performance plan is in its coordination with other federal agencies. The 2002 performance plan does not discuss some coordination activity that was in the 2001 report. For example, under the objective to increase homeownership, HUD does not mention its coordination with the Department of Agriculture’s (USDA) or the Veterans Administration’s housing programs. Also, we reported on the overlap between USDA’s Rural Housing Service and FHA’s single- and multifamily programs and suggested that Congress consider requiring the two agencies to examine the benefits and costs of merging those programs. In its response to our report, HUD disagreed with the suggestion to merge the single-family programs but said opportunities to improve delivery of rural housing services should be explored. These activities are not discussed in the performance plan. Additionally, some indicators that were previously identified as potential interagency indicators are not so designated in the 2002 performance plan. The plan does not provide information on HUD’s reason for dropping these items or discuss the impact, if any, on HUD’s achievement of its goals. 
We have also reported that interagency coordination on community growth issues is important and that opportunities exist for federal agencies to improve their coordination. We reported that the large number of federal programs that fund economic development activities and the large number of federal agencies that administer the programs can reduce communities’ flexibility in pursuing reinvestment projects. HUD’s programs were among those most frequently mentioned as helpful to community revitalization, but the plan makes limited mention of HUD’s coordination activity with other agencies on these interagency issues. In general, even where coordination activity is discussed, the plan does not discuss the specific contribution of other agencies to the achievement of HUD’s goals. We have identified two governmentwide high-risk areas: strategic human capital management and information security. Regarding human capital, we found that the fiscal year 2000 performance report does not explain HUD’s progress in resolving human capital challenges. However, it does provide information on the steps HUD is taking to implement a resources estimation and allocation process and develop a framework to begin addressing human capital issues. We found that HUD’s fiscal year 2002 annual performance plan does not include specific goals or measures to address strategic human capital management; however, it has three performance measures pertaining to human capital issues. These three measures are that (1) HUD employees are more satisfied with the Department’s performance and work environment, (2) HUD will implement the new resource estimation and allocation process, and (3) HUD will improve the diversity of the workforce. With respect to information security, the agency’s performance report discusses the status of a new Enterprise Security Program that the Chief Information Officer is creating. 
The report states that this program will provide adequate security measures and safeguards to protect information resources from unauthorized access, use, modification, and disclosure. Although the performance report does not specifically address this high-risk area, it states that during fiscal year 2000, HUD developed policies, wrote a handbook, and created a training program for a Critical Infrastructure Assurance program. We found that HUD’s performance plan does not have goals and measures associated with information security, although we have had open recommendations on this issue since 1994. In January 2001, we reported on three long-standing, major management challenges facing HUD. We concluded that continued improvements were needed to reduce HUD’s single-family insurance risk and to ensure that HUD’s rental housing assistance programs are used effectively and efficiently. We designated these two programs as high-risk areas for HUD. We also reported that HUD needs to resolve its information and financial management systems and human capital issues. In reviewing HUD’s fiscal year 2000 annual performance report, we found that the report does not provide goals or measures for addressing the management challenges we identified, but it does state that the high-risk areas are a top priority for the Secretary. The report also includes some measures that relate to the issues we have raised in the single-family, rental assistance, and financial management systems and human capital areas. For example, the report has measures to increase the use of loss mitigation tools to reduce the number of foreclosures (which reduces FHA’s insurance costs) and to increase the net recovery on sales of single-family real estate owned. However, the report does not include goals or measures related to other aspects of the single-family program that we have been concerned about, such as lender oversight. 
Two of these management challenges are related to outcomes discussed in this report (increased homeownership and increased affordable, decent, and safe rental housing). HUD’s fiscal year 2002 performance plan does not contain goals and measures to resolve the three management challenges we identified; however, the plan contains some measures related to aspects of the single-family, rental assistance, and information and financial management and human capital issues. For example, for the rental assistance management challenge, the public trust strategic goal includes measures to increase the share of assisted housing units that meet physical and financial standards and to improve tenant income verification. The performance plan also includes a separate section under its public trust strategic goal that discusses HUD’s management challenges. The section discusses the management challenges identified by us and states that addressing the long-standing management challenges is a top priority for the Secretary in fiscal year 2002. It also discusses HUD’s recent progress and the activities planned or being implemented that are expected to yield future improvements. The plan notes that the Department has corrective action plans to address the management challenges identified by us. It also states that HUD will use the performance measures established for its strategic goal of ensuring public trust to track the results of its management improvements and identify where further improvements are needed, although it does not identify those measures. However, as discussed above, strategies, goals, or measures specific to resolving the management challenges would improve the plan. See appendix I for a summary of the major management challenges and related measures. HUD prepared a performance report that is much improved from last year and made additional improvements to its fiscal year 2002 performance plan. Generally, the documents are understandable and well organized. 
However, HUD did not achieve all of the performance measures for the four key outcomes. For some performance measures that were achieved, specifically for the homeownership outcome, the report did not clearly explain how HUD’s programs contributed to achieving its goals, given the significant external factors discussed. The report would be more useful to Congress and other decisionmakers if it more clearly articulated HUD’s overall progress toward achieving its goals, including identifying the specific contributions it makes distinct from external factors or other contributors. HUD has continued to revise and improve the performance plan, but the quality of strategies for achieving goals varied by outcome and the plan generally did not include strategies to mitigate the external factors; resource information was somewhat confusing; and some coordination discussions from prior years were eliminated. The performance plan would benefit from further discussion of HUD’s strategies for achieving the goals; the estimated resources needed to achieve the goals; and coordination with external partners, specifically other federal agencies. Although the plan and report discuss HUD’s management challenges and contain some performance measures that pertain to those challenges, developing specific performance measures and strategies would serve to focus HUD’s efforts to improve its management and ensure accountability in its programs. The performance plan would be more useful if it included goals, measures, and strategies to address HUD’s management challenges and its efforts to reduce fraud, waste, and error, along with a risk assessment process to identify the programs most vulnerable to fraud, waste, and error. Finally, the report includes statements about HUD’s plans and efforts to improve the accuracy of its data. 
To increase confidence that the results HUD reported accurately and fairly represent its achievements and to fully comply with the Reports Consolidation Act, the performance report should contain a specific assessment of the completeness and reliability of HUD’s performance data. We recommend that the Secretary of Housing and Urban Development consider the following improvements to future performance plans and reports: Include sufficient information in the performance report to evaluate HUD’s accomplishments, including an overall assessment of HUD’s progress toward achieving its goals, identification of HUD’s specific contributions to achieving the goals, and determination of the contributions of other entities to HUD’s goals. Continue improving the performance plan by better estimating the resources necessary to achieve the goals, articulating strategies to achieve the goals and mitigate the problems encountered, and further discussing coordination strategies with other federal agencies. Include sufficient goals, measures, and strategies to demonstrate HUD’s efforts and progress in addressing its management challenges. In support of HUD’s efforts to continue improving its management, future performance plans would benefit from the inclusion of goals, measures, or strategies to assess the prevention and detection of fraud, waste, and error, as well as a risk assessment process to identify the most vulnerable programs. Include an assessment of the completeness and reliability of performance data that clearly articulates the implications of relying on that data to evaluate HUD’s achievements. 
As agreed, our evaluation was generally based on the requirements of GPRA; the Reports Consolidation Act of 2000; guidance to agencies from the Office of Management and Budget (OMB) for developing performance plans and reports (OMB Circular A-11, Part 2); previous reports and evaluations by us and others; our knowledge of HUD’s operations and programs; our identification of best practices concerning performance planning and reporting; and our observations on HUD’s other GPRA-related efforts. We also discussed our review with HUD’s OIG and obtained written comments from HUD. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member of the Senate Committee on Governmental Affairs as important mission areas for the agency, and three of the four reflect the outcomes for HUD’s major programs or activities. The major management challenges confronting HUD, including the governmentwide high-risk areas of strategic human capital management and information security, were identified by us in our January 2001 Performance and Accountability Series and High-Risk Update. HUD’s OIG identified its top management challenges in December 2000. We did not independently verify the information contained in the performance report and plan, although we did draw from our other work for assessing the validity, reliability, and timeliness of HUD’s performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards. We provided HUD a draft copy of this report for review and comment. HUD generally agreed with the information presented in our report. Overall, HUD found the report to be balanced and useful for both recognizing the significant progress that the Department has made and pointing out areas where more progress is needed. 
In general, HUD concurred with the need for further analysis of the Department’s role in meeting specific performance indicators, as well as larger strategic objectives and goals, specifically in the homeownership area, and with the need to develop more representative performance measures, without necessarily increasing the total number of indicators. While agreeing with most of the report, HUD identified some areas that it believed should be clarified. For example, HUD disagreed with our statement that it generally did not provide strategies for achieving its unmet homeownership goals because some of the goals were not achieved for reasons beyond its control and some information on prospective strategies that would address the issue was discussed. We clarified the report to address most of the issues that were raised by HUD. In addition, we incorporated HUD’s technical comments where appropriate. HUD’s comments and our detailed responses are in appendix II. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees; the Secretary, Department of Housing and Urban Development; and the Director, Office of Management and Budget. Copies will also be made available to others upon request. If you or your staff have any questions, please call me at (202) 512-7631. Key contributors to this report were Shirley L. Abel, Steven L. Cohen, Jeannie B. Davis, Mark H. Egger, David G. Gill, Danielle P. Hollomon, Bonnie J. McEwan, Sally S. Moino, John T. McGrail, and Kirk D. Menard. The following table identifies the major management challenges confronting the Department of Housing and Urban Development (HUD), which include the governmentwide high-risk areas of human capital and information security. 
The first column lists the management challenges that we and/or HUD’s Office of Inspector General (OIG) have identified. The second column discusses the progress, as discussed in its fiscal year 2000 performance report, that HUD made in resolving its challenges. The third column discusses the extent to which HUD’s fiscal year 2002 performance plan includes performance goals and measures to address the challenges that we and HUD’s OIG identified. We found that HUD’s performance report discusses the management challenges identified by the OIG, as authorized by the Reports Consolidation Act, but does not specifically discuss the agency’s progress in resolving the management challenges we identified in January 2001, other than to note that they were a high priority for HUD management. Of the agency’s 15 management challenges, its performance plan had measures that were directly related to 10 of the challenges, had measures that were indirectly applicable to 2 challenges, and had no goals or measures related to 3 other challenges. The following are GAO’s comments on the Department of Housing and Urban Development’s letter dated June 21, 2001. 1. To address HUD’s concerns that our statements about strategic human capital management and information technology strategies appear contradictory, we clarified the report to emphasize that strategic human capital management and information technology were not discussed as strategies related to the programmatic outcomes selected for this review. However, we recognize they are discussed as part of HUD’s efforts to address its management challenges and achieve its goal of ensuring public trust as shown in other parts of the performance report. 2. We believe the report recognizes that HUD has included strategies and performance measures that address aspects of reducing fraud, waste, and error in its programs, and our analysis of HUD’s progress included most of the measures HUD cites in its letter. 
Our intent was not to say that HUD did not have such measures but that our analysis indicated that the results did not present a clear picture of HUD’s progress toward achieving this outcome. Our analysis of the selected measures, including most of those mentioned in HUD’s comments, showed that for all but five, HUD set baselines, modified the measure, did not achieve the results expected, or will not set baselines until fiscal year 2002, so that performance related to those measures cannot yet be assessed. We clarified the report to emphasize that HUD’s progress toward reducing fraud, waste, and error is not clear based on the results reported as of fiscal year 2000. However, as stated in our report, we believe that the strategies, goals, and measures (and therefore the plan) could be improved by specifically including a focus on preventing and detecting fraud and errors. Additionally, this would be in line with our requester’s interest in seeing an outcome related to reducing fraud, waste, and error in HUD programs. This report contains a reference to our exposure draft on this subject; we have agreed to discuss with HUD ways of developing goals and measures that more specifically measure HUD's progress in reducing fraud, waste, and error in future performance plans. 3. We recognize that in some places the performance report notes where unmet measures would be revised or deleted, or other actions would be taken in the future, although we did not specifically list all of those situations. However, there are also places where unmet goals provide the opportunity to evaluate HUD’s performance and future actions, particularly where significant external factors affect its ability to achieve its goals as planned. These external factors do not release HUD from the responsibility to develop strategies to mitigate the extent of those external factors or to determine whether or how the goals and measures should be revised to compensate for those factors. 
In fact, the extensive external factors the Department faces (also see comment 4) in trying to achieve its goals make it especially important that HUD have these strategies to address its unmet goals, and evaluations may be useful for that purpose. This comment specifically refers to our observation about the homeownership outcome, for which HUD notes the external factors are particularly complex. 4. We agree that the issue of determining the impact of an agency’s role in achieving goals for which there are many external partners and economic influences is complex. We have agreed to discuss this issue with HUD, and we also encourage the Department to consult with OMB and the Congress on the most appropriate strategies and means for the Department to measure the impact of its programs. (See also comment 3.) 5. We appreciate the Department’s concerns about accurately portraying data reliability issues, and we clarified the discussion to (1) emphasize that HUD also recognizes that more work remains to be done in this area and (2) note that this is one example of our concerns. However, we did not change the example because the performance report does not explain the substantial difference between what the measure says it will achieve and the results reported. The report is written in such a way that only those familiar with this program would detect the discrepancy and understand the possible reason for the difference. Until we are assured that the performance reports show results based on accurate data for all the key outcomes and that the results presented are a fair representation of HUD’s activities, we will continue to report concerns about the reliability of the performance data. 6. We clarified the report, where appropriate, to make it clear in the examples cited that we are not saying that HUD did not achieve specific numeric goals but that HUD did not achieve its goals to set baselines in fiscal year 2000 as planned. 
For the purposes of our analyses, we considered a goal to set a baseline by the same criteria as other measures that were to achieve a specific numeric target. We believe this is an important issue because until baselines are set, HUD cannot measure its progress toward achieving its goals. 7. The intent of our observation was not necessarily to recommend that HUD develop more measures but to recommend that it develop goals and measures that focus more on resolving the specific management challenges. We clarified the report to emphasize that the goal is to develop measures that effectively address the specific management challenges. We have agreed to consult with HUD on possible ways of developing goals and strategies that reflect the management challenges; but we also suggest that HUD consult with the Congress and OMB on this issue, particularly since OMB’s Circular A-11 requires goals that address agency management challenges. Such goals, according to OMB, often will be expressed as milestone events for specific remedial steps. 8. We clarified our report to note that the performance report includes measures that relate to aspects of HUD’s 2020 Management Reform, such as for the new centers, but it generally does not include goals that relate to the specific issues raised by the OIG. 9. Our observation on HUD’s performance measure to obtain clean audit opinions was included in the report to explain why we did not include this measure in our analysis of HUD’s efforts to address its management challenge related to financial systems. Although we do not consider it a measure that helps evaluate HUD’s progress toward improving its systems or addressing its management challenges, this should not be construed as a criticism of HUD’s decision to include the measure in its GPRA documents, if it finds the measure useful. 
Achievement of a clean audit opinion is an important milestone; however, until HUD addresses the Inspector General’s concerns about HUD's integrated financial management systems and FHA’s general ledger, it will not be in a position to provide reliable, timely information on a day-to-day basis to support ongoing management and accountability.
This report reviews the Department of Housing and Urban Development's (HUD) fiscal year 2000 performance report and fiscal year 2002 performance plan to assess the agency's progress in achieving selected key outcomes important to the agency's mission. GAO found that although HUD did not attain all of the goals pertaining to the selected key outcomes in its fiscal year 2000 annual performance plan, the performance report shows that HUD made some progress toward achieving the outcomes. However, HUD's progress varied for each outcome, and the information presented in the performance report does not always provide enough information for the reader to evaluate HUD's contribution to achieving the outcome. In general, HUD's strategies for achieving these outcomes appear to be clear and reasonable.
DOD’s contract administration and payment processes involve numerous organizations, including 23 DFAS offices; the contractors that perform work and bill the government; the Defense Contract Management Agency (DCMA), which administers most of DOD’s largest procurement contracts; military components’ project and contracting offices; and DCAA, which reviews contractors’ records, internal controls, and billing systems. The contract administration and payment processes have been described in our prior reports. If DCMA administers a contract, DFAS Columbus makes payments using the Mechanization of Contract Administration Services (MOCAS) system, and DOD refers to the disbursements as “contract pay.” In fiscal year 2001, DFAS Columbus disbursed about $78 billion for over 300,000 contracts managed in MOCAS. Contracts that DCMA does not administer are paid using other systems at the DFAS offices. DOD refers to these contract disbursements as “vendor pay.” The Navy’s shipbuilding and repair contracts are viewed as vendor pay because they are not managed by DCMA or paid by DFAS Columbus using MOCAS. In fiscal year 2001, DFAS processed over 10 million vendor pay invoices, valued at over $59 billion. Large contractors with numerous DOD contracts and locations can receive both contract pay and vendor pay. Effective April 1, 2001, DFAS contract pay and vendor pay management were consolidated at DFAS Columbus. Contract payments involve payments for the delivery of goods and services and financing payments. Financing payments include (1) progress payments to cover a contractor’s costs as they are incurred during the construction of facilities or the production of major weapons systems and (2) performance-based payments that are based on the accomplishment of particular events or milestones—typically used on production contracts. 
When contractors deliver items and submit invoices for the delivered items, DFAS Columbus deducts financing payments from the prices of the delivered items, or “liquidates” the financing payments, based on a predetermined liquidation rate. Liquidation rates may be adjusted when costs are higher or lower than projected. Contract modifications can often occur due to changes in liquidation rates, quantities ordered, and production schedules. These modifications and other contract administration actions are managed by DCMA through coordination with the contractors. Our prior reports on contractor overpayment problems have highlighted that (1) Defense contractors were refunding hundreds of millions of dollars to DFAS Columbus each year, (2) DFAS Columbus had made overpayments due to duplicate invoices and paid invoices without properly and accurately recovering progress payments, (3) contract administration actions had resulted in significant contractor debt or overpayments, (4) DOD and contractors were not aggressively pursuing the timely resolution of overpayments or underpayments when they were identified, (5) DFAS Columbus did not have statistical information on the results of contract reconciliation, and (6) DOD has ongoing actions to address contractor payment problems. DFAS Columbus can identify overpayments during contract reconciliation or can be notified of an overpayment by a contractor or DCMA. When DFAS Columbus becomes aware of an overpayment, its Accounts Receivable Branch is to issue an initial demand letter to the contractor and to work with the entitlement divisions, which process and pay invoices, to collect amounts due the government by initiating an “offset” that reduces the amount paid on invoices in process. The Accounts Receivable Branch also maintains the detailed records on the accounts receivable. 
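The liquidation mechanics described above can be illustrated with a short sketch. The function name, liquidation rate, and dollar amounts below are hypothetical illustrations rather than figures from this report; the sketch assumes a simple fixed liquidation rate, with the deduction capped at the unliquidated financing balance.

```python
def liquidate(invoice_price, liquidation_rate, outstanding_financing):
    """Illustrative sketch of progress-payment liquidation (hypothetical figures).

    When a contractor bills for delivered items, financing payments made
    earlier are deducted ("liquidated") from the invoice price at a
    predetermined liquidation rate. The deduction cannot exceed the
    financing still outstanding on the contract.
    """
    deduction = min(invoice_price * liquidation_rate, outstanding_financing)
    net_payment = invoice_price - deduction
    remaining_financing = outstanding_financing - deduction
    return net_payment, remaining_financing

# Hypothetical example: a $1,000,000 invoice, an 80 percent liquidation
# rate, and $500,000 of unliquidated progress payments. The deduction is
# capped at the $500,000 outstanding, so the contractor receives $500,000
# and the financing balance falls to zero.
net, remaining = liquidate(1_000_000, 0.80, 500_000)
```

Applying the wrong rate, or failing to cap the deduction at the outstanding balance, produces exactly the kind of overpayments and underpayments the contractors surveyed here attributed to progress payment liquidation errors.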
The contract reconciliation function deals with contracts being closed out as well as active contracts needing partial audits to resolve prior and current payment problems, such as overpayments, underpayments, and invoice deficiencies. Since the mid-1990s, DFAS Columbus has contracted with a major accounting firm to help reconcile and close out thousands of contracts. When an amount exceeding $600 is due to the government and is not collected within 90 days, the debt is supposed to be transferred to DFAS’s central Debt Management Office (DMO) for further collection actions that can include referring debts to the Defense Criminal Investigative Service or the U.S. Treasury’s centralized debt collection programs. Within the last 6 months, three key actions have been taken to address governmentwide issues on improper payments. As previously stated, in December 2001, the Congress amended Title 31 of the United States Code to require an agency with contracts totaling over $500 million to have a cost-effective program for identifying payment errors and for recovering amounts erroneously paid to contractors. The resources for implementing the recovery program can include the agency, other federal departments or agencies, and the private sector. As of March 2002, the Office of Management and Budget had not issued implementing guidance on this law. Another key action was our issuance of an executive guide, in October 2001, that discusses strategies and control activities, such as recovery auditing, contract audits, and data mining, to identify and correct improper payments. The third action, in response to our recommendation, was a change in the FAR, effective February 19, 2002, which added a paragraph to the “prompt payment” clause of contracts that requires a contractor to notify the contracting officer if the contractor becomes aware of an overpayment and to request instructions for disposition of the overpayment. 
Defense contractors’ responses to our survey indicated that they have millions of dollars in overpayments and underpayments on their records, and based on DFAS Columbus records, they are continuing to refund overpayments. According to the contractors, a primary reason for these payment discrepancies was progress payment liquidation errors. Contractors usually did not include contractor debt arising from contract administration actions in the overpayments amounts they reported. However, according to DFAS Columbus records, contract administration actions were the primary reason for the $488 million that contractors refunded in fiscal year 2001. Contractor refunds are likely to continue because of DOD’s complex contract management and payment processes. Also, even when payment discrepancies are identified, they are not always promptly resolved. Based on information from our surveys, business units of 67 contractors reported that they had about $62 million of overpayments and about $176 million of underpayments in their records as of September or October 2001. We sent surveys to 497 business units of 183 contractors, and at least 249 of the business units of 120 contractors responded. The business units of the remaining 53 contractors that responded did not report any overpayments or underpayments in their records. Appendix II provides the details of our survey results and appendix III contains the survey. As shown in table 1, which summarizes the results of the survey, 34 of the largest contractors did not respond to the survey. The 10 largest contractors—according to their reported annual billings—that responded to our survey reported about 86 percent of the total overpayments and 78 percent of the total underpayments reported by all of the contractors. One Boeing contract alone accounted for $27 million of underpayments due to contract funding issues. 
Contractors cited DFAS progress payment liquidation errors as a primary reason for the overpayments and underpayments. Contractors usually did not include contractor debt arising from contract administration actions in the overpayment amounts they reported. However, as discussed later in this report, contract administration actions are the primary source of contractor refunds. Most of the contractors reported that demand letters had not been issued for the overpayments in their records but that they planned to return about 29 percent of the overpayments by check or offset. For the remaining 71 percent of the overpayments, contractors did not indicate any planned actions. According to DCAA, it plans to ensure that all of the overpayments are properly handled. These survey results cannot be used to determine the extent of overpayments and underpayments in contractor records at a point in time. Specifically, the results cannot be projected to a universe of DOD contractors even if all contractors had reported because the surveyed contractors were not randomly selected. Further, our analysis of contractors’ responses showed inaccuracies in their reporting. For example, at least 3 contractors included information on contract payments to federal agencies other than DOD and even to nonfederal entities. At least 1 contractor reported overpayments that had been refunded to the government and, therefore, should not have been reported because they were not outstanding overpayments. We also observed that contractors were not consistent in reporting actions taken on overpayments. Out of at least 588 reported overpayments, contractors indicated in their responses that they had notified DOD of 251 overpayments. At the same time, contractors reported that they had not notified DOD of 236 overpayments. For the remaining 101 overpayments, contractors did not provide any indication one way or the other. 
Based on factors such as contractor size, geographic location, and amount of reported payment discrepancies, we selected, in coordination with DCAA, 27 contractor locations involving 24 companies to test the validity of contractors’ responses. The selected business units reported $58 million, or 94 percent, of the total reported overpayments and $126 million, or 71 percent, of the total reported underpayments. We provided our survey results to DCAA for these business units, and its auditors found that 10 of the contractor locations underreported overpayments by $57.2 million and underreported underpayments by $58.0 million. DCAA stated that the remaining 17 locations had reported accurately. According to DCAA, 5 contractors are primarily responsible for these large differences and most of the differences are due to contractors’ billing weaknesses and erroneous reporting of unpaid invoices. Appendix IV summarizes the DCAA results. At these same 10 contractor locations, DCAA also found that (1) differences in application of liquidation rates and weaknesses in contractor billing systems were primary reasons for both overpayments and underpayments and (2) unpaid invoices over 30 days old were another primary reason for underpayments. Such results demonstrate the need for recovery audit programs, such as DCAA’s contractor overpayment and underpayment audits, to identify payment errors, recover erroneous payments, and identify payments due to the contractor. During its follow-up to our surveys, DCAA also found that contractors are reluctant to return overpayments on a contract when underpayments on the same contract result in a net underpaid status. For example, the records of 1 of the largest DOD contractors showed a contract with millions of dollars of overpayments and underpayments that resulted in a $1.8 million net underpaid status. According to the contractor’s records, some of these overpayments had occurred in 1997 and had not been resolved.
As of September 30, 2001, the contract was in reconciliation, and, as of February 2002, these payment discrepancies had not been resolved. According to DCAA officials, they decided, after consulting with DFAS, that settlement of any overpayments and underpayments would be deferred until contract reconciliation is complete and the final contract amount is settled. DCAA officials stated that the contractor had basic internal controls for overpayments, and DCAA is assisting the reconciliation through its project on audits of contractor-prepared reconciliations. In responses to our survey, contractors also included overpayments and underpayments associated with contracts paid through vendor pay. For example, at least 28 contractors, including 3 major shipbuilding and ship repair contractors, reported overpayments and underpayments associated with vendor pay. These 3 major shipbuilding contractors, alone, reported overpayments of almost $9.4 million and underpayments of about $0.5 million. According to a response from 1 shipbuilding contractor, overpayments and underpayments that occur on a contract are not resolved until contract closeout, which could be several years later. Another vendor pay contractor reported about $12 million of underpayments primarily due to underpaid invoices. For contractors indicating receipt of vendor pay, we could not accurately eliminate the vendor pay discrepancies from the total amounts reported by the contractors because (1) not all contractors provided sufficient and consistent detailed information and (2) we did not validate all of the responses. Nevertheless, we asked DFAS Columbus to compare its records to the contractor responses for 106 contracts, and DFAS Columbus told us it did not make the payments on 42 of the contracts because they were vendor payments. 
Although our prior reports placed more emphasis on contract pay than vendor pay, recent DOD Office of Inspector General (IG) audits also have identified weak internal controls in vendor pay systems and overpayments to contractors. For example, in October 2001 and March 2002, the DOD IG reported that the Computerized Accounts Payable System, a vendor pay system used by DFAS Kansas City and DFAS Indianapolis, did not have proper internal controls to detect and prevent improper payments, and that DFAS had made at least $13 million of duplicate payments. As a result, recovery audits and recovery activities that are now required by 31 U.S.C. 3561 could be used to improve the management of contract payments associated with both “contract pay” and “vendor pay.” From fiscal years 1994 through 2001, DOD contractors have refunded over $6.7 billion to DFAS Columbus. However, as shown in table 2, DFAS Columbus records showed that contractor refunds dropped to about $488 million in fiscal year 2001 from $901 million in fiscal year 2000. Of the $488 million in refunds, DFAS Columbus collected $449 million by contractors submitting checks and the remaining $39 million through offsets. However, the level of refunds could be greater because contracting officers often collect funds from the contractors by offsets that are agreed to during contract negotiations, and DFAS might not be notified. Since DFAS keeps track of only those offsets it processes, its records would not have the number of refunds due to these other offsets handled directly by DCMA and the contractors. DFAS Columbus records showed that about $360 million, or 80 percent of the $449 million in checks sent to the government in fiscal year 2001, was due to DCMA contract administration actions and the remaining $89 million, or 20 percent, was due to payment problems. This is consistent with what we have previously reported. 
For example, in February 2001, we reported that 77 percent of $351 million in excess payments was primarily related to contract administration actions, and, in July 1999, we reported that about 78 percent of the contractor refunds was due to contract administration and other actions outside of DFAS’s control. In tracking contractor refunds due to payment problems, DFAS Columbus distinguishes between refunds with issued demand letters and refunds without demand letters—or unsolicited refunds. According to DFAS Columbus records, refunds due to payment problems were caused primarily by DFAS processing errors, including improper progress payment liquidations. We reviewed 12 unsolicited contractor refunds valued at about $24.2 million, or 83 percent of the total $29.2 million in unsolicited contractor refunds during the months of October 2000, March 2001, and August 2001. Our review revealed that 6 of the overpayments were due to progress payment liquidation errors and 6 were duplicate payments caused by contractor billing errors along with DFAS weaknesses in detecting and avoiding overpaying in these situations. Some examples follow.

In September 2000, Textron Systems refunded $8,194,856 because of a DFAS error in liquidating progress payments. In August 2000, Textron billed a net amount of $2,739,301, based on a contract delivery price of $10,957,205, of which $8,217,904 should have been liquidated. However, DFAS paid $10,934,158 in August 2000.

In July 2001, Boeing refunded $5,527,922 due to the incorrect liquidation of progress payments on two invoices. DFAS overpaid one invoice by $2,077,922 and another invoice by $3,450,000 because it used an incorrect liquidation rate.

In March 2001, Northrop Grumman refunded $1,855,788 for a duplicate payment. DFAS first paid this amount in September 1999 for an invoice submitted in August 1999, and it paid the same amount again in January 2000. DFAS documentation did not indicate what allowed the invoice to be processed twice.
The unsolicited refund was about 15 months after the overpayment. In August 2001, the Eagle Support Services Corporation refunded $958,339 because it had received two payments for the same invoice. The contractor submitted an electronic invoice on June 28, 2001, which DFAS paid on July 10, 2001. However, the contractor had earlier submitted a hard copy invoice on June 11, 2001, which DFAS paid on July 18, 2001. DFAS processed each invoice because the invoices had different shipment numbers. DOD contract management, payment, and accounting processes are complex and remain at risk of contractor debt arising from contract administration actions and overpayments caused by payment and billing errors. We reported in January 1998 that DOD’s contract management and payment process involves numerous organizations that share data using both manual and automated means that are not integrated. Since then, DOD has implemented more electronic sharing of contract information and invoicing to reduce manual processes. However, in December 2000 and April 2001, the DOD IG reported that additional controls were needed in electronic document sharing and interchange to ensure security of the data. In addition, DFAS personnel told us that contract and invoicing information still must be manually entered into MOCAS. For example, contracts with special payment instructions need to be processed manually because they are an exception to the MOCAS automatic payment process. One effect of the overall complex process is that DFAS Columbus can sometimes issue demand letters that it later cancels due to subsequent contractor, contracting officer, or DFAS actions to correct or otherwise resolve the basis for the debt. In fiscal year 2001, DFAS Columbus canceled $37.5 million of demand letters. A few examples follow. 
Lucent Technologies almost received a duplicate payment of $1,397,200 when the contractor submitted invoices to DFAS Omaha in October 2000 and DFAS Columbus in December 2000 for the same amount. DFAS Omaha, the originally designated contract payment office, paid the first invoice. Subsequently, DFAS Columbus became the payment office, and the contractor resubmitted the invoice to DFAS Columbus, which initiated payment. However, DFAS Columbus identified the duplicate payment, issued a demand letter, and then canceled the demand letter when it stopped the electronic funds transfer payment before it was processed by the contractor’s bank.

In April 2001, DFAS issued a demand to Lockheed Martin for a $2,755,244 debt that occurred in September 1997 when a contracting officer issued a contract modification reducing the contract amount. An April 1998 audit of the contract identified that the wrong accounting line was cited in this modification, which created a debt. Even though the contracting officer was notified of the mistake at that time, no action was taken until DFAS issued the demand letter. In May 2001, a contracting officer prepared another contract modification to cite the correct accounting line, which eliminated the debt. DFAS then canceled the demand.

A demand to Litton Systems Advanced, Inc. for a $2,124,022 overpayment was canceled when the contractor sent an invoice with the amount reduced for the overpayment.

The entire contract management and payment process, which has been described in our prior reports, is further complicated by how progress payments are accounted for in contract records, a method that can ultimately create (1) differences in contractor and DFAS Columbus records and (2) future payment and contract reconciliation problems.
In April 1997, we reported that contract payment problems can occur when DFAS Columbus liquidates progress payments because of payment allocations to an accounting classification reference number on a contract that have little relationship to the work performed. If payment allocations must be made using a different process than the MOCAS automated process, especially when contracts have special payment instructions, the risk of error increases. As mentioned earlier in this report, contractors responded that progress payment liquidation errors were a major reason for overpayments. This is consistent with our October 1995 report indicating that, according to DFAS Columbus analysis, the most frequent cause of an overpayment was the incorrect recovery of progress payments. DFAS Columbus data on fiscal year 2001 overpayments showed that progress payment liquidation problems were one cause of the overpayments, but that the primary reasons were contract modification and invoice processing errors. Even though contractors or contracting officers identify many overpayments, the DFAS Columbus contract reconciliation function is also likely to identify many overpayments. Of the 2,512 demand letters issued by DFAS Columbus in fiscal year 2001, its records showed that contract reconciliation had identified overpayments associated with 1,115 demand letters, valued at about $83.4 million. Its records also showed that the contractor or a contracting officer had initiated the remaining 1,397 demand letters, valued at about $43.8 million. The contract reconciliation process is intended to identify and correct imbalances between (1) the contractor’s invoiced amounts and recorded amounts paid in MOCAS, (2) contract obligation and disbursement balances in DFAS accounting records and MOCAS, and (3) the contract obligation amount and MOCAS obligation and disbursement amounts.
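The three imbalances listed above amount to a mechanical cross-check of contractor, MOCAS, accounting, and contract records. The following is a minimal sketch under simplified, illustrative record structures; the field names are ours, not MOCAS's, and real records are far more complex:

```python
# A simplified three-way reconciliation check mirroring imbalances (1)-(3)
# described in the report. Field names and structures are illustrative only.
def reconcile(contractor_rec, mocas_rec, accounting_rec, contract_rec):
    """Return the imbalance types found for one contract."""
    problems = []
    # (1) contractor invoiced amounts vs. amounts recorded as paid in MOCAS
    if contractor_rec["invoiced"] != mocas_rec["paid"]:
        problems.append("contractor invoiced vs. MOCAS paid")
    # (2) obligation/disbursement balances: accounting records vs. MOCAS
    if (accounting_rec["obligated"] != mocas_rec["obligated"]
            or accounting_rec["disbursed"] != mocas_rec["disbursed"]):
        problems.append("accounting balances vs. MOCAS balances")
    # (3) the contract obligation amount vs. MOCAS obligation and disbursements
    if (contract_rec["obligation"] != mocas_rec["obligated"]
            or mocas_rec["disbursed"] > contract_rec["obligation"]):
        problems.append("contract obligation vs. MOCAS amounts")
    return problems

# A hypothetical contract overpaid by $50,000 relative to what the
# contractor invoiced:
problems = reconcile(
    {"invoiced": 1_000_000},
    {"paid": 1_050_000, "obligated": 1_100_000, "disbursed": 1_050_000},
    {"obligated": 1_100_000, "disbursed": 1_050_000},
    {"obligation": 1_100_000},
)
# problems -> ["contractor invoiced vs. MOCAS paid"]
```

In practice each check would run against line-item detail rather than contract totals, but the sketch shows why a contract can fail reconciliation on one imbalance while the others remain in balance.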
The contract reconciliation function at DFAS Columbus deals not only with contracts that are candidates for closure but also those contracts needing partial audits to resolve prior payment problems and correct deficiencies so that DFAS Columbus can pay current invoices. As shown in table 3, at fiscal year-end, the number of contracts waiting for reconciliation had averaged about 2,300 for the past 7 years. However, during the year, the number of contracts going through reconciliation can be much greater. For example, in fiscal year 2001, DFAS Columbus processed over 8,100 contracts through reconciliation. As of September 30, 2001, DFAS Columbus had 3,249 contracts waiting for some level of reconciliation, a higher level than in the prior 6 years. Based on prior year results, contract reconciliation will likely identify millions of dollars of overpayments needing resolution. As shown in table 4, of the 3,249 contracts awaiting reconciliation, as of September 30, 2001, 575 had been in this status for over 360 days, while 1,523 of the contracts had been awaiting reconciliation for 90 days or less. This is fewer than what we reported in July 1999 when over 900 of 2,453 contracts had been in reconciliation for over a year. Included in the contracts awaiting reconciliation are contracts for some of the largest contractors included in our survey. For example, 3 of the largest DOD contractors that responded to our survey—Lockheed Martin, Raytheon, and Boeing—had 205, 191, and 119 contracts, respectively, waiting for reconciliation as of August 31, 2001. In addition, 1 of the large contractors that did not respond to our survey had 56 contracts in reconciliation at DFAS Columbus. As noted later in this report, DCAA is auditing contractor-prepared reconciliations at these large contractors, and DFAS uses the audit results in completing the contract closeout process.
The error-prone nature of the contract management and payment process is illustrated by over 15,000 contracts in fiscal years 2000 and 2001 combined that have gone through some level of reconciliation at DFAS Columbus. Contract provisions often change, affecting deliveries, progress payment liquidation rates, and indirect cost rates, which creates excess payments after invoices have been paid. As we reported in January 1998, MOCAS records may differ from the accounting office records because contract information, such as modifications, may not have been properly and consistently processed by all locations. DFAS and contractors can process invoices in different sequences resulting in discrepancies in how progress payments are liquidated. As a result, many contracts can require some type of reconciliation more than once during their life. DFAS Columbus contract reconciliation results for fiscal year 2001 indicated that the majority of payment problems identified during research were due to payment system errors and erroneous processing of contract modifications. We also have previously reported that overpayments are not always recovered promptly. As mentioned in prior reports, such delays can cost the government in lost interest charges and use of the funds. According to DFAS Columbus records, even though it had collected almost $85.7 million in fiscal year 2001, problems remain with the timely recovery of overpayments. For example, of the 546 overpayments, including voluntary contractor refunds, for which DFAS Columbus had recorded in its records both the check receipt date and the date the overpayment was identified, only 75 were refunded within 30 days, with 294 being refunded after 90 days. Moreover, 360 additional refunds did not have the dates necessary to determine the timeliness. 
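The timeliness figures just cited imply a simple classification by days between identification of an overpayment and receipt of the refund check. A minimal sketch using the report's 30- and 90-day breakpoints follows; the function and its handling of missing dates are ours, not a DFAS Columbus procedure:

```python
from datetime import date

def refund_timeliness(identified_on, check_received_on):
    """Classify a refund by days outstanding, using the 30- and 90-day
    breakpoints cited in the report. Missing dates are reported as such,
    mirroring the 360 refunds that could not be measured."""
    if identified_on is None or check_received_on is None:
        return "dates not recorded"
    days = (check_received_on - identified_on).days
    if days <= 30:
        return "within 30 days"
    if days <= 90:
        return "31 to 90 days"
    return "after 90 days"

# e.g. an overpayment identified January 10, 2001, and refunded May 2, 2001
late = refund_timeliness(date(2001, 1, 10), date(2001, 5, 2))  # "after 90 days"
unknown = refund_timeliness(None, date(2001, 5, 2))            # "dates not recorded"
```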
Further, as of August 2001, DFAS Columbus accounts receivable records showed that about $26 million out of $54.8 million total receivables, or 47 percent, had been outstanding for over 90 days. Of those outstanding for over 90 days, about $18.8 million had not been resolved for over a year because contractors had disputed these receivables—also discussed later in this report. We reviewed 11 files for these old accounts receivable and some examples follow.

In November 1997, DFAS Columbus issued a demand letter to McDonnell Douglas for $2,294,454. The contractor disputed the demand and, in March 1998, the contract was submitted to audit. As a result of the audit, the demand was reduced to $27,904. However, DFAS did not issue a demand for the reduced amount. In May 2000, DFAS decided to update the audit on the contract. As of December 2001, over a year after the second audit, the recorded receivable was still $2,294,454 because the DFAS Columbus Accounts Receivable Branch had not been informed of the audit results.

In August 1996, DFAS Columbus issued a demand letter to Hughes Aircraft for $2,032,113. DFAS subsequently issued a second demand letter in September 1996. The contractor disputed the demand but did not provide sufficient detail to support the dispute so that an audit could resolve the dispute. In January 1997, DFAS Columbus asked the contractor for additional information, and the contractor responded that the problem might be with several different progress payment and liquidation rates on the contract. As of December 2001, the account receivable file showed that no action had been taken.

In September 1999, DFAS Columbus issued a demand letter for $3,886,567 to Raytheon because of a duplicate payment that had occurred in 1995. The contractor disputed the demand and provided evidence that it had sent a check for $3,137,200 for the debt in August 1995. However, according to DFAS Columbus, this left a balance due of $749,369.
In November 1999, DFAS issued a demand letter for this new balance due, and Raytheon disputed the remaining amount of the debt. As of December 2001, the $749,369 demand had not been resolved. After we shared our concerns about aged disputes with DCAA, it attempted to resolve these receivables as part of its recovery audit effort. As of this report’s date, DCAA is still in the process of examining contractor data to determine the validity of the debt. DFAS inattention to resolving these disputed receivables and determining their validity could result in potentially overstated accounts receivable balances. Further, contract closeout problems, which we reported on in July 2001, can be exacerbated when payment discrepancies are not promptly resolved. We reported that DFAS had made $615 million of illegal and improper adjustments to closed appropriation accounts, and these adjustments included $9.9 million of overpayments that had been improperly redistributed to open appropriation accounts after the original accounts were closed. In response to our prior recommendations, DOD has taken both short-term actions and established long-term initiatives to address long-standing contract payment problems that result in overpayments and underpayments. Some of the short-term actions appear to be having some positive results. However, the success of the longer-term initiatives, which are more critical to resolving underlying problems in the contract payment systems and processes, is uncertain. In the short term, DFAS, DCMA, and DCAA, in coordination with the DOD Comptroller, have initiatives to (1) audit major DOD contractors to identify overpayments and evaluate their internal controls for identifying and reporting overpayments and (2) reconcile and close out old contracts. Specifically, in August 2001, DCAA agreed to assist DFAS in addressing excess payments to contractors, debt collection, invoice payment instructions, contract reconciliation, and professional development. 
Subsequently, in November 2001, DCAA began audits on overpayments and related contractor internal controls for identifying and promptly reporting overpayments. Its plan is to complete audits at 190 contractors in fiscal year 2002. As of March 2002, the number of contractors to be audited had increased to 195. As part of this effort, DCAA is also examining some of the aged (over 90 days old) accounts receivable in dispute according to DFAS Columbus records. As of January 2002, DCAA had completed audits of overpayments at 46 contractors and identified over $22.7 million of overpayments. From additional special audits of progress payments, DCAA identified about another $27.3 million of overpayments, most of which were promptly resolved. These results illustrate what can be accomplished from programs to identify payment errors and recover erroneous payments. DCAA officials stated that they would continue overpayment audits until contractors’ controls ensure prompt identification and reporting of overpayments to DOD. While DCAA efforts could be viewed as a type of recovery activity, DOD does not have an overall recovery audit program that includes all types of contract payments. DCAA also has been involved with the contract reconciliation of old contracts at large contractors to facilitate contract closeout and the transition to the new systems intended to replace MOCAS. DCMA, in response to DOD overpayment issues, has issued additional guidance to its contracting officers. For example, in April 2000 and November 2001, it issued memorandums on the identification and recovery of overpayments and excess progress payments, respectively. These memorandums emphasized the need for contracting officers to be alert to situations in which contractor debt is incurred and to immediately issue demand letters. 
Further, in a response to our February 2001 recommendation, DCMA stated that it is using DFAS Columbus information on refunds to identify and evaluate possible systemic contract administration problem areas. DFAS Columbus, in response to prior reported deficiencies, has implemented practices to provide better oversight of the reasons for overpayments and the results from contract reconciliation. For example, it established a central-tracking database to categorize reasons for contractor refunds. This database identifies different types of possible DFAS Columbus errors and the types of contract administration actions creating the contractor debt. In addition, DFAS Columbus has been tracking contract reconciliation results to identify reasons for the payment problems identified during reconciliation. Although DFAS Columbus personnel easily provided us summary and detailed information from these databases, the information had not been summarized and incorporated into performance management indicators. An additional DFAS Columbus effort, initiated in May 2000, is to identify and monitor the potential for duplicate payments. Monthly reports, including one on causes of duplicate payments, are prepared for each of the three entitlement sections at DFAS Columbus and summarized for management review. According to the duplicate payment report for the first quarter of fiscal year 2002, DFAS Columbus had avoided making about $181.7 million of duplicate payments that had been identified as a result of the new procedures. DFAS Columbus has also issued several operating procedures to improve payment processing. The success of DOD’s implementation of new systems to address the root causes of contract administration and payment problems is uncertain. DOD is planning on full implementation of the Standard Procurement System (SPS) and the Defense Procurement Payment System (DPPS) to improve contract management and payment processes by replacing MOCAS, which has been used since 1968. 
SPS is intended to be DOD’s single, standard system to support contracting functions and to interface with financial management functions, such as payment processing. However, in July 2001, we reported that (1) full implementation of SPS has been delayed 3-1/2 years, (2) DOD had not justified the continued investment in SPS, (3) SPS modules to manage the large weapons systems procurements had not yet been implemented, and (4) the DOD IG had found that the system lacked critical functionality and users were generally dissatisfied with the system. DPPS is intended to be DOD’s standard contract payment system. However, DPPS implementation at DFAS Columbus has been delayed over 2 years—from August 2001 to October 2003. According to DPPS program officials, the implementation of DPPS is not dependent on the full implementation of SPS. Nevertheless, the DOD IG concluded in its September 2001 report that DPPS would not fully eliminate disbursement and contract accounting problems because DFAS, in order to comply with special payment instructions, will need to continue making manual payments for which there is a greater risk of errors being made. Despite DOD’s initiatives and accomplishments in addressing overpayment problems, DOD does not yet have fundamental control over contractor debt and underpayments because its procedures and practices do not fully meet federal accounting standards, federal financial system requirements, and its own accounting policy. Further, DOD managers do not have the performance measures needed to assess compliance with policies and procedures and the extent of overpayment and underpayment problems at a point in time. Specific examples follow. Federal accounting standards, federal financial system requirements, and DOD’s accounting policy call for the prompt and accurate recording of accounts receivable upon completion of the event that entitles collection of the amount.
These standards and requirements specify that an amount be estimated if the exact amount is unknown at the time the claim is established. However, contractor debt arising from contract administration actions usually is not recorded as an accounts receivable when the debt is established in the contract or its modification—the event that entitles DOD to collect an amount from the contractor. Instead, the DFAS Columbus Accounts Receivable Branch records an account receivable after it has been notified of the debt and it or DCMA has issued a demand letter. Contractors often refund amounts from contract administration actions, as well as from payment and billing errors, without DCMA or DFAS Columbus issuing demand letters. Together, these factors would tend to mask the extent of overpayment problems and result in overpayments not being known or promptly addressed. Federal financial system requirements and DOD accounting policy also stipulate that accounts receivable are to be aged for monitoring and controlling collection. DOD accounting policy specifies that accounts receivable are to be aged into 10 groups, ranging from 1 to 30 days delinquent to over 3 years delinquent. Even though DFAS Columbus tracks the number of accounts receivable in dispute, its Accounts Receivable Branch, which collects overpayments and maintains the detailed accounting records to support collection activities, does not routinely age accounts receivable according to DOD accounting policy to help manage and monitor the timely resolution of the accounts. DOD policies require that a receivable be recorded within the same month discovered in order to be recorded timely and that contractor debts be promptly collected. However, DOD does not have adequate performance measures on the timeliness of collection efforts for contractor debt once it has been identified. 
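The aging requirement can be sketched as a lookup over days delinquent. Only the first and last of the 10 groups ("1 to 30 days" and "over 3 years") are stated in the policy as quoted here, so the intermediate cutoffs below are assumptions chosen to produce 10 groups for the sketch:

```python
from datetime import date

# DOD accounting policy ages receivables into 10 delinquency groups, from
# "1 to 30 days" to "over 3 years." The report names only those two
# endpoints; the intermediate cutoffs below are our assumptions.
AGING_GROUPS = [
    (30, "1-30 days"), (60, "31-60 days"), (90, "61-90 days"),
    (120, "91-120 days"), (180, "121-180 days"), (270, "181-270 days"),
    (365, "271-365 days"), (730, "1-2 years"), (1095, "2-3 years"),
]

def age_group(delinquent_since, as_of):
    """Assign a delinquent receivable to one of the 10 aging groups."""
    days = (as_of - delinquent_since).days
    for limit, label in AGING_GROUPS:
        if days <= limit:
            return label
    return "over 3 years"  # the tenth group

# e.g. a receivable first demanded in August 1996 and still open in
# December 2001, like the Hughes Aircraft dispute described earlier
old = age_group(date(1996, 8, 1), date(2001, 12, 1))  # "over 3 years"
```

Running this kind of bucketing routinely over the Accounts Receivable Branch records is what the policy contemplates; without it, long-disputed amounts can sit unexamined.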
Federal accounting standards also specify that accounts payable and other liabilities will be recorded when the potential for the liability is first recognized. Such liabilities include estimates of work completed for facilities and equipment or requests for progress payments for items constructed or manufactured to contract specifications. DFAS Columbus does not maintain detailed records on underpayment liabilities or accounts payable after the contractors notify it of the underpayments, even though the underpayments could be for millions of dollars and take months to resolve. As a result, DOD managers do not have appropriate management control of accounts receivable and accounts payable and other liabilities stemming from its contract administration and payment processes. Aside from the high risk that these accounting records could be misstated at the end of an accounting period, failure to instill these disciplines magnifies the level of effort required to later identify and collect these amounts. DFAS Columbus does not usually record an accounts receivable when the debt is first created by a contract administration action. Instead, it only tracks the checks from contractors when they are received and the offsets of contractor invoices after they have been completed. DOD policies and procedures require the contracting officer to issue a demand letter to a contractor when an overpayment is identified and to send a copy of the demand letter to DFAS Columbus where it is recorded as an accounts receivable. When contract administration actions are implemented through contract modifications or other signed agreements, these documents recognize that a repayment will result from the action and the amount owed is identified or can be readily calculated. 
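The recognition point described above, the contract modification or signed agreement rather than the eventual demand letter, can be modeled directly. A toy sketch of that recording discipline follows; the contract numbers, amounts, and structures are hypothetical, not DFAS records:

```python
# A toy ledger illustrating the recording discipline described in the
# standards: a receivable is recorded when the contract action establishes
# the debt (estimated if the exact amount is unknown), and an underpayment
# liability when it is first recognized. Everything here is illustrative.
class Ledger:
    def __init__(self):
        self.receivables = []  # amounts owed to the government
        self.payables = []     # underpayment liabilities owed to contractors

    def record_contract_action(self, contract, amount, estimated=False):
        """Record the receivable at the modification or agreement date,
        not when a demand letter is eventually issued."""
        self.receivables.append(
            {"contract": contract, "amount": amount, "estimated": estimated})

    def record_underpayment(self, contract, amount):
        """Record the liability when the underpayment is first recognized."""
        self.payables.append({"contract": contract, "amount": amount})

ledger = Ledger()
ledger.record_contract_action("W-0001", 2_000_000)      # amount set in the modification
ledger.record_contract_action("W-0002", 500_000, True)  # exact amount unknown; estimated
ledger.record_underpayment("W-0003", 750_000)
total_due_government = sum(r["amount"] for r in ledger.receivables)  # 2,500,000
```

Under this discipline the receivable balance reflects debts as they arise, so a later demand letter confirms an amount already on the books instead of creating it.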
However, DCMA and DFAS do not always issue demand letters for contractor debt based on contract administration actions because the debt could be resolved promptly and issuing a demand letter would involve additional effort. While federal accounting standards require that such events be recognized as accounts receivable, DFAS cannot recognize the receivable unless it is promptly notified of contractor debt. In fiscal year 2001, DFAS Columbus records showed that contractors refunded about $360 million due to contract administration actions, but the Accounts Receivable Branch only recorded about $127 million of accounts receivable based on issued demand letters that included duplicate payments and other payment errors. This comparison shows that established procedures are not always followed. Additionally, we reported in February 2001 that contracting officers do not consistently issue demands for payment when contract changes are negotiated even though the amount usually is known or can be calculated. As a result, significant amounts of contractor debt arising from contract administration actions are not being recorded when they first occur as accounts receivable for financial management control and reporting. A further result, discussed later, is that DOD cannot fully measure the timeliness of contractor debt collection so that problems can be identified and effective solutions implemented prior to final contract reconciliation. Effective February 19, 2002, the FAR was amended to add a paragraph to the recommended “prompt payment” clauses of contracts requiring the contractor to notify the contracting officer if it becomes aware of a duplicate payment or an overpayment on an invoice. By its terms, however, this requirement does not apply to overpayments due to errors in financing payments or contract administration actions. With few exceptions, the contracting officer should include this clause in contracts.
A similar revision to the FAR contract financing clauses would ensure that all contractor debt would be promptly identified, reported, and collected. The Office of Federal Procurement Policy is currently reviewing, for approval and publication, revisions to the FAR financing clauses that would require contractors to notify the government if they become aware of overpayments arising from financing payments. Adequate implementation of these procedures along with prompt contracting officer action could facilitate the proper and complete recording of accounts receivable by DFAS. DFAS Columbus is a primary DFAS activity involved in collecting contractor debt and maintaining the detailed accounting records to support collection activities. However, its Accounts Receivable Branch did not routinely age its accounts receivable records stemming from DOD’s contract management and payment process, as required by DOD’s accounting policy. The DFAS “Concept of Operations for Recording and Reporting Receivables Due From the Public” effectively exempts DFAS Columbus from this requirement because all receivables that are not collected in 90 days are supposed to be sent to DMO. DFAS Columbus procedures also specify that any debt in excess of $600 that has not been resolved after two demand letters be transferred to DMO. However, DFAS Columbus personnel told us that when a contractor disputes the first or second demand letter, the debt would not normally be transferred to DMO because the validity of the debt has not been established. Even though DFAS Columbus maintained information on the number of accounts receivable in dispute each month, the disputed amounts were not routinely aged. As we discussed earlier, about 47 percent of the receivables were over 90 days old. As of August 2001, over $23.5 million of the $35.1 million of demand letters in dispute had been outstanding for months, even years, and, therefore, had not been transferred to DMO. 
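Aging of this kind is mechanical once a demand-letter date is kept with each receivable. A sketch of the aging report DFAS Columbus did not routinely produce, using hypothetical records and the 90-day threshold that triggers transfer to DMO under the policy described above:

```python
from datetime import date

# Hypothetical demand-letter receivables; amounts, contract numbers, and
# dates are illustrative only.
RECEIVABLES = [
    {"contract": "A-001", "amount": 250_000,   "demand_date": date(2001, 2, 10), "disputed": True},
    {"contract": "B-014", "amount": 90_000,    "demand_date": date(2001, 7, 1),  "disputed": False},
    {"contract": "C-230", "amount": 1_400_000, "demand_date": date(2000, 11, 5), "disputed": True},
]

def age_buckets(records, as_of):
    """Sum receivable amounts into standard aging buckets as of a given date."""
    totals = {"0-30": 0, "31-90": 0, "over 90": 0}
    for r in records:
        days = (as_of - r["demand_date"]).days
        if days <= 30:
            totals["0-30"] += r["amount"]
        elif days <= 90:
            totals["31-90"] += r["amount"]
        else:
            totals["over 90"] += r["amount"]
    return totals

buckets = age_buckets(RECEIVABLES, date(2001, 8, 31))
print(buckets)
```

Anything landing in the over-90 bucket would, absent an unresolved dispute, be due for transfer to DMO under the procedures described above.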
As a result of the policy exemption and DFAS Columbus not aging its accounts receivable, DFAS did not have proper control over receivables due from contractors. In fiscal year 2001, DFAS Columbus issued demand letters for about $127 million. However, as shown in table 5, the amounts in dispute averaged about $33.8 million, or almost 60 percent of the total monthly receivables, during the year. In May 2001 and June 2001, the DOD IG and we reported that even when debts were transferred to DMO, collection of the debts was not being effectively and proactively pursued. In its comments to our report, DOD indicated that it took immediate actions to monitor and improve debt collection in DMO. However, we did not assess the status and effectiveness of these actions for this report—we limited our review to those accounts receivable that had not been transferred to DMO as of August 2001. DOD also does not have adequate performance measures to monitor how timely contractor debt is collected once it is identified. For example, DCMA does not include the timeliness of contracting officers’ actions on overpayments as a key performance measure, and DCMA officials stated that statistics on demand letters issued by contracting officers are not kept. FAR and DCMA’s procedures require the contracting officer to take prompt action on contractor debt. However, the contracting officer might not always take immediate action to collect the overpayment due to a large workload or oversight. For example, during its follow-up at 1 major contractor, DCAA found that the contractor had reported to its contracting officer a $59,346 overpayment in June 1998 and a $144,504 overpayment in March 2001. The contractor assumed that these amounts would be collected by an offset, but DOD did not take actions to collect these amounts until after our survey. In January 2002, the $144,504 was collected through offsets and a refund check. 
In December 2001, the $59,346 had been only partially collected through offsets, and $21,913 was still outstanding at the time of DCAA’s review in February 2002. In addition, the DFAS Columbus refund records lack key data, such as dates of contract modifications or credit memorandums, needed to measure the timeliness of collection for most refunds of contractor debt arising from contract administration actions. Our review of the documentation for 15 contract administration refunds, valued at $62.3 million, found that (1) 5 of them did not have enough information to determine whether the funds were collected timely, (2) 2 were collected within 30 days from the time an amount due the government was identified, and (3) 8 were collected in over 30 days—1 in over 90 days and another in over a year. Even for the offsets that DFAS Columbus processes, it tracks them separately from the other refunds, does not measure the timeliness of collection, and does not always identify the reason for the overpayment. Our review of 15 offsets from January, February, and June 2001 found that 12 of them took over 30 days to fully collect the debt and reasons for the debt were not always identified. Two examples follow. On February 21, 2001, United Technologies agreed to DFAS Columbus taking offsets to collect $8,607,568 of overpayments. Three offsets were taken to collect the full amount—the initial offset occurred on February 24, 2001, and the third offset occurred on March 23, 2001. The recording of these offsets in the contract accounting records was completed on April 6, 2001, or 44 days later. DFAS Columbus records also did not indicate a reason for the overpayment. In November 1999, Viasat, Inc. first contacted DCMA about $2,605,889 it owed the government due to a change in contract terms. The contractor contacted DCMA again in October 2000 to discuss an offset. DCMA provided DFAS Columbus a credit voucher on November 13, 2000, and DFAS processed the voucher on December 18, 2000. 
However, DFAS Columbus did not complete the offset until January 29, 2001—97 days after the contractor expressed interest in an offset. DFAS Columbus records did not indicate a reason for the time involved to complete the offset. In its contractor overpayment audits, DCAA also is examining contractor records on refunds and offsets. According to DCAA, preliminary results from its contractor audits indicate that (1) not all amounts due the government are promptly returned and (2) contractor controls over offsets are weak because the contractors lack supporting documentation and DFAS is not always notified of offsets that occur. As a result, contractor and DFAS accounting records may not agree for certain contracts, and DOD lacks complete information on the amount of refunds being collected. DOD also is unable to fully measure and monitor underpayments and the timeliness of their resolution. Even though DFAS Columbus maintains data on unpaid contractor invoices, it could not provide us any other information on contractor underpayments. According to DFAS Columbus officials, they do not maintain detailed records on underpayments because contractors usually notify them promptly about underpayments, which is not always the case with overpayments. Yet, the contractors that responded to our survey indicated that they had over $176 million of underpayments in their records at the end of fiscal year 2001. According to federal accounting standards and financial system requirements, such information on underpayments should be routinely recorded and maintained in accounts payable or other liability records. In February 2001, we reported that for the 39 contractors we reviewed, all of the fiscal year 1999 unresolved underpayments had been unresolved for over 180 days. 
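The timeliness measures the report finds lacking reduce to simple date arithmetic once the key dates are recorded, whether for collecting an overpayment or for aging an unresolved underpayment. The two intervals below use dates given in the text for the United Technologies and Viasat offsets (the 97-day Viasat figure in the report runs from the contractor's October 2000 contact, whose exact date is not stated, so the voucher-to-completion span is computed instead):

```python
from datetime import date

# United Technologies: offset agreement (Feb. 21, 2001) to completed
# recording of the offsets (Apr. 6, 2001).
utc_days = (date(2001, 4, 6) - date(2001, 2, 21)).days

# Viasat: credit voucher provided to DFAS Columbus (Nov. 13, 2000) to
# completion of the offset (Jan. 29, 2001).
viasat_days = (date(2001, 1, 29) - date(2000, 11, 13)).days

print(utc_days, viasat_days)
```

The same subtraction applied to an underpayment's identification date would support the over-180-days aging comparison cited above.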
Without complete detailed accounting records on payables, DFAS Columbus is unable to adequately monitor the resolution of underpayments, and the amount of payables reported by DFAS Columbus could be understated at the end of an accounting period. Further, any problems with unresolved underpayments could hinder prompt resolution of overpayments. Survey results, DCAA audits, and DFAS Columbus data indicated that contractor debts resulting from contract administration actions as well as payment problems exist for all DOD contract payments—contract pay as well as vendor pay. Further, resolution of contractor debt has not always been timely. DOD has taken short-term actions, including increased contract audits focusing on overpayments that are achieving immediate results. However, its longer-term initiatives rely on new automated systems that have not been implemented to address many of the existing problems in DOD’s contract management and payment processes. The success of these systems is uncertain because of the problems in functionality, costs, and significant delays that the DOD IG and we have reported on. Until these contract management and payment processes are improved, DOD’s risk of contractor overpayments, regardless of cause, and underpayments will continue. Accordingly, development and implementation of an internal control to identify and recover contractor debt, which is now required by 31 U.S.C. 3561, will be important. DOD could have better management control of all types of contractor debt, regardless of cause, and underpayments by establishing procedures calling for (1) accounts receivable and liabilities to be recorded in accounting records according to federal accounting standards and federal financial system requirements and (2) mechanisms to hold staff and management accountable for doing so. 
The accounting records could then provide more comprehensive performance measures for DOD managers to monitor the timely collection of all amounts due the government and resolution of underpayments. Such measures could, in turn, result in more effective financial management and help reduce contract reconciliation and closeout problems in the future. We recommend that the Under Secretary of Defense Comptroller, in coordination with the Under Secretary of Defense for Acquisition, Technology, and Logistics, (1) implement an internal control, as now required by 31 U.S.C. 3561, to identify and collect overpayments to contractors, regardless of contract payment type; (2) reevaluate DCMA, DOD Comptroller, and DFAS established procedures and controls and revise them as needed to ensure prompt recognition and recording of receivables and potential liabilities stemming from DOD’s contract management and payment processes, according to federal accounting standards and financial management system requirements; and (3) develop and maintain more comprehensive and complete records on contractor refunds and underpayments to better measure and monitor (a) the timeliness of all collections, including checks and offsets, (b) the resolution of demand letter disputes, (c) the causes of overpayments and underpayments, (d) the resolution of overpayments and underpayments and the need for corrective actions, and (e) compliance with policies and procedures. DOD provided written comments on a draft of this report, which are reprinted in appendix V. DOD concurred with the report’s recommendations. In its comments, DOD emphasized that it supports the requirement for an audit recovery program, was taking actions consistent with such a program before the enactment of 31 U.S.C. 3561, and will comply with future Office of Management and Budget guidance. 
In addition, DOD stated that (1) DFAS has implemented several additional internal controls in its contractor entitlement and payment process to identify erroneous payments for recovery and minimize future erroneous payments, (2) new MOCAS procedures are being developed and implemented to reduce progress payment liquidation differences, and (3) its contract audit resources are being better used to detect overpayments. DOD also stated that its financial management guidance on recording accounts receivable and payable and account aging is consistent with the federal accounting standards, but to improve compliance requires consideration of the existing processes, factors establishing claims or obligations, and ways to manage efficiently and cost effectively. Further, DOD stated that to improve compliance, (1) DOD is developing new performance indicators on accounts receivable, overpayments, and aging, (2) DFAS is reviewing and revising its procedures in this area, and (3) DFAS Columbus plans to modify its accounts receivable processes to measure the timeliness of collections for amounts due and is working with contractors and DCAA to reduce the number and age of disputes. DOD also provided detailed technical comments that we incorporated into the report as appropriate. As you requested, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the issue date. At that time, we will send copies of this report to interested congressional committees. We also will send copies of this report to the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense Comptroller; the Director of the Defense Finance and Accounting Service; and the Director of the Office of Management and Budget. We will make copies available to others upon request, and the report will also be available on GAO’s home page at http://www.gao.gov. 
If you have any questions regarding this report, please contact me at (202) 512-9505 or David Childress at (202) 512-4639. Key contributors to this report are Jean Lee, Harold Reich, Alan Steiner, and Gary Wiggins. To determine the number and value of overpayments and underpayments in selected DOD contractors’ records at the end of fiscal year 2001, we surveyed 183 DOD contractors. Appendix II has a complete list of these contractors. We identified the top 100 DOD contractors and the top 100 small business DOD contractors based on total dollar value of fiscal year 2000 contract actions recorded in the Federal Procurement Data System (FPDS). After identifying the contractors, we used DOD’s Central Contract Registry (CCR) to identify contractor business units, their addresses, and points of contact. Contractors must be in CCR to receive payments, and contractors are responsible for maintaining the data in the registry. About 54 percent of the large DOD contractors had multiple business units. For example, Lockheed Martin, Raytheon, and Boeing had 114, 100, and 61 business units, respectively. Accordingly, we contacted officials at many of the contractors for help identifying the appropriate addresses to which the questionnaires should be sent so that the appropriate finance units would receive them. The appropriate finance units for our survey were those that handle the billings and maintain accounts receivable; some business units for which we had addresses do not maintain these records. We did not send surveys to all of the 200 contractors as listed in FPDS because 1 of them was classified, 7 of the small businesses were also included in the top 100 contractors, 3 contractors’ names could not be matched to those in CCR, and 5 contractors listed in FPDS had merged with other contractors—for example, Boeing had merged with McDonnell Douglas—or were business units of major contractors already listed. 
We subsequently mailed 497 surveys to business units of 183 DOD contractors requesting the total overpayments and underpayments in their records as of September 30, 2001, and specific detail on individual overpayments and underpayments equal to or over $1,000. The survey is similar to the one we used in our 1995 report. A copy of our final survey is in appendix III. We either telephoned or sent follow-up reminders to business units. In response to our mailings and follow-up, we received 249 survey responses from business units of 58 large contractors and 62 small business contractors. However, some of the responses we received did not include all of the business units from the contractor, and others indicated that another business unit would have the information. Twenty-two mailings were returned as undeliverable. The 249 business units that responded to our survey reported on overpayments and underpayments associated with over 600 contracts. The findings from our survey apply only to the contractor business units that responded and cannot be projected to any others. We coordinated with DCAA, as part of its ongoing recovery audits, for its auditors to test the validity of the contractors’ responses on 27 business units in our survey. These units reported about $58 million, or 94 percent, of the total reported overpayments, and about $126 million, or 71 percent, of the total reported underpayments. We and DCAA selected the business units to visit based on a combination of factors, including geographic location, size of the contractor, the amount of reported payment discrepancies, and the nonreporting of payment discrepancies with significant annual billings. We also visited DCAA offices to discuss the agency’s audit results in verifying overpayments and underpayments reported for 9 of the 27 contractor locations. 
To determine the status of DOD corrective actions, we obtained and reviewed fiscal year 2000 and 2001 data collected by DFAS Columbus on contractor refunds, offsets, and reasons for the refunds. In addition, we obtained data on the number of contracts in reconciliation at the end of fiscal year 2001, the number of contracts that had been partially or completely reconciled in fiscal years 2000 and 2001, and the reasons for payment problems that DFAS Columbus had identified during reconciliation. We did not verify the data provided. We also obtained and evaluated fiscal year 2001 accounts receivable balances, which represented demand letters issued by DFAS Columbus. To identify some of the conditions and reasons for overpayments, we selected 58 of the highest dollar transactions from the 3 months in fiscal year 2001 with the highest balances for accounts receivable cancellations, transfers, collections made through offsets, and unsolicited refunds. The transactions selected represented about 80 percent of the total dollar value of the group for that month. In addition, we obtained a detailed list of all accounts receivable over 90 days old as of August 2001 and reviewed the highest dollar value transaction in each year from 1996 through 2001. To further assess the timeliness of collections, we analyzed data in DFAS Columbus records on refunds due to payment errors and attempted to determine the collection time for 15 contract administration action refunds in fiscal year 2001. Further, we obtained information on performance measures, certain DFAS Columbus procedural changes, and the planned implementation of DPPS. We also discussed payment problems and corrective actions with DFAS Columbus and DCMA officials. 
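The high-dollar transaction selection described above can be sketched as a cumulative-coverage cutoff: sort by amount and keep taking the largest transactions until a target share of total dollar value (about 80 percent in our selection) is reached. The amounts below are hypothetical:

```python
# Select the largest transactions until they cover a target share of
# total dollar value; an illustrative sketch, not GAO's actual tooling.

def select_top_coverage(amounts, target_share=0.80):
    total = sum(amounts)
    selected, covered = [], 0
    for amount in sorted(amounts, reverse=True):
        if covered / total >= target_share:
            break
        selected.append(amount)
        covered += amount
    return selected

sample = [5_000_000, 2_500_000, 1_200_000, 600_000, 400_000, 200_000, 100_000]
picked = select_top_coverage(sample)
print(picked)
```

Here 3 of the 7 hypothetical transactions cover the 80 percent target, mirroring how 58 transactions could represent about 80 percent of the dollar value of a much larger group.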
We reviewed prior GAO and DOD IG reports on contract payment problems, applicable laws and FAR sections, federal accounting standards and financial system requirements, the DOD Financial Management Regulation, DCMA policies and procedures contained in its “One Book,” and DFAS Columbus procedures regarding accounts receivable, contractor refunds, and contract reconciliation. We requested comments on a draft of this report from the Secretary of Defense or his designee. We received comments from the Under Secretary of Defense Comptroller and have reprinted these comments in appendix V. We conducted our review from August 2001 through April 2002 in accordance with U.S. generally accepted government auditing standards. Contractor names and fiscal year 2000 contract actions are from the Federal Procurement Data System. Dollars of overpayments and underpayments are from survey responses received before February 2002. Fiscal year 2000 contract actions (dollars in thousands): 452,513; 214,447; 77,744; 41,002; 27,913. Units that reported these amounts are now General Dynamics Decision Systems, Inc. Responses indicated that these companies no longer have DOD contracts. Surveys were returned due to invalid addresses. Fax to: Harold D. Reich at (213) 830-1180, or get an e-mail format from reichh@gao.gov. Any questions should be directed to Harold Reich at (213) 830-1078 (reichh@gao.gov) or Eric Johns at (213) 830-1154 (johnse@gao.gov) in our Los Angeles Field Office or Al Steiner in our Washington, D.C. office at (202) 512-9332 (steinera@gao.gov). 1. We are requesting information on the single business unit at the address listed below, unless you find an Attachment 3 in your survey. 
If you have an Attachment 3, your response should be an aggregate response for all listed and confirmed business units, or any units you might add, for which you handle accounts receivable and payable. (Please make any corrections to address in this space) 2. Please provide the name and telephone number of a contact in the accounting and financial unit for us to contact if additional information is needed: NAME OF CONTACT: EMAIL ADDRESS: PHONE NUMBER: (______) ______ - ____________ 3. If your business unit is a subsidiary of a parent company, please list the name, DUNS number, and address of the parent business. PARENT COMPANY:__________________________________________ DUNS: ______________________________ 4. Does your business unit prepare contract billings and maintain accounts receivable for contracts with the DOD? [NOTE: If we have called you to make other arrangements, please proceed directly to question 5 as we already have the information for this question.] Yes (GO TO QUESTION 5) No (READ INSTRUCTION IN BOX BELOW.) STOP – If you answered “No” to question 4 above, please do not continue but return this request in the enclosed envelope after supplying the information requested in this box. Please do not forward to the business unit that does your billings and maintains your accounts receivable. Instead, please provide the name and address for the business unit that maintains your accounts receivable so we can verify that the unit has been included in the initial mailing of this data request. List below the name and address of the business unit that maintains your accounts receivable: CONTACT PERSON: PHONE: 5. From your accounts receivable and payable ledgers, or other appropriate accounting record, provide total gross dollar amounts of existing overpayments and underpayments (billed amounts versus payments received) for all DOD contracts as of September 30, 2001 (or if September 30 is impracticable, please specify the “as of” date you have used). 
Overpayments should exclude advance payments. The total underpayments extracted should exclude 1) billings less than 30 days old on the "as of" date for the information provided (this excludes billings for which payments are not delinquent), and 2) claims, accounts under dispute, and unbilled receivables on long-term contracts. However, we expect overpayments or underpayments caused by incorrect liquidation of progress payments to be included in the summary totals provided in response to this question. Gross Overpayment Dollar Amount: _______________________________ Gross Underpayment Dollar Amount: _____________________________ As of date used: _____________________ MM / DD / YYYY 6. For each overpayment exceeding $1,000, please provide the following information on the Schedule for Contract Overpayments (Attachment I): contract number, the amount of the overpayment, the date the overpayment was identified; the action you have taken (i.e., notified DCMA and/or DFAS, adjustments by check, offset, or no action); indicate whether you received a demand letter for the overpayment; and the potential cause of each overpayment. If you are unable to provide this detailed contract information, please briefly explain why it is not available: 7. For each underpayment exceeding $1,000, please provide the following information on the Schedule for Contract Underpayments (Attachment II): contract number, the amount of underpayment, the date the underpayment was identified; notification and the action you have taken; and the potential cause for the underpayment. If you are unable to provide this detailed contract information, please briefly explain why it is not available: 8. List the most recent annual dollar amount of gross contract billings to DOD by this business unit. 
GROSS DOLLAR AMOUNT OF BILLINGS TO DOD: $____________________________ 9. Please include any additional information or specific comments and issues concerning your DOD contract payment experiences that you believe we should consider. Please retain any work sheets or records used to prepare this response. Thank you for your prompt response. The information requested regarding the business unit reporting is intended to identify the unit providing the information. Please make any appropriate changes to the name and address of your business unit (Question 1). Please provide the name, telephone number, and email address of a contact person (Q2) who could best respond to our technical questions about the information you provide. Also identify the parent company of this business unit (Q3) providing the name, DUNS number, and mailing address. Q4. The accounting and finance unit that maintains the accounts receivable and related billing accounts for contracts with DOD should respond to this data request. We intended to send this data request to the business units that are listed in the CCR and could maintain accounts receivable and billing records. If your unit does not maintain these records, please provide the name and address of the business unit that maintains your accounts receivable for DOD contracts, along with a contact name and phone number, but do not forward this request to that unit. Q5. From your accounts payable and receivable ledgers or other appropriate accounts that include information by individual billing numbers (invoices and vouchers), please provide the total gross dollar amounts of overpayments and underpayments (including unpaid bills) for all DOD contracts billed directly to the government as of September 30, 2001. If the information is not available as of this date, provide the information for a date as close as practicable and specify the date used. 
The total underpayments extracted should exclude billings less than 30 days old on the "as of" date for the information provided (this excludes billings for which payments are not delinquent). Please ensure that you do not consider (1) advance payments as overpayments and (2) claims, amounts under dispute, and unbilled receivables on long-term contracts as underpayments. The amounts listed in response to question 5 should provide a point-in-time measurement of total underpayments and overpayments as shown by contractor records and include the effects of incorrectly liquidated progress payments. We recognize that the payment status for DOD contracts will change after the "as of" date. Q6. Following the instructions below, please complete the Schedule for Contract Overpayments (Attachment I). Enter the contract number, amount of the overpayment, and date on which the overpayment was identified. Provide this information for individual contract overpayments exceeding $1,000. Do not include advance payments. Round dollar amounts to the nearest dollar. If you have notified DOD of the overpayment, specify which DCMA and/or DFAS office you contacted and the date you notified them. Check if you have made payment by check, made plans to offset against another payment, or if no action has been taken. If a demand letter was received from DOD for the overpayment, please indicate the date of the letter and which DCMA or DFAS office sent it. For the potential cause, please provide a brief description—i.e., contract modification-scope change, contract modification-price adjustment, contract modification-other, duplicate payment, recording error, etc. Q7. Following the instructions below, please complete the Schedule for Contract Underpayments (Attachment II). Enter the information indicated for individual contract underpayments exceeding $1,000. Round dollar amounts to the nearest dollar. 
If you have notified DOD of the underpayments, specify which DCMA and/or DFAS office you contacted and the date you notified them. If no action has been taken, check column f. For the potential cause, please provide a brief description—i.e., progress payment calculation error, prices not updated, recording error, lost inventories, contract modification not processed, etc. Q8. State the volume of business your business unit has with DOD as measured by total gross billings to DOD for your most recent year ending date. Please include the ending date (month and year) for the total billings listed. Business Units Handled by Your Accounts Receivable and Payable Office INSTRUCTIONS: Through review of the Central Contract Registry (CCR), or by prior contact with your company, we have indications that your business unit maintains the accounts receivable and payable for each of the units listed below. Please confirm that you do maintain accounts for each of these business units by 1) crossing out any units not maintained, and 2) adding the name and address of any additional units for which you maintain these accounts. Your survey response should be an aggregate response for all the confirmed business units. DCAA auditors found that 17 contractor locations did not have any differences or the differences were immaterial based on their review of contractors’ records and comparison to their survey responses. Shown below are 10 contractor locations with material differences identified by DCAA and the primary reasons for the differences. In addition, DCAA auditors visited 1 contractor—SAIC—that did not respond to the survey, and they found $259,194 of overpayments and $2,002,243 of underpayments in the contractor’s records. 
Overpayment difference | Primary reasons for difference | Underpayment difference | Primary reasons for difference
($319,025) | Overbilled costs were not reported | ($997,141) | Unpaid invoices not reported
(6,946,813) | Unanalyzed overpayments were not reported | (7,672,885) | Unanalyzed underpayments were not reported; unpaid invoices not reported
(30,432,643) | Unanalyzed overpayments were not reported | (16,554,214) | Unanalyzed underpayments not reported; outstanding payment variances and contract administration adjustments not reported
(37,833) | Not identified | (876,695) | Unpaid invoices not reported—only reported invoices with partial payments
(12,688,525) | Contractor failed to report overpayments; inappropriate accounting for labor and material transfers | (83,694) | Not identified
(4,028,327) | Overpayments over 1 year old were not reported | (22,612,605) | Underpayments over 1 year old were not reported; unpaid invoices not reported—only reported invoices with partial payments
(75,103) | Not identified | (631,577) | Underpayments on certain contracts were not reported
(2,666,281) | Overpayments on certain contracts were not reported; overpayments and underpayments netted for reporting | (10,386,620) | Underpayments on certain contracts were not reported
Total ($57,185,791) | | Total ($58,040,558) |
The contractor is now Volvo Aero Services and no longer has DOD contracts. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. 
GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.
Since GAO reported on Department of Defense (DOD) contractor overpayments in 1994, additional reports have been issued highlighting billions of dollars of overpayments to Defense contractors. In December 2001, Congress amended Title 31 of the United States Code to require a federal agency with contracts totaling over $500 million in a fiscal year to have a cost-effective program for identifying payment errors and for recovering amounts erroneously paid to contractors. DOD contractors' responses to GAO's survey indicate that they have millions of dollars of overpayments on their records and that they are continuing to refund overpayments (about $488 million in fiscal year 2001). DOD has taken actions to address problems with contractor overpayments. In addition to its contract audit functions and as part of a broad-based program to assist the Defense Contract Management Agency (DCMA) and the Defense Finance and Accounting Service (DFAS), the Defense Contract Audit Agency (DCAA) is auditing at least 190 large DOD contractors to identify overpayments and ensure that contractors have adequate internal controls for prompt identification and reporting of overpayments. Although DOD has several initiatives to reduce overpayments, it still does not have basic administrative control over contractor debt and underpayments because its procedures and practices do not fully meet federal accounting standards and federal financial system requirements for the recording of accounts receivable and liabilities. As a result, DOD managers lack information important for effective financial management, such as the information needed to ensure that contractor debt is promptly collected.
FMCSA was established within the Department of Transportation (DOT) in January 2000, and is tasked with promoting safe commercial motor vehicle operations and reducing large truck and bus crashes, injuries, and fatalities. It seeks to achieve this reduction through regulation, enforcement, and partnerships with stakeholders, among other activities, and with full accountability to the public through transparency, results-oriented performance measurement, and managing for results. Since fiscal year 2010, FMCSA’s total budget authority to conduct these activities has remained relatively stable, increasing about 6.5 percent from fiscal year 2010 through fiscal year 2016 (see table 1). Funding for implementing and applying interventions is included in both the Motor Carrier Safety Operations and Programs and Safety Grants budget authorities. The vast majority of FMCSA’s staff are located in field offices, including divisions and service centers. Field staff are primarily responsible for implementing FMCSA’s compliance and enforcement activities, including investigations. Federal Safety Investigators represent the majority of FMCSA’s compliance and enforcement program staff (see table 2). In addition, FMCSA partners with state agencies to perform some intervention activities, such as conducting carrier investigations; however, FMCSA is responsible for ensuring that commercial motor carriers under its authority comply with federal safety regulations. FMCSA’s Office of Enforcement is the primary office responsible for developing policy for FMCSA’s compliance and enforcement program, and overseeing the implementation of intervention activities. The CSA program is intended to improve the effectiveness of FMCSA’s compliance and enforcement programs, while more efficiently using its resources to reach carriers that pose the highest safety risk.
The CSA program has three key components: (1) the Safety Measurement System (SMS), meant to identify high-risk carriers by using data from roadside inspections and crashes; (2) interventions, which are intended to help carriers address safety problems; and (3) the Safety Fitness Determination rule. In contrast to the single investigation intervention available under FMCSA’s previous approach, the compliance review, FMCSA expects the CSA program to change unsafe behaviors more effectively by reaching and intervening with more potentially unsafe carriers earlier, using a range of intervention types to enforce compliance with safety regulations. In 2014, we reported on the effectiveness of the SMS component of the CSA program. FMCSA conducted an Operational Model Test of the program from February 2008 through June 2010 in nine pilot states. In December 2010, FMCSA began implementing the CSA program in three phases: (1) implementation of SMS and some intervention types nationwide, (2) introduction of new investigative techniques and the Safety Management Cycle, and (3) nationwide rollout of all intervention types and FMCSA’s new investigative software, the Safety Enforcement Tracking and Investigation System (SENTRI). After a series of four commercial motor vehicle crashes—two involving buses and two involving trucks—that together resulted in 25 deaths and injuries to 83 people, the National Transportation Safety Board investigated and, in November 2013, made recommendations to improve the quality of FMCSA’s compliance review processes. The Secretary of Transportation tasked the Federal Aviation Administration, as a peer of FMCSA within DOT, to conduct a review and develop appropriate recommendations for DOT’s response to the National Transportation Safety Board.
The Federal Aviation Administration formed an Independent Review Team, which, in July 2014, issued a report that included a range of recommendations intended to support both incremental and transformative improvements to FMCSA’s compliance and enforcement programs. We discuss some steps FMCSA is taking to address these recommendations later in this report. Under the CSA program, FMCSA can select from a range of eight intervention types, intended to give FMCSA the flexibility to address motor carriers’ specific safety problems. Four of the intervention types were newly introduced under the CSA program; FMCSA had been applying the other four intervention types prior to the program (see table 3). Each type falls into one of the following intervention categories: early contact, investigation, and follow-on interventions. Before the CSA program, FMCSA used one investigation intervention type—the onsite compliance review—and three follow-on intervention types. Compliance reviews required investigators to examine every part of a carrier’s operations and were thus extremely resource-intensive to conduct. As a result, FMCSA and its state partners investigated only about 3 percent of active carriers. According to FMCSA Motor Carrier Safety Progress Reports, federal or state investigators conducted between approximately 15,000 and 17,000 compliance reviews each year from fiscal year 2006 through fiscal year 2009. Under the CSA program, FMCSA has established a process for measuring the relative safety risk of carriers in seven Behavioral Analysis and Safety Improvement Categories (BASICs), prioritizing carriers based on risk, assigning an appropriate intervention type, and investigating or enforcing compliance with regulations (see fig. 1). The CSA intervention process has no set progression, and based on existing guidance, FMCSA applies one or more interventions depending on the circumstances of each case.
For example, FMCSA may directly assign an onsite comprehensive investigation to a carrier without first assigning another type of intervention. FMCSA may also apply multiple interventions to a carrier over time, and as a result, common patterns in FMCSA’s application of CSA interventions can be identified. For example, if a carrier receives a warning letter as a first intervention and its safety performance does not improve, FMCSA may assign the carrier a second intervention, such as an onsite focused investigation. If FMCSA identifies violations during an investigation, such as the onsite focused investigation, that warrant enforcement, FMCSA may assign a third intervention, such as a notice of claim. In such a case, the resulting intervention pattern would be: (warning letter) → (onsite focused investigation) → (notice of claim). Every month, FMCSA uses SMS to generate percentiles in any of seven BASICs for carriers with sufficient data. However, as we reported in 2014, the SMS methodology contains limitations that reduce FMCSA’s ability to reliably assess carriers’ safety risks because FMCSA lacks sufficient safety performance information on the majority of carriers. We identified the lack of sufficient information as a particular limitation for carriers with few inspections and vehicles because their underlying violation rates can have artificially low or high values, or greater variability, which affects the precision of the BASIC percentiles used by FMCSA in comparing carriers to one another. Nonetheless, FMCSA uses these BASIC percentiles to identify potentially risky carriers, to prioritize them for intervention, and to automatically generate warning letters. FMCSA automatically prioritizes carriers into risk-based categories of escalating urgency based on the extent to which a carrier’s BASIC percentiles exceed certain combinations of designated thresholds, in addition to the carrier’s intervention history and unresolved violations. 
For example, under the new high-risk definition FMCSA adopted in March 2016, for a carrier to be considered high-risk, its BASIC percentile has to be above 90 in at least two of four BASICs—unsafe driving, crash indicator, hours-of-service compliance, or vehicle maintenance—and the carrier cannot have had an onsite comprehensive investigation in the last 18 months. In addition to changing the high-risk definition, FMCSA implemented a new prioritization approach with five risk-based categories, from most to least urgent: high-risk, moderate-risk, risk, warning letter, and monitor. According to FMCSA, high-risk carriers are the agency’s highest investigative priority. After FMCSA prioritizes carriers for intervention, FMCSA division managers decide which intervention type carriers should receive based on priority level, guidance, and carrier history, among other things. The principal guidance document for assigning intervention types is the electronic Field Operations Training Manual (eFOTM). Although eFOTM includes some requirements, division managers have some discretion to select investigation and follow-on intervention types based on additional information, such as complaints received, significant crashes, and professional judgment. For example, FMCSA must investigate non-frivolous written complaints that allege substantial violations regardless of whether the carrier is prioritized for intervention, but has discretion to determine the most appropriate investigation type based on the safety problems associated with the complaint, the carrier’s BASIC percentiles, and the carrier’s history. However, for carriers identified as high-risk, FMCSA must conduct an onsite focused or onsite comprehensive investigation. Federal Safety Investigators and their state partners follow eFOTM guidance when conducting investigation and follow-on interventions.
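The March 2016 high-risk definition described above amounts to a simple classification rule. The following is a minimal sketch of that rule; the dictionary keys and function signature are illustrative assumptions, not FMCSA data structures:

```python
# Illustrative sketch of the March 2016 high-risk definition (not FMCSA code).
# BASIC names and record layout are assumptions for this example.
HIGH_RISK_BASICS = ("unsafe_driving", "crash_indicator",
                    "hours_of_service", "vehicle_maintenance")

def is_high_risk(percentiles, months_since_onsite_comprehensive):
    """High-risk: percentile above 90 in at least two of the four listed
    BASICs, and no onsite comprehensive investigation in the last 18 months.
    Pass None if the carrier has never had such an investigation."""
    above_90 = sum(1 for b in HIGH_RISK_BASICS if percentiles.get(b, 0) > 90)
    no_recent_investigation = (months_since_onsite_comprehensive is None
                               or months_since_onsite_comprehensive > 18)
    return above_90 >= 2 and no_recent_investigation

# Two BASICs above 90, last comprehensive investigation 24 months ago -> high-risk
print(is_high_risk({"unsafe_driving": 95, "vehicle_maintenance": 92}, 24))  # True
# Same percentiles, but investigated 12 months ago -> not high-risk
print(is_high_risk({"unsafe_driving": 95, "vehicle_maintenance": 92}, 12))  # False
```

Note that both conditions must hold: even a carrier with very poor percentiles drops out of the high-risk category if it received an onsite comprehensive investigation within the preceding 18 months.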
In April 2013, FMCSA introduced enhanced investigative techniques (EIT) that are intended to help investigators identify the root cause of a motor carrier’s safety problems. While financial and legal penalties are typically applied following an investigation, FMCSA may levy financial penalties against a carrier without an investigation if it believes there is sufficient evidence, such as evidence that the carrier operated after being placed out of service. Investigators may also request a change to the intervention type for some carriers when they find new and pertinent information that was not available at the time of the assignment. FMCSA’s information systems are critical to its data-driven enforcement and compliance programs and are intended to provide real-time access to data for the enforcement community, the transportation industry, stakeholders, and the general public. Field staff input intervention data through a variety of field information systems. These systems are operated on laptop computers in the field. For example, field staff use the Compliance Analysis and Performance Review Information (CAPRI) system to enter investigation intervention data, such as investigatory files and safety violations identified. Similarly, they use CaseRite to enter legal enforcement information. As previously discussed, FMCSA plans to introduce a new field information system called SENTRI, which will consolidate the legacy information systems that field staff use to upload information related to interventions. Once uploaded by field staff, intervention data are stored and analyzed on multiple central information systems. FMCSA staff may access CSA intervention data on these systems through a centralized portal and use the data to monitor carriers’ safety performance, among other things. 
For example, the Motor Carrier Management Information System (MCMIS) includes motor carrier performance data including inspection and investigation results, enforcement data, and state-reported crashes. FMCSA also uses the Enforcement Management Information System (EMIS) to monitor, track, and store data related to FMCSA enforcement actions, including follow-on interventions. FMCSA’s Analysis and Information Online system provides public access to descriptive statistics and analyses regarding commercial vehicle, driver, and carrier safety information. Although FMCSA implemented all four new CSA intervention types in pilot test states, the agency chose to delay implementing two of the new intervention types in the remaining states until it develops information technology (IT) software. Implementation in Pilot Test States: FMCSA implemented the entire range of CSA interventions—including all four new CSA intervention types—in nine pilot test states as part of the Operational Model Test that FMCSA conducted from February 2008 through June 2010. The test was intended to help the agency assess the four new intervention types and identify any features that needed to be adjusted prior to implementing them nationwide, among other things. According to FMCSA headquarters officials, personnel experienced challenges using multiple legacy information systems that were not designed to support FMCSA’s application of the expanded range of interventions under the CSA program. For example, FMCSA’s data analysts found it difficult to extract data from information systems needed to monitor and oversee the agency’s application of interventions. Implementation in Non-Pilot Test States: According to headquarters officials, in July 2010 FMCSA chose to delay implementing two of the four new CSA intervention types—offsite investigations and cooperative safety plans—in the remaining non-pilot test states until it completes its development of SENTRI software. 
However, FMCSA decided to implement the remaining two new intervention types—warning letters and onsite focused investigations—nationwide because it believed that those two interventions had been demonstrated to be effective during the Operational Model Test and that delays would hinder safety benefits for the public (see fig. 2). The Operational Model Test evaluation found that offsite investigations demonstrated a similar pattern of effectiveness as onsite focused and onsite comprehensive investigations. FMCSA headquarters officials told us that developing and implementing SENTRI is important to help field staff and their state partners conduct their work. Field staff currently may use a variety of legacy information systems to apply and manage interventions. Principally, field staff use the CAPRI system to prepare for investigation interventions and to report their results. However, CAPRI was designed to support traditional compliance review investigations, not the expanded range of investigation types under the CSA program. According to FMCSA officials, this has resulted in field staff taking time-consuming additional steps to report their application of interventions. For example, some division administrators spent additional time reviewing how investigators entered information into CAPRI to determine the correct investigation type performed. According to officials, SENTRI is expected to help address these inefficiencies by consolidating investigative, follow-on, reporting, and other functions into a single interface. FMCSA officials also expect SENTRI to improve data consistency and enable better policy and program decisions through improved data quality. However, FMCSA has faced longstanding delays in developing SENTRI software as part of its broader IT modernization effort. In September 2005, FMCSA initiated a comprehensive overhaul of the way it collects, manages, and conveys safety information.
The agency-wide modernization effort was intended to help FMCSA achieve its effectiveness and efficiency outcomes for the CSA program by centralizing FMCSA data and simplifying information access, among other things. According to FMCSA headquarters officials, FMCSA began obligating funds to develop SENTRI software in fiscal year 2009, when it established the business case for the system (see app. II). Since that time, we and DOT’s Office of Inspector General have reported continuing project delays. FMCSA hired consultants to identify the causes of, among other things, its IT project delays and actions to remediate them. The resulting March 2013 report found a variety of underlying program challenges. For example, it found that ineffective IT governance practices provided limited visibility into the health of individual projects, contributing to project delays. It also found that the lack of an appropriately scoped and measurable strategy made it unclear whether current resources were effectively prioritized—a challenge that was compounded when priorities shifted on multiple occasions over time. Officials stated that FMCSA executed a contract in January 2016 to complete the agency’s IT modernization effort. As part of this effort, FMCSA plans to complete its development of SENTRI by April 2017. FMCSA’s application of interventions declined from fiscal year 2012 through fiscal year 2015, according to estimates provided by the agency. FMCSA implemented warning letters nationwide in fiscal year 2011, which resulted in a temporary spike in interventions. However, after this temporary increase, the number of interventions FMCSA applied was lower in fiscal year 2015 than in fiscal year 2012 for each intervention type, with notable decreases in offsite investigations (73 percent) and notices of violation (71 percent).
In addition, according to FMCSA’s estimates, about 26 percent fewer total investigation interventions were conducted in fiscal year 2015 compared to fiscal year 2012. See table 4 for detailed estimates of FMCSA’s application of interventions. Reasons for these notable decreases are discussed below. Offsite investigations: Division officials from each of the four pilot test states we interviewed told us they selected offsite investigations less frequently in more recent years, because the number of carriers that met eFOTM criteria for receiving them decreased over time. For example, officials from one division told us their use of offsite investigations decreased because motor carriers’ BASIC percentiles were typically too high or involved too many BASICs to qualify to receive an offsite investigation according to current eFOTM criteria. FMCSA headquarters officials told us they are focused on increasing the use of offsite investigations, because they believe offsite investigations have been demonstrated to be both efficient and effective. For example, in March 2016 FMCSA established a working group to explore modifying policy to give division managers more discretion in assigning offsite investigations. Notices of violation: Division officials from four of the eight divisions we interviewed told us they selected notices of violation infrequently because the intervention type was time-intensive to process compared to other intervention types or was not appropriate to address severe safety violations. FMCSA headquarters officials told us that investigators prefer to issue notices of claim instead, because they result in penalties to motor carriers. However, officials stated that investigators may not be aware of other associated FMCSA activities that increase the overall resources used to issue notices of claim. For example, notices of claim require additional legal oversight, which generally requires more resources.
Headquarters officials said investigators may choose to issue notices of violation rather than notices of claim, when appropriate, if they better understood this context. Total investigation interventions: FMCSA headquarters officials told us that total investigation interventions declined because investigators spent more time conducting in-depth reviews of motor carriers’ safety management practices to identify the root causes of underlying safety problems as part of FMCSA’s EIT initiative. FMCSA implemented EIT in fiscal year 2013 as part of continuous improvement efforts and in response to Independent Review Team recommendations. According to officials, using the more time-intensive EIT approach decreased the number of investigations that FMCSA could conduct, particularly since 2012. FMCSA officials stated investigation counts have increased somewhat as investigators adjusted to the new EIT procedures. However, the officials did not expect investigation counts to return to previous levels without additional personnel. As FMCSA introduced an expanded range of intervention types under the CSA program, FMCSA did not redesign CAPRI or other legacy information systems to reflect these changes. For example, FMCSA did not redesign CAPRI to include a dedicated data element that would uniquely record the occurrence of each intervention type. Instead FMCSA developed algorithms—rules that can be applied by computer programs—that attempted to reconstruct the occurrence of each intervention type by identifying specific combinations of multiple data elements. Using legacy information systems for purposes for which they were not designed produced two main limitations that affected the accuracy of FMCSA intervention counts: Data recording limitations: FMCSA headquarters officials stated that the accuracy of CSA intervention counts depended in part upon how users recorded intervention data into FMCSA’s IT systems. 
For example, although offsite investigations were implemented in only 10 states, the CAPRI system nonetheless allowed investigators in the remaining states to select “offsite” for the “review location” data element. According to FMCSA officials, this inflated counts when FMCSA used “review location” as one of multiple data elements to identify offsite investigations. Similarly, investigators may conduct onsite focused investigations on carriers that receive complaints, but CAPRI requires investigators to select either “complaint” or “focused CR” for the “review reason” data element. Because FMCSA’s algorithm used “review reason” as one of multiple data elements to count onsite focused investigations, this deflated onsite focused investigation counts when investigators selected “complaint” for those investigations. Evaluation limitations: FMCSA officials told us they occasionally modified algorithms used to identify the occurrence of intervention types, but generally did not evaluate how the modifications affected the accuracy of intervention counts. According to officials, they modified algorithms for a variety of reasons, such as when the agency changed how it recorded intervention data. Once modified, FMCSA applied the most recent algorithm to all previous data—including historical data. For example, in January 2016, FMCSA removed a redundant data element from the algorithm used to count investigation interventions, a step that changed historical intervention counts. FMCSA officials told us they did not know the extent to which applying the modified algorithm to previous data affected the accuracy of historical counts, because they generally did not evaluate the modification’s effect on count accuracy before applying it. Although the extent of the effect is unknown, even small differences could limit the ability of FMCSA managers to accurately and effectively monitor trends in FMCSA’s application of CSA interventions over time.
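The kind of counting algorithm described above can be sketched as follows. This is a hypothetical reconstruction: the field values follow the CAPRI data elements mentioned in the text (“review location,” “review reason”), but the record layout and logic are assumptions, shown only to illustrate how inferring intervention types from combinations of data elements can inflate or deflate counts:

```python
# Hypothetical sketch of an intervention-counting rule that infers the
# intervention type from data-element values rather than reading a dedicated
# field (the record layout here is an assumption, not CAPRI's actual schema).
def infer_intervention_type(record):
    if record.get("review_location") == "offsite":
        # Inflates counts: investigators in states without offsite
        # investigations could still select "offsite".
        return "offsite investigation"
    if record.get("review_reason") == "focused CR":
        # Deflates counts: a focused investigation opened from a complaint
        # records "complaint" instead and is missed here.
        return "onsite focused investigation"
    return "unclassified"

# An investigator in a state without offsite investigations selects "offsite":
print(infer_intervention_type({"review_location": "offsite"}))
# A genuine onsite focused investigation driven by a complaint goes uncounted:
print(infer_intervention_type({"review_location": "onsite",
                               "review_reason": "complaint"}))
```

A dedicated data element recording the intervention type directly, as SENTRI is intended to provide, would remove both failure modes because the count would no longer depend on reconstructing intent from incidental field values.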
FMCSA headquarters officials told us that SENTRI is intended to address the underlying IT challenges that limit the accuracy of CSA intervention counts by creating a dedicated data element that uniquely records the occurrence of each intervention type. Developing SENTRI in a timely manner is particularly critical, because data-driven targeted enforcement is FMCSA’s primary strategy for meeting its safety goals and further delays represent missed opportunities for FMCSA to accurately monitor and improve the CSA program. Moreover, unresolved data limitations would continue to preclude outside entities, such as auditing entities, from assessing the integrity of agency information, including the completeness and accuracy of computer-generated counts. In May 2016, we initiated a review to determine the extent to which FMCSA has evaluated the effectiveness of selected IT systems and to assess the extent to which FMCSA has implemented an IT governance structure; we plan to complete this work by June 2017. In addition, as we discussed above, FMCSA currently estimates that it will complete SENTRI development in April 2017. In light of our and FMCSA’s ongoing work in this area, we are not making a recommendation on this matter in this report. In its Strategic Plan: Fiscal Years 2015–2018, FMCSA identified the improved effectiveness and efficiency of CSA interventions as strategic outcomes. FMCSA has evaluated both of these strategic outcomes, but its evaluations had limitations. Specifically, FMCSA’s effectiveness evaluations did not produce sufficiently complete, appropriate, and accurate information on individual intervention types because of design and methodological limitations. Additionally, FMCSA’s efficiency evaluation is no longer current, because FMCSA has not taken steps to update the evaluation’s cost estimates, despite changes in the time and resources required to conduct CSA interventions.
FMCSA’s Strategic Plan: Fiscal Years 2015–2018 identifies improved effectiveness as a strategic outcome for CSA interventions. According to FMCSA, the agency conducts regular evaluations to determine how effectively its programs are achieving their effectiveness and other intended outcomes. To evaluate the effectiveness of CSA interventions specifically, FMCSA developed a statistical model intended to annually evaluate the combined effects of all of its interventions. According to FMCSA, the model is a revised version of a prior model that FMCSA used to evaluate the effectiveness of compliance reviews, before the implementation of the CSA program added new intervention types. FMCSA has also evaluated intervention effectiveness in other studies, but according to officials, the new annual model is the agency’s primary method of assessing intervention effectiveness. In a January 2015 report, the annual effectiveness model estimated the combined effect of four CSA intervention types on the crash rates of carriers in four size groups from fiscal year 2009 through fiscal year 2011. To assess effectiveness, the model estimated the change in group crash rates before and after carriers received one or more interventions, compared to the change in a comparison group of carriers that did not receive an intervention. This design accounted for the effects of some external factors that also could have influenced group crash rates, such as broad changes in weather or economic conditions. When used appropriately, a comparison group design is a key strength of a model, such as the one used by FMCSA, as is the use of statistical inference to evaluate the certainty of the model’s results. Federal standards for internal control state that agencies should use quality information to determine the extent to which they are achieving their intended program outcomes. 
Characteristics of quality information include complete, appropriate, and accurate information that helps management make informed decisions and evaluate the entity’s performance in achieving strategic outcomes. Because, according to headquarters officials, the annual effectiveness model is the primary method FMCSA uses to evaluate CSA intervention effectiveness and FMCSA intends to use the model on a recurring basis, we conducted a detailed assessment of the model. As discussed below, we identified several design and methodological limitations in FMCSA’s annual effectiveness model, including the design of its comparison groups, a design limitation that can impact the quality of information that the model produces (see app. III for our complete assessment). FMCSA’s report concluded that applying at least one intervention reduced crash rates for three out of four carrier size groups in fiscal year 2009 and fiscal year 2011 and provided positive safety benefits. The report also concluded that warning letters independently reduced group crash rates and increased safety benefits. However, FMCSA’s annual evaluation did not provide sufficiently complete information on intervention effectiveness because the evaluation did not assess all intervention types or intervention patterns that FMCSA commonly applies. Specifically, the evaluation did not explicitly measure follow-on interventions, including cooperative safety plans, notices of claim, and notices of violation. According to the January 2015 report, the evaluation did not include cooperative safety plans because MCMIS data for that intervention were not consistently complete or accurate. FMCSA officials told us that the model does not specifically exclude notices of claim and notices of violation, but rather simply does not distinguish between investigations that result in follow-on interventions and those that do not. 
According to FMCSA, the model does not make this distinction because FMCSA typically applies follow-on interventions, such as notices of claim, within 90 days of conducting an investigation, based on the investigation’s findings. We determined that FMCSA’s model cannot analyze such intervention patterns, because the agency designed it to identify only a carrier’s first intervention in a fiscal year. As a result, FMCSA does not have information on the unique effectiveness of its specific follow-on interventions, which are critical for enforcing regulatory compliance. FMCSA could potentially increase the breadth of analysis for its model by using a design similar to the evaluation of FMCSA’s Operational Model Test, conducted by the University of Michigan Transportation Research Institute (UMTRI) in August 2011. That evaluation identified common patterns of interventions with sufficient data for analysis and then estimated each pattern’s effectiveness. The combined estimates supplemented those for individual intervention types. The annual evaluation does not provide information that appropriately reflects how FMCSA designed and implemented CSA interventions. The agency designed CSA interventions to replace the resource-intensive “one-size-fits-all” compliance review with a range of intervention types intended to better address safety problems specific to individual carriers. According to FMCSA’s Strategic Plan: Fiscal Years 2015–2018, the agency’s strategy to reduce the number of unsafe and high-risk carrier behaviors is to create and apply appropriate interventions. As we previously discussed, FMCSA’s application of individual intervention types depends upon a combination of eFOTM rules and managers’ day-to-day discretion. For example, according to eFOTM, managers generally assign offsite investigations when a carrier has percentiles above intervention thresholds in three or fewer BASICs. However, managers have the discretion to apply an onsite focused investigation instead, based on the carrier’s circumstances.
Additionally, FMCSA may apply multiple intervention types for the same carrier over time, resulting in commonly observed intervention patterns. For example, one official told us that most carriers that FMCSA investigates have, at one time, been issued a warning letter. In contrast to the design and implementation of CSA interventions, FMCSA’s model does not include an assessment of either individual intervention types or common intervention patterns. Instead, the model estimates the impact of all interventions combined that were performed during a 12-month period being measured. Recommended practices for program evaluation call for program managers to attempt to separately evaluate multiple types of program activities that seek to achieve a common outcome; in this case, multiple intervention types that seek to improve carrier safety performance. Consistent with these practices, FMCSA’s Strategic Plan: Fiscal Years 2015–2018 calls for the agency to use data to make smarter day-to-day decisions and to determine the impact that various rules have on decreasing crashes, injuries, and fatalities by conducting regular program evaluations and effectiveness reviews. Therefore, to provide information appropriate to the design and implementation of CSA interventions, an evaluation should assess how effectively each intervention type or common intervention patterns addressed the safety problems of the carriers that received them. This specific information could help FMCSA identify the circumstances under which different types of interventions are effective and help managers optimize their choice of interventions on a day-to-day basis as the agency implements the program. 
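The pattern-level analysis discussed above, in which each carrier's interventions are ordered by date and the most common sequences are tallied, can be sketched in a few lines of Python. This is an illustrative sketch only: the data layout, carrier IDs, dates, and counts are hypothetical and do not reflect FMCSA's actual MCMIS or EMIS schema.

```python
from collections import Counter

def common_patterns(interventions, top_n=3):
    """Order each carrier's interventions by date and tally the
    resulting sequences of intervention types.

    `interventions` is a list of (carrier_id, date, intervention_type)
    tuples -- a simplified, hypothetical stand-in for intervention
    records."""
    by_carrier = {}
    # Tuple sorting orders records by carrier, then date (ISO strings
    # sort chronologically).
    for carrier_id, _date, itype in sorted(interventions):
        by_carrier.setdefault(carrier_id, []).append(itype)
    # Count each distinct ordered sequence of intervention types.
    pattern_counts = Counter(tuple(seq) for seq in by_carrier.values())
    return pattern_counts.most_common(top_n)

# Hypothetical records.
records = [
    ("C1", "2011-01-05", "warning letter"),
    ("C1", "2011-06-10", "onsite focused investigation"),
    ("C2", "2011-02-01", "warning letter"),
    ("C2", "2011-08-15", "onsite focused investigation"),
    ("C3", "2011-03-20", "warning letter"),
]
print(common_patterns(records))
# [(('warning letter', 'onsite focused investigation'), 2), (('warning letter',), 1)]
```

Once tallied this way, an evaluator could estimate effectiveness separately for each pattern with sufficient data, as the UMTRI evaluation did.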
Although officials told us that FMCSA designed the model to measure the cumulative effect of FMCSA contact with carriers through CSA interventions and not to analyze individual intervention types, FMCSA used the model to draw conclusions about the safety benefits of warning letters, one of the most common intervention types used according to FMCSA estimates. Specifically, FMCSA concluded that “this suggests that the warning letter in and of itself can be an effective tool for improving motor carrier safety.” However, as accepted practices for designing evaluations explain, quality evaluations should draw conclusions commensurate with the power of the design. Because FMCSA did not evaluate the separate effect of warning letters, it lacks specific analytical evidence to support its conclusion. As a result, FMCSA lacks quality information needed to estimate how each intervention type, including warning letters, or common intervention patterns affect motor carrier safety performance and address carriers’ specific safety problems. FMCSA headquarters officials stated that they were unsure if it was possible to measure the effects of individual intervention types or common intervention patterns because the quantity of available data may not be sufficient to produce reliable results. However, as previously discussed, FMCSA’s August 2011 evaluation of the CSA Operational Model Test, conducted by UMTRI, assessed the effectiveness of each intervention type and common intervention patterns for carriers that received multiple interventions. An evaluation of these effects was possible for the Operational Model Test, despite the fact that UMTRI had less data available than FMCSA would generate after nationwide implementation of the CSA program. According to the study, UMTRI assessed such patterns to provide a more detailed look at the effectiveness of the interventions, in light of how FMCSA actually applied them in the field. 
Furthermore, in March 2016, FMCSA conducted separate evaluations of how two specific intervention types—onsite focused and offsite investigations—influenced carriers’ BASIC percentiles in calendar years 2011 and 2012. By combining multiple years of intervention data, as FMCSA did in these evaluations, rather than using a fiscal year construction, which officials told us FMCSA used in the annual evaluation for administrative reasons, FMCSA could potentially overcome data sufficiency limitations (see app. III). FMCSA and independent evaluators have identified a need for this level of detailed information in the management and implementation of interventions. For example, an internal FMCSA working group determined that the agency needed a more detailed understanding of the effectiveness of onsite focused investigations to empower investigators to select the most appropriate intervention to change carrier behavior. Additionally, in its 2014 assessment, the Independent Review Team that DOT tasked with reviewing FMCSA’s compliance review processes identified a need for FMCSA to perform consistent, detailed evaluations of effectiveness by enforcement tool, such as intervention types. Without detailed evaluations, the team said, FMCSA would be unable to focus resources on using its most effective tools or to reconfigure tools that are not meeting the agency’s goals. Nonetheless, as discussed earlier in this report, FMCSA’s ability to accurately identify specific intervention types is limited. Accepted practices for designing evaluations state that quality evaluations should rely on credible data that are sufficiently free of errors that could lead to inaccurate conclusions. Taking steps to reliably and accurately identify each intervention type in the data used to support its evaluations would help FMCSA conduct evaluations that produce information appropriate to the design and implementation of CSA interventions. 
Uses and Characteristics of Comparison Groups

A comparison group, in the context of typical designs for evaluating program effectiveness, represents what would have happened in the absence of a program and is used to rule out alternative explanations for changes in outcomes. In a truly randomized experiment, this would be the control group. In a quasi-experimental evaluation, like FMCSA’s annual effectiveness evaluation, where participants (i.e., carriers) are not sorted randomly into groups, the comparison group should be constructed to be as similar as possible to the group being influenced by the program (i.e., carriers receiving interventions), in order to draw strong conclusions about the effects of the program. The groups should be similar enough that any difference in outcome can be plausibly attributed to the intervention type being evaluated.

FMCSA’s annual and separate effectiveness evaluations had methodological limitations that reduced their ability to accurately attribute changes in carrier safety behavior solely to interventions. Because of these limitations, FMCSA may not have accurately accounted for factors other than FMCSA’s interventions that could be responsible for the outcomes observed. Most notably, FMCSA did not consistently use a comparison group design, which compares outcomes among carriers that did and did not receive interventions, for its effectiveness evaluations. When FMCSA did use this design, it constructed comparison groups that did not sufficiently account for external factors that could affect group crash rates. According to recommended practices for designing program evaluations, comparison group designs are typical for assessing program effectiveness, because they can isolate a program’s unique effects when the comparison groups are sufficiently similar to the groups affected by a program. 
See appendix III for a complete assessment of the limitations we identified in FMCSA’s annual effectiveness evaluation, along with accepted practices that could help FMCSA to address them. FMCSA’s separate evaluations of onsite focused and offsite investigation effectiveness did not use a comparison group and so, as previously discussed, did not account for external factors that could have influenced changes in motor carriers’ BASIC percentiles. Officials stated that they determined a comparison group method was not appropriate for FMCSA’s evaluation of onsite focused investigations because they were concerned that a comparison group would overstate the effectiveness of onsite focused investigations due to the differing safety profiles of carriers that receive each intervention type. FMCSA headquarters officials stated that they chose to use the same methodology as the onsite focused investigation effectiveness evaluation for the separate evaluation of offsite investigation interventions. Although FMCSA did use a comparison group approach in its annual effectiveness evaluation, we identified limitations with FMCSA’s approach that affect the model’s ability to accurately attribute changes in crash rates to interventions. For example, FMCSA’s comparison group was observed over a different measurement period from the carriers that received interventions, so that the two groups were not matched on broad external factors, such as weather and economic conditions. By not matching the measurement period, FMCSA’s use of a comparison group was limited in its ability to account for external factors, as intended. Additionally, in the 2015 evaluation, FMCSA’s comparison groups held constant the effects of carrier size, but did not hold constant key factors that could influence intervention outcomes, such as pre-intervention safety behaviors as measured by regulatory violations or BASIC percentiles. 
FMCSA headquarters officials said that comparison groups in the model accounted only for carrier size because of data limitations and their concern that accounting for too many additional variables would reduce the power of the model. Specifically, officials said that FMCSA does not currently have sufficient data to assess the effectiveness of agency interventions for motor carriers with different characteristics, such as the total number of miles a carrier’s vehicles travel per year. However, FMCSA has previously used the data it has available to hold constant important factors external to the program and attribute changes in outcomes to its interventions with greater accuracy (see app. III). For example, FMCSA’s August 2011 evaluation of the Operational Model Test, conducted by UMTRI, matched groups of carriers that did and did not receive various types of interventions on several key characteristics, such as the distributions of pre-intervention crash rates and BASIC percentiles. Without using a robust comparison group or a similar method, FMCSA cannot accurately determine whether changes in motor carrier safety performance are a result of interventions or whether other factors are responsible. Without more complete, appropriate, and accurate information on the effectiveness of individual CSA intervention types, FMCSA lacks information it needs to make informed decisions and evaluate its performance in achieving its effectiveness outcome for CSA interventions. Additionally, FMCSA’s ability to accurately identify specific intervention types by analyzing MCMIS and EMIS data is limited due to the data limitations which we described earlier in this report. Without taking steps to address these limitations, FMCSA cannot accurately evaluate how effectively CSA interventions improve motor carriers’ safety performance. 
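As a rough illustration of the matching idea described above, the following sketch greedily pairs each carrier that received an intervention with the most similar untreated carrier, using pre-intervention crash rate and BASIC percentile as matching characteristics. The field names, carrier IDs, distance weighting, and figures are all hypothetical; UMTRI's actual matching procedure was more sophisticated than this one-to-one nearest-neighbor approach.

```python
def match_comparison_group(treated, untreated):
    """Greedy one-to-one nearest-neighbor matching: pair each treated
    carrier with the untreated carrier closest on pre-intervention
    crash rate and BASIC percentile. Field names and the distance
    weighting are hypothetical."""
    available = list(untreated)
    pairs = []
    for t in treated:
        # Distance combines crash-rate difference with percentile
        # difference scaled to a comparable magnitude.
        best = min(
            available,
            key=lambda c: (abs(c["crash_rate"] - t["crash_rate"])
                           + abs(c["basic_pct"] - t["basic_pct"]) / 100.0),
        )
        available.remove(best)  # one-to-one matching without replacement
        pairs.append((t["id"], best["id"]))
    return pairs

# Hypothetical carriers: one treated, two untreated candidates.
treated = [{"id": "T1", "crash_rate": 0.80, "basic_pct": 90}]
candidates = [
    {"id": "U1", "crash_rate": 0.75, "basic_pct": 88},
    {"id": "U2", "crash_rate": 0.20, "basic_pct": 30},
]
print(match_comparison_group(treated, candidates))  # [('T1', 'U1')]
```

Matching on pre-intervention characteristics this way is what lets any remaining difference in outcomes be more plausibly attributed to the intervention itself.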
As with its effectiveness outcome, FMCSA identified improved efficiency in its Strategic Plan: Fiscal Years 2015–2018 as one of its strategic outcomes for CSA interventions. We have previously reported that agencies should measure program efficiency by considering the relationship between two elements: (1) inputs, such as costs or hours worked, and (2) desired results, such as a program’s effect on conditions or behaviors. In the past, FMCSA evaluated the efficiency of CSA interventions using both of these elements. Specifically, UMTRI’s August 2011 evaluation estimated the average cost of conducting individual intervention types and measured the effects on carrier safety performance of those CSA intervention types, as well as common intervention patterns, in four pilot test states during an 8-month period from October 2008 through May 2009. Since the UMTRI evaluation, FMCSA has continued to evaluate the effectiveness of CSA interventions. However, as we describe below, FMCSA has not taken similar steps to update its cost information—information FMCSA would need to understand the relationship between both efficiency elements. The UMTRI evaluation developed average cost estimates for each intervention type by considering four cost variables: labor hours, government miles traveled, vouchers, and other expenses. The evaluation studied 920 interventions applied to 586 carriers, with the number of carriers receiving each intervention type ranging from 6 carriers for notices of violation to 249 carriers for onsite focused investigations. The evaluation concluded that cooperative safety plans and notices of violation had the lowest average estimated costs, at $95 and $118, respectively. Onsite comprehensive investigations and onsite focused investigations had the highest estimated costs, averaging $1,038 and $677, respectively. 
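The relationship between the two efficiency elements described above can be expressed as a simple cost-per-result ratio. In the sketch below, the $95 cost figure comes from the UMTRI estimates cited in the text, but the effect measure (crashes avoided per 100 interventions) is a hypothetical placeholder, since the evaluation does not pair costs with effects in this form.

```python
def cost_per_crash_avoided(avg_cost_per_intervention, crashes_avoided_per_100):
    """Relate inputs (average cost of one intervention) to desired
    results (estimated crashes avoided per 100 interventions).
    Returns the estimated cost per crash avoided; the effect measure
    is an illustrative placeholder, not a published FMCSA figure."""
    total_cost_per_100 = avg_cost_per_intervention * 100
    return total_cost_per_100 / crashes_avoided_per_100

# $95 is UMTRI's low-end average cost (cooperative safety plans);
# 5 crashes avoided per 100 interventions is hypothetical.
print(cost_per_crash_avoided(95, 5))  # 1900.0
```

A ratio like this is only meaningful if both elements are current, which is why the report emphasizes updating the cost estimates alongside the effectiveness evaluations.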
Since UMTRI conducted its August 2011 efficiency evaluation, FMCSA has continued to use the results to report and estimate the efficiency of interventions. For example, in its Budget Estimates: Fiscal Year 2017 report, FMCSA requested $2.5 million and 50 additional Program Analysts to complete the last phase of the CSA program, including nationwide implementation of offsite investigations. To support this request, FMCSA stated that offsite investigations were “extremely efficient” and specifically cited the evaluation’s cost estimate. Similarly, FMCSA headquarters officials told us they currently use the UMTRI evaluation to estimate efficiency and to understand the relative costs of individual intervention types. However, cost estimates from UMTRI’s August 2011 evaluation are no longer current, because the time and resources needed to conduct interventions have changed, and the estimates are not representative of all states.

The evaluation’s estimates are no longer current because the time and resources needed to conduct interventions have changed. For example, in April 2013, FMCSA implemented a substantial change to the way investigators conduct investigations, called EIT, which is intended to help identify the root cause of motor carriers’ safety problems. FMCSA headquarters officials told us that, although using EIT takes additional time, it results in improved motor carrier safety performance.

The evaluation’s estimates do not represent costs in all states. Specifically, the evaluation stated that its estimates pertain only to the four states studied. Thus the evaluation’s estimates may not appropriately represent the average costs associated with applying interventions in the remaining 46 states, the District of Columbia, and Puerto Rico.

Federal standards for internal control state that agencies should use quality information to achieve the agency’s intended program outcomes. Quality information includes information that is current. 
FMCSA headquarters officials told us that they have not taken steps to update the cost estimates from the UMTRI evaluation to determine current resources used in all states to conduct each intervention type because they believed that FMCSA policy and guidance were sufficiently well designed to enable division managers to select the least resource-intensive intervention type necessary to correct a carrier’s safety problem. In March 2015, FMCSA’s then-Acting Administrator testified that given FMCSA’s limited resources relative to the size of the regulated motor carrier population, it is imperative for FMCSA to apply its resources efficiently. However, without cost estimates for CSA interventions that are current and representative of all states, FMCSA lacks information it needs to understand the most efficient methods of conducting CSA interventions in all states. Because FMCSA lacks current cost information, it also cannot evaluate or understand the relationship between these costs and the effectiveness of CSA interventions. FMCSA has taken steps intended to improve the effectiveness and efficiency of CSA interventions—strategic outcomes—principally by establishing a working group to address these issues and implementing some of the group’s recommendations. However, FMCSA has not established performance measures to monitor progress toward achieving its intended efficiency outcome for interventions. Without establishing measures for both outcomes, FMCSA will remain limited in its ability to monitor program performance and to balance these two priorities, as needed. In April 2014, FMCSA formed a Continuous Improvement Working Group (CIWG) tasked with improving the effectiveness and efficiency of the CSA program, including CSA interventions. 
The CIWG, composed of FMCSA staff from divisions, service centers, and headquarters, as well as state partners, had the objective of assessing the agency’s intervention and prioritization processes and recommending improvements that increase program effectiveness and efficiency. To develop its recommendations, the working group reviewed available intervention data—such as investigation reports—and assessed current intervention practices by surveying and interviewing field staff, among other things. In February 2015, the CIWG made 20 recommendations intended to achieve its effectiveness and efficiency objectives. According to FMCSA officials, the agency had implemented 12 of these recommendations as of April 2016 and was working to implement the remaining 8 recommendations. Implemented recommendations include:

Changing FMCSA’s high-risk definition and prioritization criteria: In March 2016, FMCSA adopted a change to its definition of high-risk carriers and the criteria it used to prioritize carriers for intervention. For example, as discussed above, for a carrier to be defined as high-risk under FMCSA’s new criteria, it has to exceed higher percentile thresholds than previously used in at least two of four specified BASICs. According to FMCSA, this decreases the total number of high-risk carriers but better identifies carriers at a high risk for crashes and will allow investigators to investigate higher-risk carriers sooner, using current resources. However, FMCSA’s continued reliance on BASIC percentiles supported by insufficient safety data, which we identified in our 2014 report and discussed above, will limit its ability to effectively identify and prioritize the highest-risk carriers for intervention. 
Changing criteria to receive warning letters: In January 2016, FMCSA expanded the criteria it used to determine which motor carriers receive warning letters in an effort to reach more carriers and, according to FMCSA, to prevent further non-compliance before a more intensive intervention type becomes necessary. Specifically, the CIWG recommended that FMCSA send warning letters to carriers with BASIC percentiles above the intervention threshold in more BASICs than previously allowed, while shortening the time period carriers have after receiving a warning letter to improve their safety performance before being prioritized for an investigation intervention. The CIWG projected that implementing the change would increase the number of warning letters that FMCSA issued by 30 percent; however, the CIWG noted that issuing a warning letter that is not effective may merely postpone the eventual necessity of a carrier receiving an investigation intervention and that issuing more frequent warning letters could dilute their effectiveness.

Establishing intervention quality review procedures: In March 2016, FMCSA issued two memorandums requiring field staff to use tools that FMCSA developed to evaluate and improve the quality of some intervention types. Specifically, FMCSA developed one tool to measure the extent to which investigators appropriately and accurately conducted onsite focused and comprehensive investigations and completed required documentation. Division managers are expected to use the tool to evaluate a selected sample of investigation reports quarterly, identify areas of needed improvement, and provide training to improve report consistency and quality. Similarly, FMCSA developed another tool that measures the extent to which investigators’ enforcement cases for notices of claim and notices of violation include sufficient documentation to meet evidentiary requirements. 
The memorandum requires division managers to ensure that each notice is reviewed to identify areas for improvement and ensure that enforcement cases are properly completed. According to FMCSA officials, FMCSA intends to use the results of these evaluations to identify training or policy clarifications needed to continuously improve the application and effectiveness of each intervention performed. As previously discussed, FMCSA’s Strategic Plan: Fiscal Years 2015–2018 identified improved effectiveness and efficiency as strategic outcomes of CSA interventions. FMCSA headquarters officials told us that effectiveness and efficiency are complementary outcomes that FMCSA strives to balance. For example, according to officials, while using EIT requires more time and decreases the number of investigations that FMCSA can conduct (i.e., efficiency), it also increases investigation quality (i.e., effectiveness). Thus, senior FMCSA officials stressed the importance of considering both effectiveness and efficiency in any set of measures used to monitor interventions, and stated that without treating these two outcomes as parts of a whole, FMCSA cannot achieve its goals for CSA interventions. FMCSA has established some measures for its effectiveness outcome, and monitors these measures on an annual and ongoing basis. While we identified several limitations with the design and methodology FMCSA used in its effectiveness evaluation above, FMCSA has established a measure for intervention effectiveness—crash rates—and annually monitors the agency’s performance against that measure. Officials told us that FMCSA also monitors effectiveness using investigation outcome measures, such as violation rates and safety ratings, on an ongoing basis. However, FMCSA has not established measures to monitor progress toward achieving its efficiency outcome. 
According to headquarters officials, FMCSA considers the efficiency outcome to include two dimensions: (1) the number of carriers FMCSA reaches through interventions and (2) the resources required for FMCSA to conduct interventions, including the time and travel required to complete investigations. While officials stated that FMCSA monitors some information related to efficiency, such as the number of investigations completed and investigative outcomes, officials acknowledged that FMCSA has not formally established measures for its efficiency outcome. Leading practices for performance management state that agencies should express outcomes in a measurable form and establish a set of performance measures that help monitor progress toward achieving each outcome. Additionally, our work has shown that agencies should create a set of performance measures that addresses important dimensions of program performance and balances competing priorities to increase the usefulness of performance plans in guiding decisions. While our past work has identified challenges some federal agencies face in developing and using outcome-based efficiency measures, it has also highlighted the importance of developing such measures. Because FMCSA does not have a complete set of measures that reflects both its effectiveness and efficiency outcomes for CSA interventions, FMCSA managers lack benchmarks needed to regularly monitor progress toward achieving the outcomes. FMCSA also lacks information needed to balance these priorities and guide management decisions about FMCSA’s application of interventions. FMCSA’s limited resources and the increase in crashes involving motor carriers in recent years highlight the importance of ensuring that FMCSA regularly measures and monitors progress toward achieving both of these strategic outcomes. 
Fatalities involving motor carriers have increased—rising from about 4,200 in 2011 to almost 4,500 in 2015—and interventions can play a critical role in reversing this troubling trend. FMCSA aims to reduce such fatalities by using a data-driven approach that identifies and intervenes with the highest-risk motor carriers early. To monitor its performance, FMCSA has identified improved effectiveness and efficiency as strategic outcomes for CSA interventions and has taken some steps to improve agency performance in these areas. For example, since March 2016, FMCSA has required field staff to use tools that FMCSA developed to evaluate and improve the quality of onsite investigations and two follow-on interventions. However, we identified important limitations in the information that FMCSA used to evaluate the effectiveness and efficiency of interventions. For example, FMCSA’s effectiveness evaluations did not produce sufficiently complete, appropriate, and accurate information on individual intervention types or common intervention patterns, because of design and methodology limitations. Although FMCSA officials expressed concern about potential sample size limitations when evaluating the effectiveness of individual intervention types or common intervention patterns, the August 2011 UMTRI evaluation that FMCSA sponsored as part of its Operational Model Test demonstrated that such evaluations are feasible. Similarly, FMCSA uses cost estimates from UMTRI’s evaluation to understand the efficiency benefits of interventions. However, the evaluation’s cost estimates are no longer current because the time and resources needed to conduct interventions have changed and are not representative of costs nationwide. Without current cost estimates that are representative of nationwide costs, FMCSA lacks information it needs to understand the most efficient methods of conducting CSA interventions and cannot assess the relationship between these costs and intervention effectiveness. 
Moreover, long-standing delays in developing SENTRI software have compromised the quality of intervention information, thereby limiting FMCSA’s ability to accurately and effectively monitor trends in its application of interventions over time and to evaluate intervention effectiveness. As FMCSA continues its efforts to address data limitations that affect the accuracy of intervention information, it is important that FMCSA not delay taking steps to improve how it currently evaluates the effectiveness and efficiency of CSA interventions by ensuring, for example, that its annual effectiveness evaluation addresses other limitations we have identified. FMCSA has dedicated significant resources to transition from a costly one-size-fits-all approach to a range of more effective and efficient interventions. However, without improving the quality of information that FMCSA uses to evaluate its performance, the agency will continue to lack the information it needs to determine the extent to which it is achieving these fundamental programmatic improvements. In addition, we found that although FMCSA has established some performance measures for its effectiveness outcomes, the agency has not established measures to monitor progress toward achieving its efficiency outcome. FMCSA needs information on all dimensions of its effectiveness and efficiency outcomes to balance these priorities and guide management decisions about its application of interventions.

To determine whether CSA interventions influence motor carrier safety performance, the Secretary of Transportation should direct the FMCSA Administrator to:

Identify and implement, as appropriate, methods to evaluate the effectiveness of individual intervention types or common intervention patterns to obtain more complete, appropriate, and accurate information on the effectiveness of interventions in improving motor carrier safety performance. 
In identifying and implementing appropriate methods, FMCSA should incorporate accepted practices for designing program effectiveness evaluations, including practices that would enable FMCSA to more confidently attribute changes in carriers’ safety behavior to CSA interventions.

To understand the efficiency of CSA interventions, the Secretary of Transportation should direct the FMCSA Administrator to:

Update FMCSA’s cost estimates to determine the resources currently used to conduct individual intervention types and ensure FMCSA has cost information that is representative of all states.

To enable FMCSA management to monitor the agency’s progress in achieving its effectiveness and efficiency outcomes for CSA interventions and balance priorities, the Secretary of Transportation should direct the FMCSA Administrator to:

Establish and use performance measures to regularly monitor progress toward both FMCSA’s effectiveness outcome and its efficiency outcome.

We provided a draft of this report to the DOT for review and comment. DOT provided written comments, which are reprinted in appendix IV. In its written comments, DOT concurred with our recommendations. DOT also described actions that FMCSA has taken to improve the CSA program and noted that CSA interventions have been shown to effectively improve motor carriers’ safety behavior. As stated in this report, the evaluations that FMCSA uses to assess intervention effectiveness did not produce sufficiently complete, appropriate, and accurate information on individual intervention types because of design and methodological limitations that limited FMCSA’s ability to accurately attribute changes in carriers’ safety behavior solely to interventions. We believe that identifying and implementing appropriate methods to address these limitations will help to provide FMCSA with information it needs to evaluate its performance in achieving its effectiveness outcome for CSA interventions. 
In addition, DOT provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Transportation, and the Administrator of FMCSA. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The objectives of our report were to examine: (1) the extent to which the Federal Motor Carrier Safety Administration (FMCSA) has implemented Compliance, Safety, Accountability (CSA) interventions and how it has applied them; (2) the extent to which FMCSA has evaluated the effectiveness and efficiency of CSA interventions; and (3) any steps that FMCSA has taken to improve and monitor progress toward achieving its intended outcomes for CSA interventions. To determine the extent to which FMCSA has implemented CSA program interventions and how it applied them, we analyzed FMCSA intervention data from fiscal year 2010 through fiscal year 2015, the most recent fiscal year for which intervention information was available. Specifically, we analyzed data from two FMCSA data systems: (1) the Motor Carrier Management Information System (MCMIS) and (2) the Enforcement Management Information System (EMIS). If we determined that MCMIS and EMIS data were sufficiently reliable, we intended to then analyze whether there were any notable increases, decreases, or other trends in FMCSA’s application of interventions—across states, regions, and motor carrier types (e.g., fleet size). 
We also intended to determine common intervention patterns when carriers receive multiple interventions (e.g., warning letter, then offsite investigation, then notice of violation). To determine the reliability of FMCSA data, we requested a complete set of all MCMIS and EMIS data from fiscal year 2010 through fiscal year 2015. To develop our request, we conducted interviews with cognizant FMCSA officials as well as officials from the Volpe National Transportation Systems Center, which provides technical support to FMCSA’s data analysis. We would typically review documentation to prepare a data analysis plan; however, FMCSA could not provide up-to-date or complete data dictionaries or other reference documents for these data systems. Thus, we requested and FMCSA provided sample data tables and variables that we could use to identify the occurrence of each intervention type by analyzing MCMIS and EMIS data. Using the information that FMCSA provided, we performed electronic data testing to count how frequently FMCSA conducted each intervention type in fiscal year 2010 through fiscal year 2015. We then compared the results of our analysis against FMCSA-published sources to determine if the results included obvious errors or outliers. While we replicated FMCSA’s counts for warning letters and unsatisfactory/unfit out-of-service orders in all fiscal years, our comparison revealed differences in at least one fiscal year for all remaining intervention types. We subsequently met with FMCSA data analysis officials on several occasions to identify the cause of the differences that we identified. FMCSA officials told us that when they modified the algorithms used to count interventions, the agency applied the most recent algorithm to all previous data—including historical data—thereby changing the way that FMCSA counted interventions over time. As a result, FMCSA officials told us that we could not validate the results of our analysis against agency totals. 
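The electronic testing described above (counting interventions by type and fiscal year, then comparing the results against published totals) follows a simple pattern that can be sketched as below. The keys and counts are hypothetical, not actual FMCSA or GAO figures.

```python
def compare_counts(computed, published, tolerance=0):
    """Flag (intervention_type, fiscal_year) keys whose computed counts
    differ from published totals by more than `tolerance`. Both
    arguments map (intervention_type, fiscal_year) to a count; the
    keys and numbers here are illustrative."""
    discrepancies = {}
    for key, pub_count in published.items():
        got = computed.get(key, 0)
        if abs(got - pub_count) > tolerance:
            discrepancies[key] = {"computed": got, "published": pub_count}
    return discrepancies

# Hypothetical counts: one type matches, one does not.
computed = {("warning letter", 2012): 100, ("offsite investigation", 2012): 40}
published = {("warning letter", 2012): 100, ("offsite investigation", 2012): 45}
print(compare_counts(computed, published))
# {('offsite investigation', 2012): {'computed': 40, 'published': 45}}
```

A persistent discrepancy like the one flagged here is the kind of difference that prompted the follow-up meetings with FMCSA data analysis officials.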
After evaluating the reliability of these data for our analytical and reporting purposes, we concluded that the data were of undetermined reliability, because data limitations prevented an adequate and comprehensive assessment. This precluded our analysis of trends in FMCSA’s application of interventions across states, regions, and motor carrier types. As a substitute, we requested that FMCSA provide estimates for how frequently it applied each intervention type from fiscal year 2010 through fiscal year 2015 to identify general trends. After reviewing FMCSA documentation related to the estimates and interviewing responsible FMCSA officials, we determined that FMCSA’s estimates were sufficiently reliable for this purpose. We reviewed relevant regulations and FMCSA guidance and policy documents to identify how FMCSA should implement and apply interventions. For example, we reviewed the electronic Field Operations Training Manual (eFOTM), which is the principal guidance document for assigning intervention types. In addition, we interviewed FMCSA division officials in eight states that we selected based upon their participation in FMCSA’s Operational Model Test of the CSA program, geographic location, and program size, among other factors. Selected states included Georgia, Illinois, Kansas, Maryland, Massachusetts, Montana, Oklahoma, and Texas. We selected these states to get a range of perspectives on FMCSA’s application of interventions. For example, four of the states participated in FMCSA’s Operational Model Test and thus had experience implementing all eight CSA intervention types. Similarly, we selected two states from each service center. Although the information obtained from our interviews with officials from the selected states is not generalizable to all states or FMCSA divisions, it provided illustrative examples of how FMCSA is applying interventions as well as the perspectives of officials knowledgeable about the program.
In addition, we interviewed FMCSA officials from each service center—including FMCSA’s Eastern, Midwestern, Southern, and Western Service Centers—as well as headquarters officials from FMCSA’s Office of Enforcement, Office of Field Operations, and Office of Research and Information Technology. We also interviewed industry stakeholders, such as the Commercial Vehicle Safety Alliance, American Trucking Associations, Trucking Alliance, and the Owner-Operator Independent Drivers Association, to gain their perspectives on FMCSA’s intervention and enforcement activities. To determine the extent to which FMCSA has evaluated the effectiveness and efficiency of CSA interventions, we intended to conduct our own effectiveness evaluation to determine how interventions affect motor carrier safety and illustrate the strengths and limitations of particular evaluation designs. However, because MCMIS and EMIS data were of undetermined reliability, we instead reviewed the four evaluations the agency has conducted. Specifically, we reviewed: The University of Michigan Transportation Research Institute, Evaluation of the CSA 2010 Operational Model Test (August 2011); FMCSA, FMCSA Safety Program Effectiveness Measurement: Carrier Intervention Effectiveness Model, Version 1.0: Summary Report for Fiscal Years 2009, 2010, 2011 (January 2015); FMCSA, Analysis Brief: Effectiveness of Onsite Focused Investigations (March 2016); and FMCSA, Effectiveness of Offsite Investigations: Preliminary Analysis (March 2016). We conducted a more detailed assessment of the second report on FMCSA’s annual effectiveness evaluation model because, according to FMCSA headquarters officials, it is the primary method FMCSA uses to evaluate intervention effectiveness and FMCSA intends to use it on a recurring basis.
In addition, we reviewed FMCSA policy documents— such as FMCSA’s Strategic Plan: Fiscal Years 2015–2018 and eFOTM guidance—to determine how FMCSA used the information produced by each of its four evaluations. We also interviewed FMCSA headquarters officials responsible for developing policy and conducting data analysis as well as officials from the Volpe National Transportation Systems Center responsible for designing and conducting some FMCSA evaluations. To assess FMCSA’s effectiveness evaluations, we consulted GAO’s guidance on designing program evaluations, which describes accepted practices for evaluating program effectiveness based on GAO studies, policy documents, and program evaluation literature. We also consulted federal standards for internal control for using quality information, and drew on internal methodological expertise to assess the extent to which the designs, implementation, and results of FMCSA’s evaluations met quality information standards. To determine any steps that FMCSA has taken to improve and monitor progress toward achieving its intended outcomes for interventions, we reviewed relevant FMCSA strategic planning and policy documents that established such outcomes, principally FMCSA’s Strategic Plan: Fiscal Years 2015–2018. We reviewed reports from external entities that included recommendations intended to support improvements to FMCSA’s compliance and enforcement programs, such as the Independent Review Team’s July 2014 report and a March 2014 report from the Department of Transportation’s Office of Inspector General. Similarly, we reviewed FMCSA’s Continuous Improvement Working Group’s February 2015 report that included recommendations intended to achieve FMCSA’s effectiveness and efficiency outcomes. 
We also interviewed responsible division, service center, and headquarters officials to identify any steps FMCSA has taken to monitor or improve the effectiveness or efficiency of interventions as well as to determine their perspectives on the effects of these steps. For example, we interviewed responsible officials at headquarters who develop policy and oversee adherence to federal regulations by interstate motor carriers across the country to understand how they monitor or improve interventions. We also interviewed service center and division officials to understand the field’s involvement in FMCSA’s improvement activities. We compared the results of our documentary review and interviews against leading practices for performance management as well as key attributes of successful performance measures identified in our prior body of work. Although GPRAMA’s requirements apply at the departmental level (e.g., the Department of Transportation), we have previously stated they can serve as leading practices at other organizational levels, such as component agencies, offices, programs, and projects. We conducted this performance audit from July 2015 to October 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on the audit objectives. Headquarters officials told us that the Federal Motor Carrier Safety Administration (FMCSA) obligated funds to develop Safety Enforcement Tracking and Investigation System (SENTRI) software each year from fiscal year 2009 through fiscal year 2013; however, they could not determine the exact amount of funds because FMCSA did not track information technology investments at the project level during those years. 
In addition, the agency obligated about $12 million from fiscal year 2014 through fiscal year 2016 in contractor costs to develop the Compliance, Safety, Accountability (CSA) component of SENTRI (see table 5). FMCSA officials told us that FMCSA made some progress in developing the CSA component of SENTRI as a result of these investments. For example, FMCSA coordinated with field staff to identify system requirements. The Federal Motor Carrier Safety Administration (FMCSA), in conjunction with the John A. Volpe National Transportation Systems Center (Volpe), has modified an existing effectiveness model to develop a statistical model, the Carrier Intervention Effectiveness Model (CIEM), which annually measures the effectiveness of interventions. In January 2015, FMCSA published its first report using the CIEM, which evaluated the effectiveness of compliance reviews and interventions FMCSA conducted in fiscal years 2009, 2010, and 2011. We analyzed the CIEM and the January 2015 evaluation report, using accepted practices for designing program evaluations and internal staff expertise. Below, we identify methodological strengths and limitations of these efforts, in addition to potential methods FMCSA could use to improve the capabilities of its model to estimate program impacts. Specifically, we have identified strengths and limitations in four key aspects of FMCSA’s effectiveness model: (1) general design, (2) comparison group, (3) observation periods, and (4) statistical analysis and inference. According to FMCSA documentation, the CIEM is a statistical impact evaluation model that uses historical data to compare the safety improvement of carriers receiving FMCSA interventions (the treatment group) to carriers that do not (the comparison group).
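The difference-in-difference calculation at the core of this comparison can be illustrated with a brief sketch. The function and crash rates below are hypothetical and are not drawn from FMCSA’s model; they only show the arithmetic of subtracting the comparison group’s change from the treatment group’s change.

```python
# A minimal sketch of the CIEM's difference-in-difference logic: the change in
# the treatment group's crash rate, minus the change in the comparison group's
# crash rate. All rates below are hypothetical (crashes per 100 vehicles).

def net_crash_rate_change(treat_pre, treat_post, comp_pre, comp_post):
    """Difference-in-difference estimate of the intervention's net effect."""
    return (treat_post - treat_pre) - (comp_post - comp_pre)

net = net_crash_rate_change(treat_pre=6.0, treat_post=4.5,  # intervened carriers
                            comp_pre=5.0, comp_post=4.8)    # comparison carriers
print(round(net, 2))  # -1.3: rates fell 1.3 more in the treatment group
```

Subtracting the comparison group’s change is what accounts for industry-wide factors, such as weather or economic conditions, that would have shifted crash rates even in the absence of intervention.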
The January 2015 evaluation report assessed two intervention types that existed prior to the Compliance, Safety, Accountability (CSA) program, including compliance review investigations and Performance and Registration Information Systems Management letters, and four new CSA intervention types, including warning letters, offsite investigations, onsite focused investigations, and onsite comprehensive investigations. To estimate the impact of these interventions, the CIEM measures the difference between crash rates among carriers in the treatment group before and after receiving interventions, and then subtracts the difference in crash rates among carriers in the comparison group. The comparison group accounts for confounding factors (other than FMCSA interventions) that may affect safety performance during the post-intervention period, such as broad changes in weather or economic conditions. The model is designed to estimate the impact of interventions carried out in a single fiscal year and measures crash rates over a 12-month period following the first intervention a carrier receives in the fiscal year. The model estimates the combined impact of all interventions performed during 12-month periods, not the unique impact of each individual intervention type. We identified the following limitations: Lack of process evaluation: The CIEM is designed to evaluate impact, but does not include a process study. According to accepted practices for designing program evaluations, a program logic model or process evaluation that identifies the most important external influences on desired program outcomes is valuable in planning an impact evaluation that convincingly rules out most plausible alternative explanations for the observed results. Process evaluations clarify the program as implemented and specify which of its activities may be responsible for the observed outcomes. 
The CSA program’s logic model might include elements such as FMCSA and state safety inspectors and information systems; roadside inspections; safety interventions; and crashes, injuries, deaths, and monetary losses prevented. Without an initial process evaluation, the impact evaluation cannot precisely identify what aspects of the program affect safety outcomes, or whether the estimated impacts reflect the program as designed. Without studying the program’s implementation and comparing its theoretical logic model to actual practices, it is uncertain whether impact estimates represent the effectiveness of the program’s activities as designed or the activities that program staff happened to have used in practice. The CIEM’s ability to accurately evaluate the impact of the CSA program could be improved by taking into account the program’s strategy and goals, and studying how the program is implemented. Exclusion of intervention types: The January 2015 evaluation report did not include all intervention types. According to the report, the evaluation did not assess cooperative safety plans or direct notices of violation and direct notices of claim, because data on these intervention types had inconsistent completeness and accuracy. FMCSA officials told us that the evaluation included carriers that received follow-on notices of violation or claim, but it did not separately estimate their effects. When notices of violation or claim follow an investigation, the model implicitly estimates their effects in the post-intervention crash rate. However, the evaluation could have excluded some carriers that received a notice as their first intervention in the modeled year but not in the same fiscal year as the investigation. Aggregation of intervention types: The CIEM implicitly estimates how combinations of interventions affect safety, not the effects of each individual intervention type.
The model identifies only the first intervention that a carrier receives in the modeled year, and estimates the intervention’s effect on crash rates from that time of first contact. According to FMCSA officials, the model is designed to evaluate the effect of general FMCSA contact with carriers through interventions. Officials said that they tested some alternatives to using the first intervention in the fiscal year, including using a carrier’s most severe or last intervention in the fiscal year. Agency staff told us that they ultimately preferred to use the first intervention, because it represented the beginning of FMCSA’s influence on a carrier within the target time period. FMCSA did not seek to estimate the effectiveness of individual intervention types, because agency staff had concerns about the small amount of data available on specific intervention types and common intervention patterns when carriers receive multiple interventions over time. FMCSA officials stated that Volpe conducted preliminary analysis of the data used for the CIEM to determine potential sample sizes, but did not provide documentation on the sample sizes for each intervention type or common patterns of interventions. The challenges arising from small sample sizes could potentially be addressed by pooling together data from multiple years, rather than relying on intervention data from one fiscal year. Accepted practices of program and policy evaluation recommend that program managers attempt to separately evaluate multiple types of program activities that seek to achieve a common outcome. Such “comparative effectiveness (or efficiency)” evaluations give managers information on how various activities perform compared to each other. Variation in outcomes across settings or populations can be the result of variation in program operations, such as varying types or levels of enforcement.
Further, variation in outcomes associated with features under program control, such as different agency activities, may identify opportunities for managers to take action to improve performance. To obtain this kind of information on comparative effectiveness, the treatment effects of interventions could be disaggregated into more nuanced categories than simply having received at least one intervention. This approach could directly evaluate how each of several intervention types, or common intervention patterns, affects safety outcomes. Detailed performance information would better align with the design of the CSA program and give staff flexibility to choose from a range of intervention types that can address each carrier’s unique safety problems. With evidence of comparative effectiveness and efficiency, FMCSA would have better information on whether specific CSA interventions or combinations of interventions are more effective than others in certain circumstances. The challenges from small sample sizes on particular interventions could be addressed in several ways. Pooling together data from multiple years, rather than relying on intervention data from one fiscal year, might produce a sufficient sample for evaluating less commonly used interventions. A multi-year design might become viable as the CSA program continues to produce data over several years, though pooling data might increase the potential for unmeasured factors to influence safety outcomes. A process evaluation, as discussed above, could clarify how FMCSA field staff and state partners have implemented the program and could identify intervention types, or specific combinations of interventions, with sufficient data for analysis. Regression to the mean: The CIEM design does not fully account for the possibility that variation over time in the treatment group’s safety outcomes reflects regression to the mean.
Regression to the mean is a statistical phenomenon in which, following an extreme measurement assumed to be due to random sources of variation, such as sampling error, subsequent measurements are likely to be closer to the average, or mean. Under the CSA program, FMCSA prioritizes carriers to receive interventions based on whether the carriers’ percentiles exceed predetermined thresholds in any of seven behavioral analysis and safety improvement categories (BASIC). FMCSA’s decision to intervene with a carrier largely depends on BASIC percentiles, and officials use these percentiles, in addition to other criteria set out in guidance, to determine which type of intervention to apply. This is especially true for carriers that exceed the thresholds intended to identify the highest risk carriers, because FMCSA policy requires that those carriers receive onsite investigations. However, some carriers, especially those with few inspections or vehicles, may go above the intervention threshold in one measurement period due to anomalous events, but return below the threshold in the next measurement period due to factors unrelated to intervention. For example, a carrier may generally violate vehicle maintenance regulations at the industry average rate over the long term. The carrier’s estimated violation rate in FMCSA data—and its related BASIC percentile—may vary around the long-term average in any particular period. The degree of variation could reflect the actual inconsistency of the carrier’s maintenance practices over time or sampling error in the estimation of its violation rate, related to its frequency of inspection (exposure to violating regulations). A large deviation in one period from the long-term average could exceed BASIC percentile thresholds and trigger additional FMCSA oversight, but the deviation may not reflect a real change in the carrier’s long-term maintenance behavior.
In a subsequent period, the carrier’s BASIC percentile has a higher probability of returning to the long-term average than continuing at the extreme from the previous period, assuming the prior deviation was caused by random sources of variation, such as sampling error. The same process may apply to crash rates. According to recommended practices for evaluating the impact of a program, the evaluation must be carefully designed to rule out plausible alternative explanations for the results. The CIEM’s quasi-experimental design implicitly controls for differences across carriers that do not vary substantially over short time periods, which could include carrier management practices, leadership, and operating routes and procedures. The design controls for industry-wide changes over time that are constant across carriers, changes that could include weather and economic conditions. Lastly, the design explicitly controls for carrier size by stratifying the analysis by size groups. Although the size control may account for differences within carriers over time among the treatment carriers, the model includes few other controls that might specifically address this potential threat to valid causal inference. By not fully accounting for regression to the mean, the CIEM could be attributing changes in outcomes to CSA interventions, when those changes would have occurred on their own without intervention. Indeed, there is some evidence that some carriers’ safety behavior improves without intervention. For example, the Independent Review Team found that, of the carriers FMCSA prioritized for intervention in 2013, nearly 33 percent had a BASIC percentile that was above the threshold when they were assigned for intervention but was no longer above the threshold at the time of the review, suggesting that carriers’ safety performance may improve naturally without intervention.
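The regression-to-the-mean concern can be illustrated with a small simulation. The sketch below is ours, not FMCSA’s, and the violation rate, threshold, and carrier counts are hypothetical: every simulated carrier has the same constant underlying violation rate, yet carriers flagged for an extreme first-period count appear to “improve” in the second period without any intervention.

```python
# A minimal simulation of regression to the mean (illustrative, not FMCSA's).
# Each hypothetical carrier has the SAME constant underlying violation rate,
# so any apparent "improvement" among flagged carriers is purely statistical.
import math
import random

random.seed(1)
TRUE_RATE = 5     # expected violations per carrier per period (never changes)
THRESHOLD = 8     # flag carriers whose period-1 count exceeds this

def poisson(lam):
    # Knuth's method; the standard library's random module has no Poisson sampler
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

period1 = [poisson(TRUE_RATE) for _ in range(10_000)]
period2 = [poisson(TRUE_RATE) for _ in range(10_000)]
flagged = [i for i, count in enumerate(period1) if count > THRESHOLD]

mean1 = sum(period1[i] for i in flagged) / len(flagged)
mean2 = sum(period2[i] for i in flagged) / len(flagged)
print(f"flagged carriers, period 1 mean violations: {mean1:.1f}")  # well above 5
print(f"flagged carriers, period 2 mean violations: {mean2:.1f}")  # back near 5
```

An evaluation that compared only the flagged carriers’ before-and-after counts would credit this purely statistical improvement to intervention; matching on pre-treatment outcomes or comparing carriers just above and below the threshold, as discussed in this appendix, helps rule out that explanation.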
As we have previously reported, carriers with fewer vehicles experience wide variance in crash and violation rates, which can make them especially prone to regression to the mean. An alternative design might compare carriers just above and just below the intervention-triggering threshold (at a given point in time). This “regression discontinuity” design would lend itself better to interventions triggered automatically when carriers exceed some threshold and would better reflect the nature of the CSA program, as recommended in accepted practices for evaluation design. Another alternative approach that could address the regression to the mean issue would be to match carriers based on variation over time in the safety outcomes prior to exceeding a BASIC percentile threshold. This design would fall into a general class of methods that include pre-treatment outcomes as an additional covariate. This would enhance the comparison group’s control for any deviations in the outcome over time prior to treatment and thereby ensure that treatment carriers would be matched to comparison carriers with similar outcome dynamics. The CIEM constructs comparison groups using carriers that did not receive any of the model’s interventions during the modeled, prior, or subsequent year. The model assigns carriers to one of several comparison groups, using the same measure of size—the number of vehicles—used to assign carriers to treatment groups. These separate comparison groups are intended to eliminate differences associated with carrier size from the model’s calculation of adjusted crash rates. We identified the following limitation: Limited control: Accepted practices for program evaluation would classify the CIEM as a “quasi-experimental” design for estimating impact. Quasi-experimental designs compare outcomes in a treatment group to outcomes in a comparison group formed using non-random assignment. 
Due to the lack of randomized assignment, the treatment and comparison groups may differ on other factors that affect the outcome. To compensate for this potential bias, evaluators generally ensure that such confounding variables are held constant in the construction of the comparison group or use other methods of adjustment. The CIEM explicitly holds constant one factor in the construction of its comparison group: carrier size. The model implicitly controls for all factors that vary between the treatment and control groups but remain constant over time, such as state or region of operation. In addition, the model implicitly controls for all factors that vary over time and affect the treatment and comparison groups equivalently, such as industry-wide effects due to weather or economic conditions. The CIEM’s “difference-in-difference” design provides these forms of implicit control, and thus is a key strength of the model. However, FMCSA might try to construct a more robust comparison group that explicitly controls for more than just carrier size. Since the design controls for variables that are fixed across carriers and vary in the same ways within groups over time, FMCSA might construct a comparison group that controls for change across multiple variables. If reliable data were available, then potential variables could include: multiple measures of size; state of registration, inspections, or violations; driver characteristics; pre-treatment safety outcomes; pre-treatment BASIC percentiles; and pre-treatment inspections and regulatory violations. Controlling for pre-treatment outcomes and BASIC percentiles would be especially desirable and would address the potential limitation of regression to the mean because the fluctuations due to sampling error would be controlled by design. Data availability and reliability may limit FMCSA’s ability to include additional control variables to construct comparison groups.
For example, the 2015 evaluation report notes that data on vehicle miles traveled—a measure of carrier exposure to crashes—were less reliable than vehicle count data because, according to officials, they were sometimes incomplete or inconsistent across carriers. Similarly, officials told us that FMCSA considered using carrier operation type to construct comparison groups, but ultimately did not because some carriers reported multiple operation types or changed their operation types over time. Given the central importance of the CSA program to FMCSA’s enforcement efforts, the agency would benefit from improving or expanding data collection to support a more robust model. The CIEM defines different observation periods for the treatment and control groups. For the treatment group, the CIEM uses the date of the first intervention in the modeled fiscal year as the demarcation between a 12-month pre-intervention period and a 12-month post-intervention period. Pre- and post-intervention crash rates are measured for these 12-month periods. For the comparison group, in contrast, the CIEM defines 18-month periods preceding and following the midpoint of the modeled fiscal year to measure pre- and post-intervention crash rates. (This is because comparison carriers do not have an intervention date to define the pre- and post-intervention periods and measure crash rates.) The evaluation report noted that these longer 18-month periods ensure that the comparison group’s crash rates cover the entire treatment group time frame. To adjust for the 50 percent longer observation period for carriers in the comparison group, the evaluation divided crash rates for those carriers by 1.5 to yield annual crash rates. We identified the following limitation: Inconsistent time periods: According to accepted practices for program impact evaluation, a design must confidently rule out non-program influences that could cause changes in outcomes to occur.
The CIEM does not completely account for factors that might have affected crash rates for both treatment and comparison groups, because the lack of overlap between the measurement time periods does not hold constant factors unique to the season or period of measurement. For example, as FMCSA officials noted to us, crash rates are known to depend on seasonal and periodic changes in weather, and the model’s lack of overlap in measurement time periods meant that treatment and comparison groups were subject to different seasonal and periodic conditions. Other potential confounding variables include seasonal or periodic variation in economic demand and state enforcement resources. The design might use measurement periods that vary as a function of each observed intervention’s timing and characteristics. For example, the design might construct a custom comparison group for each member of the treatment group, selecting multiple comparison carriers using size and other potential confounding variables. A matching design in which each treatment carrier is compared to one or more control carriers would allow identical observation periods while still producing an impact estimate for interventions applied during a single fiscal year. The CIEM’s use of statistical inference is appropriate to quantify the uncertainty of its impact estimates. Carrier behavior and safety outcomes are consistent with a partially random data generation process. In this context, statistical inference estimates the sampling variability of the data over multiple hypothetical realizations of the same process. Applying inferential statistical methods appropriately reflects the potential for the observational regulatory, intervention, and safety data to vary partially at random. A critical estimate in the CIEM is the net crash rate change for the treatment group. 
The model defines this quantity as the difference between the treatment group’s pre- and post-intervention crash rates, after subtracting the crash rate change in the comparison group. The model then tests whether the net change differs from zero at the 0.05 statistical significance level. The model excludes insignificant findings from its subsequent calculations of total safety benefits. The model estimates safety benefits by transforming the estimated net change in crash rates due to interventions into an estimate of total crashes avoided, using the treatment group’s pre-intervention crash rate per vehicle and post-intervention vehicle counts. The model uses historical crash severity data to further estimate injuries prevented and lives saved associated with each prevented crash. The model extrapolates these safety benefits—crashes avoided, injuries prevented, and lives saved—to carriers that received interventions but were excluded from the treatment group due to missing or outlier crash or vehicle count data, as well as, according to officials, intrastate carriers that were excluded from the treatment group in the January 2015 version of the model. The January 2015 evaluation notes that FMCSA assumed that these carriers will exhibit the same response to intervention as the carriers included in the model. Accordingly, the model adds the estimated safety benefits for carriers included in the model to those for carriers with outliers and missing data. The sum determines the aggregate estimated safety benefits. According to the 2015 evaluation report, FMCSA extrapolated safety benefits to the following numbers of carriers that received an intervention but were not included in the model: 9,567 carriers in fiscal year 2009; 9,929 carriers in fiscal year 2010; and 14,816 carriers in fiscal year 2011. We identified the following limitations: Multiple hypothesis tests: The model uses multiple statistical hypothesis tests to calculate total safety benefits.
The CIEM estimates impact on crash rates within four strata of treatment and comparison groups defined by carrier size. If the net change in crash rates within each stratum is statistically distinguishable from zero at the 0.05 significance level, the model uses the results to estimate total safety benefits by summing the estimated benefits across groups. Statistically insignificant results in any stratum provide zero safety benefits by assumption. In this sense, the model’s estimate of total safety benefits reflects the results of four separate hypothesis tests, each at the 0.05 significance level. For K independent tests, each conducted at significance level α, the group-wise probability of at least one false positive is 1 – (1 – α)^K; when K = 4 and α = 0.05, this probability is approximately 0.185 (see Hayes, 450). Accepted methods of hypothesis testing typically recommend adjusting the confidence level of each individual test, such as by applying Bonferroni adjustments, so that the group-wise error probability matches the analyst’s intended risk level for all planned tests. These adjustments typically produce lower alpha values for each individual test. Multiple hypothesis testing methods would be appropriate for the CIEM, given that it ultimately seeks to estimate total safety benefits as a function of multiple hypothesis tests. Without making these adjustments, the CIEM’s estimates of total safety benefits may have more risk of error than FMCSA intends to accept because the model does not conduct a joint test. That is, the probability that at least one group’s safety benefits equals zero may exceed the 0.05 significance level that FMCSA accepts in each individual test. An alternative approach might calculate the confidence interval of the summed safety benefit estimate and test whether it equals zero, consistent with the discussion below. Confidence intervals of estimated impacts: The CIEM uses inferential statistical methods to test the hypothesis that the impact for each pair of treatment and control groups equals zero.
However, the CIEM does not conduct statistical inference, such as estimating confidence intervals, when analyzing the net change in crash rates and associated total safety benefits. This approach is inconsistent, given that the model’s hypothesis test for a non-zero net change in crash rates implies that the quantity is a random variable with a sampling distribution and confidence interval. The model’s authors agree with this implication, suggesting that the test is equivalent to a “95 percent confidence interval that does not include zero.” The estimate of total safety benefits must also have a confidence interval, given that it is a function of the net change in crash rates. Nevertheless, the CIEM reports only point estimates of safety benefits, without conveying the statistical uncertainty that the model’s assumptions imply. Accordingly, the CIEM might estimate and report confidence intervals for its estimates of crash rate impact and safety benefits, in order to make the statistical inference consistent and quantify the uncertainty of its estimates. The current hypothesis testing approach may produce a point estimate for safety benefits that appears more precise than the underlying confidence interval would support. In addition to the individual named above, H. Brandon Haller (Assistant Director), William Colwell (Analyst in Charge), Katherine Blair, Melissa Bodeau, Russell Burnett, David Hooper, Benjamin Licht, Grant Mallie, Ifunanya Nwokedi, Malika Rice, Sandra Sokol, Niti Tandon, and Jeff Tessin made key contributions to this report.
As part of its mission to reduce crashes and fatalities involving large commercial trucks and buses, FMCSA seeks to use a data-driven approach to identify the highest-risk motor carriers and address safety problems by applying eight CSA program enforcement tools, called interventions, which range from warning letters to placing carriers out of service. A provision in a Senate report requires GAO to periodically assess FMCSA's implementation of the CSA program. This report examines the extent to which FMCSA has (1) implemented CSA interventions, (2) evaluated the effectiveness and efficiency of CSA interventions, and (3) monitored progress toward achieving outcomes. GAO reviewed FMCSA data and documentation on all eight CSA intervention types from fiscal years 2010–2015, including FMCSA's strategic planning documents, guidance, and program evaluations. GAO interviewed industry stakeholders and FMCSA officials in headquarters, in each of FMCSA's service centers, and in eight states selected for their participation in FMCSA's CSA pilot test, location, and program size, among other factors. In July 2010, the Federal Motor Carrier Safety Administration (FMCSA) chose to delay nationwide implementation of two of the eight interventions that FMCSA uses to address motor carrier safety concerns under its Compliance, Safety, Accountability (CSA) program. The delay stems from continuing difficulties in developing the software needed to support the two interventions: offsite investigations and the use of cooperative safety plans. The software under development is intended to help FMCSA overcome some of the information challenges it faces due to its reliance on legacy information systems. FMCSA estimates that the software development project will be completed by April 2017. FMCSA has conducted evaluations of the effectiveness and efficiency outcomes it established for the CSA program.
However, GAO identified several limitations in FMCSA's approaches that impact the usefulness of the evaluations: Intervention effectiveness: FMCSA has developed a statistical model to annually evaluate the combined effectiveness of interventions. Although the model has some key strengths, such as accounting for a broad range of external factors, GAO identified a number of design and methodology limitations that reduce the usefulness of its results. For example, the model does not include an assessment of individual intervention types. Without this type of specific information, FMCSA is hampered in its ability to identify the circumstances under which different types of interventions are effective. Similarly, these types of limitations affect FMCSA's ability to accurately draw conclusions about intervention effectiveness across all intervention types. Intervention efficiency: To assess the efficiency of CSA interventions, FMCSA has relied on a study that it sponsored and that was published in 2011. This study estimated the average cost of conducting interventions in four states from October 2008 through May 2009. However, FMCSA has not taken steps to update its cost estimates for interventions since the 2011 evaluation, despite changes since that time in the resources needed to conduct CSA interventions; nor has it taken steps to develop additional information that is representative of the costs in other states. Without current cost estimates that are representative of all states, FMCSA cannot appropriately assess the efficiency of its interventions. FMCSA has taken some actions to improve the effectiveness and efficiency of CSA interventions, but lacks measures to monitor progress. In April 2014, FMCSA established a working group to assess CSA interventions and make recommendations for improvement. As of April 2016, the group had made 20 recommendations, of which 12 had been implemented. 
However, GAO found that while FMCSA has established some performance measures for its effectiveness outcome that are appropriate, it has not established similar measures for its efficiency outcome. FMCSA headquarters officials told GAO that effectiveness and efficiency are complementary outcomes that FMCSA strives to balance. Without a complete set of measures for both outcomes, FMCSA lacks benchmarks needed to regularly measure progress to achieve these outcomes. GAO recommends that FMCSA evaluate the effectiveness of individual intervention types, update cost estimates so that they are current and representative of all states, and establish complete performance measures. The Department of Transportation concurred with all of GAO's recommendations.
Congress established FHA in 1934 under the National Housing Act (P.L. 73-479) to broaden homeownership, protect and sustain lending institutions, and stimulate employment in the building industry. FHA’s single-family programs insure private lenders against losses from borrower defaults on mortgages that meet FHA’s criteria for properties with one to four housing units. FHA historically has played a particularly large role among minority, lower-income, and first-time homebuyers. In 2006, 79 percent of FHA-insured home purchase loans went to first-time homebuyers, 31 percent of whom were minorities. In recent years, FHA’s volume of business has fallen sharply. More specifically, the number of single-family loans that FHA insured fell from about 1.3 million in 2002 to 426,000 in 2006. To help FHA adapt to recent trends in the mortgage market, in 2006 HUD submitted a legislative proposal to Congress that included changes that would adjust loan limits for the single-family mortgage insurance program, eliminate the requirement for a minimum down payment, and provide greater flexibility to FHA to set insurance premiums based on risk factors. According to HUD, a zero-down-payment mortgage product would provide FHA with a better way to serve families in need of down-payment assistance. As previously noted, some nonprofits that provide down-payment assistance receive contributions from property sellers. When a homebuyer receives down-payment assistance from one of these organizations, the organization requires the property seller to make a financial payment to the organization. These nonprofits are commonly called “seller-funded” down-payment assistance providers. A 1998 memorandum from HUD’s Office of the General Counsel found that funds from a seller-funded nonprofit were not in conflict with FHA’s guidelines prohibiting down-payment assistance from sellers. FHA does not approve down-payment assistance programs administered by nonprofits.
Instead, lenders are responsible for assuring that down-payment assistance from a nonprofit meets FHA requirements. Loans with down-payment assistance have come to constitute a substantial portion of FHA’s portfolio in recent years, particularly as the number of loans without such assistance has fallen sharply. For example, from 2000 to 2004, the total proportion of FHA-insured single-family purchase loans that had a loan-to-value (LTV) ratio greater than 95 percent and that also involved down-payment assistance from any source grew from 35 to nearly 50 percent. Assistance from nonprofit organizations, about 93 percent of which were funded by sellers, accounted for an increasing proportion of this assistance. Approximately 6 percent of FHA-insured loans received down-payment assistance from nonprofit organizations in 2000, but by 2004 this figure had grown to about 30 percent. FHA data for 2005 and 2006 indicate that the percentages of loans with down-payment assistance from any source and from seller-funded nonprofits remained at roughly 2004 levels. Growth in the number of seller-funded nonprofit providers and the greater acceptance of this type of assistance have contributed to the increase in the use of down-payment assistance. According to industry professionals, relatives have traditionally provided such assistance, but in the past decade or so other sources have emerged, including not only seller-funded nonprofit organizations but also government agencies and employers. The mortgage industry has responded by developing practices to administer this type of assistance; for instance, FHA policies require gift letters and documentation of the transfer of funds. Lenders also reported that seller-funded down-payment assistance providers have developed practices accepted by FHA and lenders.
For example, seller-funded programs have standardized gift letter and contract addendum forms for documenting both the transfer of down-payment assistance funds to the homebuyer and the financial contribution from the property seller to the nonprofit organization. As a result, for FHA-insured loans, lenders are increasingly aware of and willing to accept down-payment assistance, including from seller-funded nonprofits. We found that states that have higher-than-average percentages of FHA-insured loans with nonprofit down-payment assistance, primarily from seller-funded programs, tended to be states with lower-than-average house price appreciation rates. From May 2004 to April 2005, 35 percent of all FHA-insured purchase loans nationwide involved down-payment assistance from a nonprofit organization, and 15 states had percentages that were higher than this nationwide average. Fourteen of these 15 states also had house price appreciation rates that were below the median rate for all states. In addition, the eight states with the lowest house price appreciation rates in the nation all had higher-than-average percentages of nonprofit down-payment assistance. Generally, states with high proportions of FHA-insured loans with nonprofit down-payment assistance were concentrated in the Southwest, Southeast, and Midwest. The presence of down-payment assistance from seller-funded nonprofits can alter the structure of purchase transactions.
When buyers receive assistance from sources other than seller-funded nonprofits, the home purchase takes place like any other purchase transaction—buyers use the funds to pay part of the house price, the closing costs, or both, reducing the mortgage by the amount they pay and creating “instant equity.” However, seller-funded down-payment assistance programs typically require property sellers to make a financial contribution and pay a service fee after the closing, creating an indirect funding stream from property sellers to homebuyers that does not exist in a typical transaction (see fig. 1). Our analysis indicated, and mortgage industry participants we spoke with reported, that property sellers often raised the sales price of their properties in order to recover the contribution to the seller-funded nonprofit that provided the down-payment assistance. Marketing materials from seller-funded nonprofits often emphasize that property sellers using these down-payment assistance programs earn a higher net profit than property sellers who do not. These materials show sellers receiving a higher sales price that more than compensates for the fee typically paid to the down-payment assistance provider. Several mortgage industry participants we interviewed noted that when homebuyers obtained down-payment assistance from seller-funded nonprofits, property sellers increased their sales prices to recover their payments to the nonprofits providing the assistance. An earlier study by a HUD contractor corroborates the existence of this practice. Some mortgage industry participants we met with told us that they viewed down-payment assistance from seller-funded nonprofits as a seller inducement. However, FHA has not viewed such assistance as a seller inducement and therefore does not subject this assistance to the limits that it otherwise places on contributions from sellers.
Some mortgage industry participants told us that homes purchased with down-payment assistance from seller-funded nonprofits might be appraised for higher values than they would be without this assistance. Appraisers we spoke with said that lenders, realtors, and sellers sometimes pressured them to “bring in the value” in order to complete the sale. The HUD contractor study corroborates the existence of these pressures. Our analysis of a national sample of FHA-insured loans endorsed in 2000, 2001, and 2002 suggested that homes with seller-funded assistance were appraised and sold for about 3 percent more than comparable homes without such assistance. Additionally, our analysis of more recent loans—a sample of FHA-insured loans settled in March 2005—indicated that homes sold with nonprofit assistance were appraised and sold for about 2 percent more than comparable homes without nonprofit assistance. We found that FHA-insured loans with down-payment assistance do not perform as well as loans without it. As part of our evaluation, we analyzed loan performance by source of down-payment assistance, controlling for the maximum age of the loan, as of June 30, 2005. We used two samples of FHA-insured purchase loans from 2000, 2001, and 2002—a national sample and a sample from three MSAs with high rates of down-payment assistance. We grouped the loans into the following three categories: loans with assistance from seller-funded nonprofit organizations, loans with assistance from nonseller-funded sources, and loans without assistance. As shown in figure 2, in both samples and in each year, loans with down-payment assistance from seller-funded nonprofit organizations had the highest rates of delinquency and insurance claims, and loans without assistance the lowest.
Specifically, between 22 and 28 percent of loans with seller-funded assistance had experienced a 90-day delinquency, compared with 11 to 16 percent of loans with assistance from other sources and 8 to 12 percent of loans without assistance. The claim rates for loans with seller-funded assistance ranged from 6 to 18 percent, for loans with other sources of assistance from 5 to 10 percent, and for loans without assistance from 3 to 6 percent. In addition, we analyzed loan performance by source of down-payment assistance holding other variables constant. Here we found that FHA-insured loans with down-payment assistance had higher delinquency and claim rates than similar loans without such assistance (see fig. 3). The results from the national sample indicated that assistance from a seller-funded nonprofit raised the probability that the loan had gone to claim by 76 percent relative to similar loans with no assistance. Differences in the MSA sample were even larger; the probability that loans with seller-funded nonprofit assistance would go to claim was 166 percent higher than it was for comparable loans without assistance. Similarly, results from the national sample showed that down-payment assistance from a seller-funded nonprofit raised the probability of delinquency by 93 percent compared with the probability of delinquency in comparable loans without assistance. For the MSA sample, this figure was 110 percent. The weaker performance of loans with seller-funded down-payment assistance may be explained, in part, by the higher sales prices of homes bought with this assistance and the homebuyer having less equity in the transaction. The higher sales price that often results from a transaction involving seller-funded down-payment assistance can have the perverse effect of denying buyers any equity in their properties and creating higher effective LTV ratios.
FHA has requirements that have the effect of ensuring that FHA homebuyers obtain a certain amount of “instant equity” at closing, but seller-funded down-payment assistance effectively undercuts these requirements. That is, when the sales price represents the fair market value of the house, and the homebuyer contributes 3 percent of the sales price at the closing, the LTV ratio is less than 100 percent. But when a seller raises the sales price of a property to accommodate a contribution to a nonprofit that provides down-payment assistance to the buyer, the buyer’s mortgage may represent 100 percent or more of the property’s true market value. Our prior analysis has found that, controlling for other factors, high LTV ratios lead to increased claims. The adverse performance of loans with seller-funded down-payment assistance has had negative consequences for FHA. FHA has estimated that its single-family mortgage insurance program would require a subsidy—that is, appropriations—in 2008 in the absence of program changes. According to FHA, the growing share of FHA-insured purchase loans with seller-funded assistance has contributed to FHA’s worsening financial performance. Our 2005 report made recommendations designed to better manage the risks of loans with down-payment assistance generally and from seller-funded nonprofits specifically. We recommended that FHA consider risk mitigation techniques such as including down-payment assistance as a factor when underwriting loans. We also recommended that FHA take additional steps to mitigate the risk associated with loans with seller-funded down-payment assistance, such as treating such assistance as a seller inducement and therefore subject to the prohibition against using seller contributions to meet the 3 percent borrower contribution requirement.
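A hypothetical transaction illustrates how an inflated sales price can erase the buyer's instant equity; all dollar amounts below are invented for illustration:

```python
# How a seller-funded contribution can push the effective LTV toward or
# past 100 percent. All dollar amounts are hypothetical.

def effective_ltv(sales_price, down_payment, true_value):
    """Mortgage amount relative to the property's true market value."""
    mortgage = sales_price - down_payment
    return mortgage / true_value

# Typical transaction: price equals fair market value, buyer puts 3% down.
print(round(effective_ltv(100_000, 3_000, 100_000), 3))   # 0.97

# Seller raises the price ~3% to fund the buyer's down payment through a
# nonprofit; the mortgage now nearly equals the property's true value.
print(round(effective_ltv(103_000, 3_090, 100_000), 3))   # 0.999
```

In the second case the buyer's nominal 3 percent contribution is financed out of the inflated price, so the "instant equity" the requirement was meant to create largely disappears.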
Consistent with our recommendations, FHA is testing additional predictive variables, including source of the down payment, for inclusion in its mortgage scorecard (an automated tool that evaluates the default risk of borrowers). Additionally, in May 2007 HUD issued a proposed rule that would prohibit the use of seller-funded down-payment assistance in conjunction with FHA-insured loans. FHA also has been anticipating a reduction in the number of loans with down-payment assistance from seller-funded nonprofit organizations as a result of actions taken by the Internal Revenue Service (IRS). Citing concerns about seller-funded nonprofits raised by our report and the 2005 HUD contractor study, IRS issued a ruling in May 2006 stating that these organizations do not qualify as tax-exempt charities, thereby making loans with such assistance ineligible for FHA insurance. In a press announcement of the ruling, IRS stated that funneling down-payment assistance from sellers to buyers through “self-serving, circular-financing arrangements” is inconsistent with operation as a charitable organization. According to FHA, as of June 2007, IRS had rescinded the charitable status of three of the 185 organizations that IRS is examining. Madam Chairwoman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or shearw@gao.gov. Individuals making key contributions to this testimony included Steve Westley (Assistant Director), Emily Chalmers, Chris Krzeminski, and Andy Pauline. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Federal Housing Administration (FHA) differs from other key mortgage industry participants in that it allows borrowers to obtain down-payment assistance from nonprofit organizations (nonprofits) that operate programs supported partly by property sellers. Research has raised concerns about how this type of assistance affects home purchase transactions. To assist Congress in considering issues related to down-payment assistance, this testimony provides information from GAO's November 2005 report, Mortgage Financing: Additional Action Needed to Manage Risks of FHA-Insured Loans with Down Payment Assistance (GAO-06-24). Specifically, this testimony discusses (1) trends in the use of down-payment assistance with FHA-insured loans, (2) the impact that the presence of such assistance has on purchase transactions and house prices, and (3) the influence of such assistance on loan performance. The proportion of FHA-insured purchase loans that were financed in part by down-payment assistance increased from 35 percent to nearly 50 percent from 2000 through 2004. Assistance from nonprofit organizations that received at least part of their funding from property sellers accounted for much of this increase, growing from about 6 percent of FHA-insured purchase loans in 2000 to approximately 30 percent in 2004. More recent data indicate that in 2005 and 2006, the percentages of FHA-insured loans with down-payment assistance from all sources and from seller-funded nonprofits were roughly equivalent to 2004 levels. Assistance from seller-funded nonprofits alters the structure of the purchase transaction in important ways. First, because many seller-funded nonprofits require property sellers to make a payment to their organization, assistance from these nonprofits creates an indirect funding stream from property sellers to homebuyers. 
Second, GAO analysis indicated that FHA-insured homes bought with seller-funded nonprofit assistance were appraised at and sold for about 2 to 3 percent more than comparable homes bought without such assistance. Regardless of the source of assistance and holding other variables constant, GAO analysis indicated that FHA-insured loans with down-payment assistance have higher delinquency and insurance claim rates than do similar loans without such assistance. Furthermore, loans with assistance from seller-funded nonprofits do not perform as well as loans with assistance from other sources. This difference may be explained, in part, by the higher sales prices of comparable homes bought with seller-funded assistance and the homebuyers having less equity in the transaction.
Medicare, which is administered by CMS—an agency within HHS—is the federal program that helps pay for a variety of health care services and items on behalf of about 42 million elderly and certain disabled beneficiaries. Most Medicare beneficiaries participate in Part B, which helps pay for certain physician, outpatient hospital, laboratory, and other services; DMEPOS (such as oxygen, wheelchairs, hospital beds, walkers, orthotics, prosthetics, and surgical dressings); and certain outpatient drugs. Medicare pays 80 percent of the cost of services and items covered under Part B, and the beneficiary pays the balance. Beneficiaries typically obtain DMEPOS items from suppliers, who submit claims to Medicare on the beneficiaries’ behalf. Suppliers include medical equipment retail establishments, and also can include outpatient providers, such as physicians and physical therapists. DMEPOS suppliers are required by CMS to meet certain standards before they are authorized to bill Medicare. These standards are intended to ensure that suppliers engage in legitimate business practices and are licensed and qualified to provide DMEPOS items and services in the states in which they operate. CMS contracts with the National Supplier Clearinghouse (NSC) to screen potential suppliers and enroll those that comply with CMS standards into the Medicare program. In a previous report, we found that NSC’s efforts to verify compliance with the standards were insufficient to ensure that only legitimate and qualified suppliers could bill Medicare. DMEPOS claims are handled by CMS contractors who are responsible for processing and paying claims submitted to Medicare. To do this, they ensure that all necessary information is included on a claim. Claims processing contractors are responsible for paying DMEPOS claims and recouping any payments that have been made in error. Prior to January 2006, CMS contracted with four DMERCs to handle DMEPOS claims processing activities. 
Each DMERC was assigned to one of four geographic regions—Region A, B, C, or D—and was responsible for processing the DMEPOS claims of Medicare beneficiaries residing within its region. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 included provisions that required CMS to implement competitive procedures to replace DMERCs with DME MACs. In January 2006, CMS competitively selected four DME MACs from a pool of applicants and began to transition DMEPOS claims administration activities from the DMERCs to DME MACs. In Regions A and B, the transition of these claims processing activities was completed by July 1, 2006, but bid protests against the selection of the Region C and D DME MACs delayed transitions in these regions. As a result, claims processing activities did not transition in Region D until September 30, 2006, and, as of January 2007, the DMERC in Region C was continuing to process claims. DMEPOS program integrity activities are designed to protect the Medicare program from improper payments. These program integrity activities include medical reviews of claims and benefit integrity efforts. Medical review is the examination of information on a DMEPOS claim, as well as the examination of any supporting documentation associated with the claim, to determine if a beneficiary’s medical condition meets Medicare’s coverage criteria. Medical review can also include data analyses of submitted and paid DMEPOS claims to identify billing patterns that may be associated with improper Medicare payments. If medical review reveals that an overpayment was made to a supplier, the claims processing contractor that paid the claim is responsible for collecting the overpayment from the supplier. Medical review findings also help CMS contractors determine what instruction they may need to provide to DMEPOS suppliers to inform them about Medicare program rules and proper DMEPOS billing. 
Medical review often results from contractors’ use of edits to identify claims that require scrutiny, and it can be performed before or after payment. Benefit integrity is the investigation of suspected fraud and the referral of suppliers to law enforcement for further investigation and prosecution. In addition, benefit integrity activities include data analysis of DMEPOS claims to identify improper billing that may indicate fraud. Prior to March 1, 2006, all medical review and benefit integrity activities within Regions B, C, and D were conducted by each region’s DMERC. In Region A, these activities were conducted by a PSC. As of March 1, 2006, the PSC in Region A also became responsible for conducting the medical review and benefit integrity activities for Region B. In Regions C and D, CMS selected two other PSCs—one for each region—to conduct the medical review and benefit integrity activities in each respective region. The PSC for each region is responsible for partnering with its region’s claims processing contractor when conducting medical review and benefit integrity activities. By March 1, 2006, the transition of medical review and benefit integrity activities from the DMERCs to the PSCs was completed. Table 1 provides a summary of DMEPOS claims processing and program integrity activities and the associated contractor types for these activities, as of January 2007. In addition to the contractors mentioned above, the SADMERC performs analyses of national data on paid Medicare DMEPOS claims. The SADMERC develops reports for CMS, CMS contractors, and law enforcement to identify trends in payment and potential fraud. It often focuses its analyses by examining a particular DMEPOS item, supplier, or referring physician, or by analyzing claims in a specific region or other geographic area. 
Under CMS’s direction, its contractors conduct program integrity activities, such as developing the automated prepayment controls known as edits that check claims before payment, and performing benefit integrity tasks. However, the contractors’ edits fell short in preventing improper payments from being made. Specifically, the contractors did not have edits that flagged atypical billing or consistently identified claims that were medically improbable, and the contractors also did not routinely share their successful edits with the other contractors. Further, as a key aspect of the benefit integrity activities, contractors provided case referrals about suppliers to help law enforcement agencies investigate and prosecute Medicare fraud. However, law enforcement officials stated that case referrals would be more useful if they were based on more recent information. PSCs in each region analyze data on claims that have been paid in order to identify potentially improper ones, which can be evidenced by atypical billing patterns—such as a rapid growth in payments for a particular DMEPOS item or provider. They also use results from CMS’s annual study of improperly paid claims to identify items at risk of improper payment in their respective regions. The PSCs decide on their approach to addressing potentially improper claims based on the level of their resources and the scope of the identified problems in their regions. Each PSC’s approach is detailed in its annual “medical review strategy,” submitted to CMS for approval. Due to the specific problems identified in each region, the PSCs’ medical review strategies can differ. As part of its strategy, each PSC is required to design a comprehensive plan detailing how it will address each problem it identifies, and reduce the rate of errors in claims payment. PSCs continuously update their strategy as improper payment problems are resolved and new ones are discovered. 
To prevent and minimize improper payments for DMEPOS, PSCs rely on automated prepayment controls—called edits. Edits automatically check claims before payment to make sure that they appear to be valid. PSCs are responsible for developing and implementing a specific type of edit, called a medical review edit. Medical review edits specifically allow a PSC to check that an item on a claim appears medically necessary for the beneficiary under Medicare’s coverage criteria. Medical review edits can either lead to the automatic denial of an improper claim, or subject a claim to a manual review. For example, a medical review edit could be established to automatically deny any claim submitted for specific items for a beneficiary if it had been determined that the beneficiary’s Medicare number was used repeatedly on claims from different suppliers for DMEPOS items that the beneficiary did not need. Alternatively, medical review edits can flag claims for manual medical review before payment, which requires that a PSC reviewer examine data on the claim, along with any related supporting documentation. The reviewer determines whether to allow the claim to continue through the payment process, obtain more documentation, or deny the claim. We identified three gaps in medical review edits that could lead to improper payments. First, DMERCs and the Region A PSC generally did not have medical review edits in place to identify claims associated with atypical billing patterns. Such billing patterns involve rapid or dramatic increases in the billed amounts of claims. Atypical billing patterns can involve legitimate claims, when, for example, CMS expands the coverage rules for an item or service. However, atypical billing patterns have often been associated with improper claims and payments. 
Atypical billing patterns can appear with claims (1) submitted by a particular supplier, (2) covering a particular DMEPOS item, (3) based on referrals from the same prescribing physician, (4) submitted on behalf of a particular beneficiary, or (5) associated with atypical billing that is clustered in a particular geographic area. The DMERC and PSC officials we interviewed told us that they did not use medical review edits that would routinely flag claims that had reached predesignated thresholds—such as ones that would signal an unusually large increase in payment to a supplier. One contractor indicated that, depending on the threshold set, introducing these types of edits could allow too many claims to be flagged for medical review. In the absence of threshold edits to avoid paying improper claims associated with atypical billing patterns, the DMERCs paid claims that represented large increases over historical billing amounts submitted. For example, we found that from the first quarter of 2003 through the first quarter of 2005, 225 suppliers increased their billing to Medicare by $500,000 and 50 percent from at least one 3-month period to the next. At least 38 of the 225 suppliers were under criminal investigation during 2004. In November 2004, the U.S. government won a default civil judgment of $366 million against 16 of these suppliers. These suppliers had billed for services not rendered and committed other offenses, and they had been paid almost $40 million from January 2003 through September 2004. As of December 2006, DOJ had collected about $738,000 from suppliers involved in the case. HHS OIG investigators in Miami told us that it was not uncommon for fraudulent suppliers to close up their businesses at the first sign of an investigation or to quickly move their Medicare payments out of their accounts in ways that are difficult to track. 
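A threshold edit of the kind described above, one that flags a supplier whose billing to Medicare grows by both $500,000 and 50 percent from one 3-month period to the next, could be sketched as follows. The data layout and function name are hypothetical and not CMS's or the contractors' actual edit logic; the dollar and percentage thresholds come from the billing growth described in this report.

```python
# Hypothetical sketch of a quarterly billing-threshold screen.
# Thresholds mirror the growth pattern described in the report.
DOLLAR_THRESHOLD = 500_000   # absolute quarter-over-quarter increase
PERCENT_THRESHOLD = 0.50     # relative quarter-over-quarter increase

def flag_atypical_growth(quarterly_billing):
    """Return quarters in which a supplier's billed amount grew by both
    the dollar and percentage thresholds over the prior quarter.

    quarterly_billing: ordered list of (quarter_label, billed_dollars).
    """
    flagged = []
    for (prev_q, prev_amt), (cur_q, cur_amt) in zip(
            quarterly_billing, quarterly_billing[1:]):
        if prev_amt > 0:
            growth = cur_amt - prev_amt
            if (growth >= DOLLAR_THRESHOLD
                    and growth / prev_amt >= PERCENT_THRESHOLD):
                flagged.append(cur_q)
    return flagged

# Example: billing jumps from $600,000 to $1.2 million in one quarter
history = [("2003Q1", 600_000), ("2003Q2", 1_200_000), ("2003Q3", 1_250_000)]
print(flag_atypical_growth(history))  # prints ['2003Q2']
```

In practice, as one contractor noted, the thresholds chosen would govern how many claims are routed to manual medical review, so they would need tuning against reviewer capacity.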
By the time law enforcement can act against fraudulent suppliers, much of the money gained from Medicare has disappeared and cannot be recouped. We found that contractors paid claims that were medically improbable because they did not have edits to flag them. Such claims represent items unlikely to be prescribed, or unlikely to be prescribed in the quantity billed, for a beneficiary as part of routine quality care. In conjunction with the SADMERC, we identified three instances where medically improbable claims were routinely being paid by Medicare for more than a year. For example, if a Medicare beneficiary has a foot amputated, that person would usually need a prosthetic foot for that limb. As a result, the beneficiary should not also need a brace for a limb that no longer exists. From October 2002 through March 2005, Medicare paid over $2 million for beneficiaries’ braces after the program had paid for prosthetics within the last year for the same beneficiaries’ legs, feet or ankles. (See table 2 for two other examples.) A SADMERC official told us that the contractors could develop edits for medically improbable circumstances that could avoid improper payments. In recognition of the value of edits to detect medically improbable claims, CMS has begun a process to have its contractors implement such edits. In January 2007, the agency plans to introduce 19 edits for DMEPOS items, albeit not for the items described in table 2. These 19 edits will deny claims for DMEPOS items if a medically improbable quantity of the item is listed on the claim for a single beneficiary in one day. The agency plans to introduce additional edits for more DMEPOS items and other services later in 2007. Finally, CMS does not require its contractors to share information on their edits with contractors in other regions or adopt edits that have been effective in other contractors’ regions. CMS requires each of its contractors to develop and maintain its own edits. 
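One way a medically improbable edit like the brace-after-prosthetic example above might be implemented is sketched below. The claim fields, item labels, and one-year lookback window are illustrative assumptions, not the contractors' actual edit logic or Medicare's coding.

```python
# Illustrative prepayment edit for a medically improbable combination:
# deny a lower-limb brace claim when Medicare paid for a lower-limb
# prosthetic for the same beneficiary within the prior year.
from datetime import date, timedelta

LOOKBACK = timedelta(days=365)  # assumed one-year window

def deny_improbable_brace(claim, paid_claims):
    """Return True if the brace claim should be denied because a
    prosthetic was paid for the same beneficiary in the lookback window."""
    if claim["item"] != "lower_limb_brace":
        return False
    return any(
        p["beneficiary_id"] == claim["beneficiary_id"]
        and p["item"] == "lower_limb_prosthetic"
        and p["service_date"] <= claim["service_date"]
        and claim["service_date"] - p["service_date"] <= LOOKBACK
        for p in paid_claims
    )

paid = [{"beneficiary_id": "B1", "item": "lower_limb_prosthetic",
         "service_date": date(2004, 3, 1)}]
new_claim = {"beneficiary_id": "B1", "item": "lower_limb_brace",
             "service_date": date(2004, 9, 15)}
print(deny_improbable_brace(new_claim, paid))  # prints True
```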
Contractors are free to adopt or eliminate edits at their discretion based on such factors as the effectiveness of an edit in reducing improper payments, the added cost of implementing and maintaining an edit, and the presence or absence of other, more costly, improper payments. CMS officials we spoke with told us that CMS expects contractors to add edits at their own discretion, based on their resources. CMS maintains a database through which contractors provide information to the agency on the effectiveness of their edits. At present, contractors do not have access to other contractors’ information in the database. Our analysis found that if contractors were to adopt edits that have been effective in other contractors’ regions, they could likely reduce their improper payments. For example, in 2005, the DMERC in Region C had an edit in place to restrict payment for the same or similar types of home-use hospital beds to one item per month per beneficiary, by automatically denying any additional claims submitted for these items. Our analysis identified a potential savings within Region C of $50.7 million from January 1, 2003, through June 30, 2005. Based on the claims submitted over this time period in the other three regions, we found that this edit could have generated an additional savings of up to $70.6 million if it had been implemented in the other three regions. Overall, our analysis of a sample of seven edits—selected from a list of automated edits that was provided in response to our request and included edits estimated to be the most effective by the contractors that developed them—found that each contractor had edits that could have denied up to an additional $74.1 million in claims from January 2003 through June 2005, had all seven edits been used by each contractor. 
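In outline, the Region C frequency edit described above, allowing one same-or-similar home-use hospital bed per beneficiary per month and automatically denying additional claims, might look like the following sketch. The data layout and field names are hypothetical.

```python
# Sketch of a one-item-per-beneficiary-per-month frequency edit,
# modeled loosely on the Region C hospital-bed edit described above.
from collections import defaultdict

def apply_monthly_frequency_edit(claims, item_group):
    """Allow one claim per beneficiary per calendar month for item_group;
    mark additional claims denied. Claims are processed in order."""
    seen = defaultdict(set)  # beneficiary_id -> {(year, month), ...}
    decisions = []
    for c in claims:
        month_key = (c["service_date"].year, c["service_date"].month)
        if (c["item_group"] == item_group
                and month_key in seen[c["beneficiary_id"]]):
            decisions.append((c["claim_id"], "deny"))
        else:
            if c["item_group"] == item_group:
                seen[c["beneficiary_id"]].add(month_key)
            decisions.append((c["claim_id"], "pay"))
    return decisions

from datetime import date
claims = [
    {"claim_id": 1, "beneficiary_id": "B1", "item_group": "hospital_bed",
     "service_date": date(2004, 5, 2)},
    {"claim_id": 2, "beneficiary_id": "B1", "item_group": "hospital_bed",
     "service_date": date(2004, 5, 20)},
]
print(apply_monthly_frequency_edit(claims, "hospital_bed"))
# prints [(1, 'pay'), (2, 'deny')]
```

Because such a rule is purely a function of claim history, a contractor in another region could adopt it without any region-specific tuning, which is the point of the cross-region savings estimates above.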
Under their benefit integrity responsibilities, PSCs are expected to identify and investigate cases of suspected fraud within their regions and refer these cases to law enforcement for further investigation and prosecution. A PSC’s investigation can include examining medical and other records associated with a particular claim or claims, questioning beneficiaries about whether they received items that were billed, and conducting site visits to suppliers’ facilities. PSCs also use analysis of claims data to look for atypical billing patterns and other factors that may indicate fraud, such as the number of complaints against, or prior investigations of, a supplier. PSCs are required by CMS to refer cases of suspected fraud to the HHS OIG for further investigation. PSCs are also required to support law enforcement’s investigation and prosecution of fraud by providing supplier and beneficiary information and other relevant case-related data, as requested by law enforcement entities. Along with these tasks, the PSC statements of work outline other required activities, including participating in regular case-related contact with law enforcement, coordinating and participating in antifraud conferences and related gatherings, updating a national database maintained by CMS that tracks Medicare fraud, and providing educational programs for law enforcement on contractor operations and Medicare issues. Prior to the transfer of benefit integrity activities to PSCs on March 1, 2006, DMERCs were responsible for these activities in three of the four regions. In the fourth region—Region A—a PSC was responsible for these activities prior to this date. Our analysis of CMS contractor benefit integrity performance evaluations from 2001 through 2005—the most recent years for which these evaluations were available—generally found few serious problems. 
According to these evaluations, the PSC in Region A and the DMERCs in Regions B and C met most or all of CMS’s benefit integrity requirements in all years, with any problems identified by these evaluations labeled as “minor.” The DMERC in Region D—which no longer holds this contract— met all benefit integrity requirements in two recent evaluation periods (which covered October 1, 2003, through May 31, 2004, and October 1, 2004, through April 15, 2005). However, in three earlier evaluation periods preceding October 1, 2003, CMS found “major” problems relating to the DMERC’s case referral activities, such as less than timely development of cases and lack of documentation to support case files. Despite the PSC’s and DMERCs’ positive evaluations by CMS in recent years, law enforcement officials we spoke with stated that the contractors could have done more to support law enforcement activities. For example, law enforcement officials we interviewed in Miami and Southern California told us that, while they were satisfied with the quality of information presented in the case referrals, the case files often pertained to fraud that had occurred too far in the past to be effectively investigated by the time the referral was received. The Los Angeles FBI office as well as the U.S. Attorney’s office responsible for prosecuting Medicare fraud in the Los Angeles area (Region D) told us that the typical case referral submitted to the office for prosecution in 2005 related to suspect suppliers whose peak billing activity occurred during 2003. The Miami FBI office and the U.S. Attorney’s office responsible for prosecuting Medicare fraud in the Miami area expressed similar concerns on the timeliness of case referrals. 
Law enforcement officials explained that when case referrals are made after a supplier is no longer in business, investigating and prosecuting the suspected fraud is difficult or even impossible because law enforcement may not be able to locate the company’s owners, its records, or the Medicare funds it received. Law enforcement officials we interviewed did not cite a single cause for the delays in contractor referrals. Officials in Los Angeles attributed the delays to a lack of on-site contractor presence in the Los Angeles area and to contractor over-emphasis on producing polished referrals. Officials in Miami attributed the delays to the referral process itself, citing too many steps in the process, and some officials were uncertain as to the cause. When we discussed these issues with CMS officials, however, they did not raise concerns about the DMERCs’ and PSC’s effectiveness in supporting law enforcement with comprehensive and timely referrals. On the contrary, the officials we interviewed expressed satisfaction with the DMERCs’ and PSC’s past performance. CMS has various means of overseeing PSCs’ program integrity efforts. To establish expectations and guidelines for the PSCs, and to monitor their program integrity efforts, CMS relies on PSC statements of work, the PIM, and PSCs’ reports on their activities. The PSC statements of work contain general information about the agency’s expectations for the PSCs, including a list of deliverables that each one is required to provide to CMS. The PIM establishes the requirements and guidance that the PSCs must follow when conducting their program integrity activities. In addition, CMS staff monitor the PSCs’ reports about their activities. Examples of these reports include updated medical review strategies and updates about the types of information requested by law enforcement for its use in investigating and prosecuting suppliers. 
After reviewing a contractor’s reports, CMS may suggest changes to a PSC, such as adjustments to its medical review strategy. In addition, CMS has developed plans for annually evaluating the PSCs’ program integrity activities and is in the process of implementing these evaluations. CMS has developed three evaluation tools to assess each PSC’s (1) general performance, (2) performance in conducting medical review, and (3) performance in conducting benefit integrity activities. The criteria used in each of the three evaluation tools reflect the responsibilities described in the PIM and the PSCs’ statements of work. In May and June of 2006, CMS conducted an initial evaluation of the first several months of the three PSCs’ work, using the general performance evaluation tool. In May and June of 2007, CMS will conduct the first of its planned annual, comprehensive, full-year evaluations of each PSC, including assessments of its medical review and benefit integrity efforts. CMS officials said that the agency will use the results to decide whether to renew a PSC’s contract. The officials also said that CMS will use these results to determine whether a PSC may earn award fees—a monetary reward for good performance—in addition to the regular payments it receives under its contract. The general performance evaluation tool is intended to assess the PSCs in four overall areas: (1) the quality of their work and work products; (2) their success in completing their work within an agreed-upon budget; (3) their ability to provide work products on time; and (4) their ability to develop and maintain productive business relationships with law enforcement and suppliers. The medical review evaluation tool is intended to assess PSC performance in reviewing claims before and after payment. 
For example, the tool is designed to assess the degree to which a PSC reviewed claims in accordance with the medical review strategy that the PSC established for that year, and that had been approved by CMS. The tool also is intended to verify the accuracy of medical review for each PSC by using a sample of five claims that had received medical review from the respective PSC. CMS officials told us that they are currently in the process of determining whether a broader measure of a region’s improper payments will be reflected in the evaluations of PSC performance in the future. The benefit integrity evaluation tool is intended to assess a PSC’s investigations of suppliers suspected of fraud, development of supplier case referrals for the HHS OIG, and assistance to law enforcement. For instance, the benefit integrity evaluation tool requires evaluators to assess whether a PSC maintains a documented audit trail of the actions it has taken for each supplier investigation initiated. It also requires an assessment of whether a PSC’s case referrals to the HHS OIG include all of the elements for law enforcement to pursue an investigation. When CMS and its contractors fall short in protecting the Medicare program, hundreds of millions of dollars can be lost to improper payments for DMEPOS. The agency and its contractors conduct a number of program integrity activities designed to prevent and minimize improper payments for DMEPOS. However, we found that CMS’s contractors did not have sufficient automated prepayment controls to flag claims that are part of unexplained increases in billing, or that were medically improbable. Currently, the PSCs and DME MACs are not required to exchange information about their successful automated prepayment controls that could be effective in other regions. 
While PSCs have the flexibility to implement prepayment controls that they consider to be the most effective for their region, knowing about effective controls in other regions could provide useful information when developing their own. CMS’s recent initiative to add automated prepayment controls that would deny certain medically improbable claims is a positive step towards reducing improper DMEPOS payments. We recommend that the Administrator of CMS take two actions: Require the PSCs to develop thresholds for unexplained increases in billing—and use them to develop automated prepayment controls as one component of their manual medical review strategies. Require the DME MACs, DMERC, and PSCs to exchange information on their automated prepayment controls, and have each of these contractors consider whether the automated prepayment controls developed by the others could reduce their incidence of improper payments. CMS provided comments on a draft of this report, agreed with both of our recommendations, and stated that it has begun efforts to address them. Specifically, CMS agreed with our recommendation to require PSCs to develop thresholds for unexplained increases in billing and use them in developing their automated prepayment controls. CMS responded that it would build upon existing PSC processes for identifying billing increases and would work to improve contractors’ automated prepayment controls. CMS also discussed a related initiative it has begun to automatically deny or automatically suspend payment for services billed in excess of medically probable amounts. CMS stated that this initiative will address some of the issues that we raised in our report. We consider this initiative to be one important aspect of preventing improper payments for DMEPOS. 
CMS also agreed with our recommendation to require the DME MACs, DMERC, and PSCs to exchange information on their automated prepayment controls and to have each of these contractors consider whether the controls developed by the others could reduce their incidence of improper payments. CMS responded that these contractors’ Joint Operating Agreements (JOA) provide a means through which information can be shared among them, and stated that it believes the contractors are currently coordinating their automated prepayment control processes. CMS also said it would review the JOAs to ensure that information-sharing requirements are clear and are being followed by the contractors. This would be a good first step towards ensuring that information sharing occurs and that the contractors are considering the prepayment controls of other contractors when developing their own prepayment controls. CMS’s comments appear in appendix III. We provided DOJ with a draft of this report for its review. DOJ provided us with technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. We will then send a copy of this report to the Secretary of HHS, the Administrator of CMS, and the Attorney General, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (312) 220-7600 or aronovitzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
Region A - DME MAC: National Heritage Insurance Company; PSC: TriCenturion, LLC. Region B - DME MAC: AdminaStar Federal, Inc.; PSC: TriCenturion, LLC. Region C - DME MAC: To be announced; PSC: TrustSolutions, LLC. Region D - DME MAC: Noridian Administrative Services, LLC; PSC: Electronic Data Systems Corp. To discuss the program integrity activities of the Centers for Medicare & Medicaid Services (CMS) and its contractors to prevent and minimize improper payments made for durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS), we reviewed aspects of the contractors’ medical review and benefit integrity responsibilities. We reviewed the automated prepayment controls—called edits—that contractors introduce into their payment systems to deny claims or flag them for medical review, and contractors’ benefit integrity activities. We included edits because they are generally the contractors’ first line of defense for avoiding payment of improper claims. We did not evaluate other aspects of medical review, which can include analysis or examination of claims after payment, but we discuss these functions in relation to automated prepayment controls and benefit integrity activities. We also included benefit integrity efforts—such as referring potential cases to law enforcement—because these efforts allow contractors to enlist federal law enforcement agencies to act against suppliers who have defrauded Medicare. As part of our work, we reviewed related GAO reports and CMS’s Medicare Program Integrity Manual (PIM), which establishes CMS’s guidelines for contractors’ program integrity activities. We also conducted interviews with CMS officials responsible for safeguarding Medicare, as well as contractor officials responsible for program integrity activities in three of the four DMEPOS regions—Regions A, C, and D. 
These contractor officials included staff at the outgoing Durable Medical Equipment Regional Carriers (DMERC) for Regions C and D, and the incoming Program Safeguard Contractors (PSC) for Regions A, C, and D. We interviewed staff at the incoming Durable Medical Equipment Medicare Administrative Contractor (DME MAC) for Region A, which had the only DME MAC contract within our selected regions that had been implemented at the time of our interviews. We also interviewed contractor staff at the Statistical Analysis Durable Medical Equipment Regional Carrier (SADMERC)—a contractor which is responsible for performing statistical analyses on national and regional DMEPOS billing data to identify potential fraud. In order to specifically review edits, we analyzed national Medicare DMEPOS claims data on atypical billing trends for suppliers and items for the first quarter of 2003 through the first quarter of 2005 generated by SADMERC. We performed further analyses on individual Medicare DMEPOS claims data from the first quarter of 2003 through the second quarter of 2005 from five states—California, Florida, Illinois, New York, and Texas. We also obtained data from the National Supplier Clearinghouse (NSC)—a contractor which is responsible for enrolling suppliers in Medicare and revoking the billing privileges of suppliers who do not comply with program guidelines. We used the NSC data to obtain information on the geographic location of the suppliers’ companies, such as by zip code and state, and to inform us as to whether the Medicare billing privileges of certain suppliers were considered by the NSC to be active, inactive, or revoked, as of October 3, 2005. In addition, we used other analyses performed by SADMERC on national DMEPOS claims data to simulate how many dollars might have been saved for periods of time from 2002 through 2005 by adding certain edits into the payment system to identify potential improper payments. 
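A retrospective savings simulation of the kind described above, replaying a candidate edit against historical paid claims and totaling what the edit would have denied, can be sketched as follows. The claim fields and the example rule are assumptions for illustration, not the SADMERC's actual methodology.

```python
# Hypothetical sketch of simulating an edit's savings: apply a
# prepayment rule to already-paid claims and sum the paid amounts
# the rule would have denied.
def simulate_savings(paid_claims, would_deny):
    """Sum paid dollars on historical claims the edit would have denied."""
    return sum(c["paid_amount"] for c in paid_claims if would_deny(c))

# Example rule (illustrative only): deny any claim billing more than
# two units of an item in a single day.
rule = lambda c: c["units"] > 2
history = [
    {"claim_id": 1, "units": 1, "paid_amount": 150.0},
    {"claim_id": 2, "units": 5, "paid_amount": 900.0},
]
print(simulate_savings(history, rule))  # prints 900.0
```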
We assessed the reliability of the data sets used for these analyses by reviewing documentation related to each data set, and we determined that each was sufficiently reliable to address the issues in this report. In order to specifically describe contractors’ benefit integrity efforts, we interviewed law enforcement officials on both the national and local levels who are responsible for investigating and prosecuting such cases, and for coordinating their efforts with the CMS contractors. The officials we interviewed included those from Department of Health and Human Services (HHS) Office of Inspector General (OIG), who receive suspected fraud cases from Medicare contractors and may opt to investigate the cases further; the Federal Bureau of Investigation (FBI), which may opt to assist in the investigation of Medicare fraud cases or open an independent investigation on cases for which the HHS OIG has decided not to open an investigation; and U.S. Attorney’s offices, which are responsible for the prosecution of Medicare fraud cases. In addition to interviewing headquarters officials from these organizations, we also interviewed local law enforcement officials from these agencies in Los Angeles, California; Miami, Florida; and New York City, New York. To describe CMS’s oversight of its PSCs’ program integrity efforts, we reviewed the PIM, and the PSCs’ statements of work, which describe the terms of the PSC contracts. We also read CMS’s PSC performance evaluation tools, and interviewed CMS officials about PSC oversight. In addition, we interviewed PSC contractors about CMS’s oversight of its PSCs. We performed our work from June 2005 through January 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, Sheila K. Avruch, Assistant Director; Ramsey L. Asaly; Kevin Dietz; Krister P. Friday; Kelli A. Jones; Joy L. Kraybill; Suzanne M. Post; and Craig Winslow made key contributions to this report. 
Medicare Integrity Program: Agency Approach for Allocating Funds Should Be Revised. GAO-06-813. Washington, D.C.: September 6, 2006. Medicare Payment: CMS Methodology Adequate to Estimate National Error Rate. GAO-06-300. Washington, D.C.: March 24, 2006. Medicare: More Effective Screening and Stronger Enrollment Standards Needed for Medical Equipment Suppliers. GAO-05-656. Washington, D.C.: September 22, 2005. Medicare Contracting Reform: CMS’s Plan Has Gaps and Its Anticipated Savings Are Uncertain. GAO-05-873. Washington, D.C.: August 17, 2005. Health Care Fraud and Abuse Control Program: Results of Review of Annual Reports for Fiscal Years 2002 and 2003. GAO-05-134. Washington, D.C.: April 29, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. Medicare: CMS’s Program Safeguards Did Not Deter Growth in Spending for Power Wheelchairs. GAO-05-43. Washington, D.C.: November 17, 2004. Medicare: CMS Did Not Control Rising Power Wheelchair Spending. GAO-04-716T. Washington, D.C.: April 28, 2004.
The Centers for Medicare & Medicaid Services (CMS)—the agency that administers Medicare—estimated that the program made about $700 million in improper payments for durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS) from April 1, 2005, through March 31, 2006. To protect Medicare from improper DMEPOS payments, CMS relies on three Program Safeguard Contractors (PSC), and four contractors that process Medicare claims, to conduct critical program integrity activities. GAO was requested to examine CMS's and CMS's contractors' activities to prevent and minimize improper payments for DMEPOS, and describe CMS's oversight of PSC program integrity activities. To do this, GAO analyzed DMEPOS claims data by supplier and item to identify atypical, or large, increases in billing; reviewed CMS documents; and conducted interviews with CMS and contractor officials. GAO focused its work on contractors' automated prepayment controls and described related claims analysis functions. To prevent and minimize improper DMEPOS payments, CMS's contractors conduct program integrity activities, which include performing medical reviews of certain claims before they are paid to determine whether the items meet criteria for Medicare coverage. As part of their efforts, CMS's contractors responsible for medical review use automated prepayment controls to deny claims that should not be paid or identify claims that should be reviewed. However, GAO found three shortfalls in these automated prepayment controls that make the Medicare program vulnerable to improper payments. (1) Contractors responsible for medical review did not have automated prepayment controls in place to identify questionable claims that are part of an atypically rapid increase in billing. (2) In some instances, these contractors did not have automated prepayment controls in place to identify claims for items unlikely to be prescribed in the course of routine quality medical care. 
CMS has recently begun an initiative to add controls of this kind for some DMEPOS items. (3) CMS does not require these contractors to share information on the most effective automated prepayment controls of the other contractors or consider adopting them. For example, Medicare might have saved almost $71 million in less than 2 years if one effective automated prepayment control designed to prevent Medicare from paying for more than one home-use hospital bed per month for a beneficiary, which was used by one of these contractors, had been used by the others. CMS oversees the PSCs' program integrity activities by providing written manuals and contracts to guide their work. As part of its oversight, CMS is implementing an annual contractor performance evaluation process, based on three evaluation tools, to assess each PSC's performance. CMS officials said that the agency will use the results of these evaluations to determine two things: whether to renew a PSC's contract, and whether a PSC may earn award fees—a monetary reward for good performance—in addition to the regular payments it receives under its contract.
As we testified in July 2007, EPA’s actions in response to our previous recommendations suggest the need for measurable benchmarks—both to serve as goals to strive for in achieving environmental justice in its rulemaking process, and to hold cognizant officials accountable for making meaningful progress. In commenting on our draft 2005 report, EPA disagreed with the four recommendations we made, saying it was already paying appropriate attention to environmental justice. A year later, in its August 24, 2006 letter to the Comptroller General, EPA responded more positively to our recommendations and committed to taking a number of actions to address these issues. Specifically, EPA’s letter stated: In response to our first recommendation, calling upon EPA’s rulemaking workgroups to devote attention to environmental justice while drafting and finalizing clean air rules, EPA responded that, to ensure consideration of environmental justice in the development of regulations, its Office of Environmental Justice was made an ex officio member of the agency’s Regulatory Steering Committee, the body that oversees regulatory policy for EPA and the development of its rules. EPA also said that (1) the agency’s Office of Policy, Economics and Innovation (responsible in part for providing support and guidance to EPA’s program offices and regions as they develop their regulations) had convened an agency-wide workgroup to consider where environmental justice might be considered in rulemakings and (2) it was developing “template language” to help rule writers communicate findings regarding environmental justice in the preamble of rules. In addition, EPA officials emphasized that its Tiering Form––a key form completed by workgroup chairs to alert senior managers to the potential issues related to compliance with statutes, executive orders, and other matters––would be revised to include a question on environmental justice. 
In response to our second recommendation, calling on EPA to provide workgroup members with guidance and training to help them identify potential environmental justice problems and involve environmental justice coordinators in the workgroups when appropriate, EPA said it was creating a comprehensive curriculum to meet the needs of agency rule writers. Specifically, EPA explained that its Office of Policy, Economics, and Innovation was focusing on how best to train agency staff to consider environmental justice during the regulation development process and that its Office of Air and Radiation had already developed environmental justice training tailored to the specific needs of that office. Among other training opportunities highlighted in the letter was a new on-line course offered by its Office of Environmental Justice to address a broad range of environmental justice issues. EPA also cited an initiative by the Office of Air and Radiation’s Office of Air Quality Planning and Standards to use a regulatory development checklist to ensure that potential environmental justice issues and concerns are considered and addressed at each stage of the rulemaking process. In response to our call for greater involvement of Environmental Justice coordinators in workgroup activities, EPA said that as an ex officio member of the Regulatory Steering Committee, the Office of Environmental Justice would keep the program office environmental justice coordinators informed about new and ongoing rulemakings with potential environmental justice implications via monthly conference calls with the environmental justice coordinators. 
In response to our third recommendation, calling on the EPA Administrator to identify the data and develop the modeling techniques needed to assess potential environmental justice impacts in economic reviews, EPA responded that its Office of Air and Radiation was reviewing information in its air models to assess which demographic data could be analyzed to predict possible environmental justice effects. EPA also stated it was considering additional guidance to address methodological issues typically encountered when examining a proposed rule’s impacts on subpopulations highlighted in the executive order. Specifically, EPA discussed creating a handbook that would discuss important methodological issues and suggest ways to properly screen and conduct more thorough environmental justice analyses. Finally, it noted that the Office of Air and Radiation was assessing models and tools to (1) determine the data required to identify communities of concern, (2) quantify environmental health, social and economic impacts on these communities, and (3) determine whether these impacts are disproportionately high and adverse. In response to our fourth recommendation, calling on the EPA Administrator to direct cognizant officials to respond more fully to public comments on environmental justice by, for example, better explaining the rationale for EPA’s beliefs and by providing supporting data, EPA said that as a matter of policy, the agency includes a response to comments in the preamble of a final rule or in a separate “Response to Comments” document in the public docket for its rulemakings. The agency noted, however, that it will re-emphasize the need to respond to comments fully, to include the rationale for its regulatory approach, and to better describe its supporting data. 
However, more recent information from agency officials indicates that EPA’s handling of environmental justice issues continues to fall short of our recommendations and the goals set forth in Executive Order 12898. In July 2007, we met with EPA officials to obtain current information on EPA’s environmental justice activities, focusing in particular on those most relevant to our report’s recommendations. Specifically:

Regarding our first recommendation that workgroups consider environmental justice while drafting and finalizing regulations, the Office of Environmental Justice has not participated directly in any of the 103 air rules that have been proposed or finalized since EPA’s August 2006 letter. According to EPA officials, the Office of Environmental Justice did participate in one workgroup of the Office of Solid Waste and Emergency Response, and provided comments on the final agency review for the Toxic Release Inventory Reporting Burden Reduction Rule. In addition, EPA explained that the inclusion of environmental justice on its Tiering Form has been delayed because it is only one of several issues being considered for inclusion in the tiering process.

Regarding our second recommendation to improve training and include Environmental Justice coordinators in workgroups when appropriate, our latest information on EPA’s progress shows mixed results. On the one hand, EPA continues to provide an environmental justice training course that began in 2002, and has included environmental justice in recent courses to help rule writers understand how environmental justice ties into the rulemaking process. On the other hand, some training courses that were planned have not yet been developed. Specifically, the Office of Policy, Economics, and Innovation has not completed the planned development of training on ways to consider environmental justice during the regulation development process. 
In addition, officials from EPA’s Office of Air and Radiation told us in July that they were unable to develop environmental justice training––training EPA told us in 2006 that it had already developed––due to staff turnover and other reasons.

Regarding our recommendation to involve the Environmental Justice coordinators in rulemaking workgroups when appropriate, EPA officials told us that active, hands-on participation by Environmental Justice coordinators in rulemakings has yet to occur.

Regarding our third recommendation that EPA identify the data and develop modeling techniques to assess potential environmental justice impacts in economic reviews, EPA officials said that their data and models have improved since our 2005 report, but that the models have not yet reached the level of sophistication needed for environmental justice analyses. EPA officials said that to understand how development of a rule might affect environmental justice for specific communities, further improvements are needed in modeling, and more specific data are needed about the socio-economic, health, and environmental composition of communities. Only when they have achieved such modeling and data improvements can they develop guidance on conducting an economic analysis of environmental justice issues. According to EPA, among other things, economists within the Office of Air and Radiation are continuing to evaluate and enhance their models in a way that will further improve consideration of environmental justice during rulemaking. For example, EPA officials told us that in July 2007 a contractor would begin to analyze the environmental justice implications of a yet-to-be-determined regulation to control a specific air pollutant. EPA expects that the study, due in June 2008, will give the agency information about what socio-economic groups experience the benefits of a particular air regulation, and which ones bear the costs. 
EPA expects that the analysis will serve as a prototype for analyses of other pollutants.

Regarding our fourth recommendation that the Administrator direct cognizant officials to respond more fully to public comments on environmental justice, EPA officials cited one example of an air rule in which the Office of Air and Radiation received comments from tribes and other commenters who believed that a proposed air quality standard raised environmental justice concerns. According to the officials, the agency discussed the comments in the preamble to the final rule and in the associated response-to-comments document. Nonetheless, the officials with whom we met said they were unaware of any memoranda or revised guidance that would encourage more global, EPA-wide progress on this important issue. As we testified in July 2007, EPA’s actions to date were sufficiently incomplete that measurable benchmarks are needed to achieve environmental justice goals and hold agency officials accountable for making meaningful progress on environmental justice issues.

As I discussed in our February 2007 testimony, EPA deviated from key internal guidelines in developing the TRI Burden Reduction Rule. EPA’s Action Development Process provides a sequence of steps designed to ensure that scientific, economic, and policy issues are adequately addressed at the appropriate stages of rule development and to ensure cross-agency participation until the final rule is completed. Some of those steps relate to environmental justice issues. We found that EPA’s deviations were caused, in part, by pressure from the Office of Management and Budget to reduce industry’s TRI reporting burden by the end of December 2006. Throughout this process, senior EPA management has the authority to depart from the guidelines. Nevertheless, we identified several significant differences between the guidelines and the process that EPA followed in developing the TRI rule. 
Specifically:

First, EPA did not follow a key element of its guidelines that is intended to identify and select the options that best achieve the goal of the rulemaking. Specifically, an internal workgroup was charged with identifying and assessing options to reduce TRI reporting burden on industry and providing EPA management with a set of options from which management makes the final selection. However, in this case EPA management selected an altogether different option than the ones identified and assessed by the TRI workgroup. The TRI workgroup identified three options from a larger list of possible options that had been identified through a public stakeholder process, and the workgroup had scoped out these options’ costs, benefits, and feasibility. The first two options allowed facilities to use Form A in lieu of Form R for PBT chemicals, provided the facility had no releases to the environment. The third option would have created a new form, in lieu of Form R, for facilities to report “no significant change” if their releases changed little from the previous year. Under this element of EPA’s guidelines, senior management then selects the option(s) that best achieve the rule’s goals. However, based on our review of documents from the June 2005 options selection briefing for the Administrator and subsequent interviews with senior EPA officials, EPA deviated from this process. Specifically, it appears that the Office of Management and Budget (OMB) suggested an alternate option—increasing the Form A eligibility for non-PBT chemicals from 500 to 5,000 pounds—as a way of providing what OMB considered significant burden reduction. Yet the TRI workgroup had previously dropped this option from further consideration because of its impact on the TRI. 
In addition to reviving this burden reduction option, the Administrator directed EPA staff to expedite the rule development process after the briefing in order to meet a commitment to OMB to reduce the TRI reporting burden by the end of December 2006. Second, we found problems with the extent to which the agency sought input from internal stakeholders. EPA’s rule development guidelines are designed to ensure cross-agency participation until the rule is completed. For example, a key step in the guidelines provides for the draft rule and supporting analyses to be circulated for final agency review, when EPA’s internal and regional offices should have discussed with senior management whether they concurred with the rule. As provided for in its guidelines, EPA conducted a final agency review for the rule in July 2005. However, the draft rule and accompanying economic analysis that were circulated for review did not discuss or evaluate the impact of raising the Form A non-PBT threshold above 500 pounds because the economic analysis for this option was not yet completed. In fact, such an analysis was not completed until after EPA sent the proposed rule to OMB for review. Because the final agency review package addressed the “no significant change” option rather than the increased Form A threshold option, the EPA Administrator and the EPA Assistant Administrator for Environmental Information likely received limited input from internal stakeholders about the option to increase the Form A non-PBT threshold prior to sending the proposed rule to OMB for official review. Indeed, a measure of how rushed the process became is that the economic analysis for the proposed rule was completed just days before the proposal was signed by the Administrator on September 21, 2005, for publication in the Federal Register. 
Third, our review of EPA’s rule development process found that the agency did not conduct an environmental justice analysis to substantiate its assertion that the TRI rule would not have environmental justice impacts. In its proposed rule, EPA stated that it had “no indication that either option [changing reporting requirements for non-PBT and PBT chemicals] will disproportionately impact minority or low-income communities.” EPA concluded that it “believes that the data provided under this proposed rule will continue to provide valuable information that fulfills the purposes of the TRI program…” and that “the principal consequence of finalizing today’s action would be to reduce the level of detail available on some toxic chemical releases or management.” However, the reason EPA said it had no indication about environmental justice impacts is because the agency did not complete an environmental justice assessment before it published the rule for comment in the Federal Register. Furthermore, we found that the statement concerning disproportionate impacts in the proposed rule was not written by EPA; rather, it was added by the Office of Management and Budget during its official review of the rule. After publication of the TRI rule in the Federal Register, EPA received over 100,000 comments during the rule’s public comment period. Most commenters opposed EPA’s rule because of its impact on the TRI, and some commenters, including the attorneys general of California, Connecticut, Illinois, Iowa, Maryland, Massachusetts, New Hampshire, New Jersey, New Mexico, New York, Vermont, and Wisconsin, questioned whether EPA had evaluated environmental justice issues. In addition, three members of the House Committee on Government Reform wrote to EPA Administrator Stephen Johnson in December 2005 asking that he substantiate EPA’s conclusion that the TRI rule would not disproportionately impact minority and low-income communities. 
In March 2006, EPA provided Congress with an environmental justice analysis showing that it had evaluated affected areas by zip codes and by proximity to facilities reporting to TRI. Table 1 summarizes the results of that analysis, which found that communities within 1 mile of facilities that reported to the TRI were about 42 percent minority, on average, compared to about 32 percent for the country as a whole. In addition, those same communities are about 17 percent below the poverty level, compared to about 13 percent for the country as a whole. (Compare table 1, columns A and B.) EPA concluded that the results showed little variance in minority or poverty concentration near facilities currently reporting to the TRI compared to facilities that would be affected by the rule. (Compare table 1, columns B and C.) EPA argued that “while there is a higher proportion of minority and low-income communities in close proximity to some TRI facilities than in the population generally, the rule does not appear to have a disproportionate impact on these communities, since facilities in these communities are no more likely than elsewhere to become eligible to use Form A as a result of the rule.” However, EPA’s analysis indicates that TRI facilities are in communities that are one-third more minority and one-quarter more low-income, on average, than the U.S. population as a whole. Therefore, in comparison to the country at large, those populations would likely be disproportionately affected by an across-the-board reduction in TRI information. (Compare table 1, columns A and C.) In effect, EPA assumed that although minority and low-income communities disproportionately benefit from TRI information, this fact was irrelevant to its environmental justice analysis. However, the agency did not explain or provide support for this assumption. 
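The disparity implicit in EPA's own figures can be checked with simple arithmetic. The following sketch (in Python, purely for illustration) uses the percentages summarized from table 1 above:

```python
# Demographic shares from EPA's March 2006 analysis (summarized in table 1).
near_tri_minority = 0.42  # average minority share within 1 mile of TRI facilities
us_minority = 0.32        # minority share for the country as a whole
near_tri_poverty = 0.17   # average share below poverty near TRI facilities
us_poverty = 0.13         # share below poverty for the country as a whole

# Relative disparity: how much higher the shares are near TRI facilities
# than in the U.S. population overall.
minority_disparity = near_tri_minority / us_minority - 1
poverty_disparity = near_tri_poverty / us_poverty - 1

print(f"Minority share near TRI facilities: {minority_disparity:.0%} higher")
print(f"Poverty share near TRI facilities: {poverty_disparity:.0%} higher")
```

Both ratios work out to roughly 30 percent above the national averages, which is the basis for characterizing these communities as substantially more minority and more low-income than the country as a whole.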
I would like to illustrate the impact of EPA’s rule on the TRI using a new tool that can help the public better understand environmental issues in their communities. Google Earth is a free geographic mapping tool that overlays various content, including TRI data from EPA, onto satellite photos and maps. Using this tool, the public can combine EPA’s TRI and various demographic data to view the environmental justice impacts of EPA’s TRI rule. As an example, figure 1 shows a satellite image of southern California, including Los Angeles County and part of Orange County. The small dots indicate TRI facilities eligible for burden reduction under the TRI rule (i.e., eligible for reduced reporting on Form A). On top of every facility is a cylinder that indicates the demographic details of the people living within 1 mile of the facility. Specifically, the cylinders’ color shows the percent of that population that is minority (e.g., red cylinders indicate a community that is 80% or more minority). The cylinders’ height shows the percent of that population living below the poverty level (e.g., taller cylinders indicate poorer communities). As the height and color of the cylinders show, the communities in southern California near TRI-reporting facilities that are eligible for reduced reporting under EPA’s rule are disproportionately minority and low-income. As I mentioned earlier in my testimony, EPA’s latest response to our environmental justice recommendations used TRI as an example of how the agency has improved its handling of environmental justice in the rule development process. However, our analysis shows that EPA did not complete an environmental justice assessment before concluding that the proposed TRI rule did not disproportionately affect minority and low-income populations. 
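The visual encoding just described (cylinder color for minority share, height for poverty share) can be sketched as a simple mapping. This is an illustrative reconstruction, not the actual Google Earth overlay logic: the 80 percent threshold for red comes from the testimony, while the other color breaks and the height scale are assumptions for the example.

```python
def style_for_community(pct_minority: float, pct_poverty: float) -> dict:
    """Map the demographics within 1 mile of a facility to a display style:
    color encodes minority share, height encodes poverty share."""
    if pct_minority >= 80:       # threshold stated in the testimony
        color = "red"
    elif pct_minority >= 50:     # assumed intermediate break for illustration
        color = "orange"
    else:
        color = "yellow"
    # Taller cylinders indicate poorer communities; the scale factor is
    # arbitrary and chosen only so differences are visible on a map.
    height_m = pct_poverty * 100
    return {"color": color, "height_m": height_m}

# A community that is 85% minority with 25% of residents below poverty
# would render as a tall red cylinder.
print(style_for_community(85, 25))
```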
Even after EPA completed its analysis—in response to pressure from Members of Congress and the public—the agency concluded that the rule had no environmental justice implications despite the fact that communities near TRI facilities are, on average, more minority and low-income than the U.S. population as a whole; therefore, in comparison to the population at large, those populations would likely be disproportionately affected by an across-the-board reduction in TRI information. EPA asserted that its TRI Burden Reduction Rule will result in significant burden reduction without losing critical information, but our analyses show otherwise. We found that the rule, which went into effect for the reports that were due by July 1st of this year, reduces the quantity and detail of information currently available to many communities about toxic chemicals used, transported, or released in their environment. For each facility that chooses to file a Form A instead of Form R, the public will no longer have available quantitative information about a facility’s releases and waste management practices for a specific chemical that the facility manufactured, processed, or otherwise used. Appendix I shows the data contained on Form R compared with Form A. It is not possible to precisely quantify how much information will no longer be reported to the TRI on the detailed Form R because not all eligible facilities will take advantage of the rule allowing them to submit the brief Form A. But using the most recent available data for calendar year 2005, it is possible to estimate what currently-reported information no longer has to be reported under EPA’s revised TRI reporting requirements. Our analysis shows that EPA’s TRI rule could, by increasing the number of facilities that may use Form A, significantly reduce the amount of information currently available to many communities about toxic chemicals used, transported, or released into their environment. 
EPA estimated that the impact of its change to TRI would be minimal, amounting to less than 1 percent of total pounds of chemicals released nationally that no longer would have to be reported to the TRI. However, we found that the impact on individual communities is likely to be more significant than these national aggregate totals indicate. Specifically, EPA estimated that the Form R reports that could convert to Form A account for 5.7 million pounds of releases not being reported to the TRI (only 0.14% of all TRI release pounds) and an additional 10.5 million pounds of waste management activities (0.06% of total waste management pounds). However, to understand the potential impact of EPA’s changes to TRI reporting requirements more locally, we used 2005 TRI data to estimate the number of detailed Form R reports that would no longer have to be submitted in each state and found that nearly 22,200 Form R reports (28 percent) could convert to Form A under EPA’s new Form A thresholds. The number of possible conversions ranges by state from 25 in Vermont (27.2 percent of all Form Rs formerly filed in the state) to 2,196 Form Rs in Texas (30.6 percent of Form Rs formerly filed in the state). As figure 2 shows, Alaska, California, Connecticut, Georgia, Hawaii, Illinois, Maryland, Massachusetts, New Jersey, New York, North Carolina, Rhode Island, and Texas could lose at least 30 percent of Form R reports. Another way to characterize the impact of the TRI burden reduction rule is to examine what currently-available public data may no longer be reported about specific chemicals at the state level. The number of chemicals for which only Form A information may be reported under the TRI rule ranges from 3 chemicals in South Dakota to 60 chemicals in Georgia. That means that the specific quantitative information currently reported about those chemicals may no longer appear in the TRI database. 
Figure 3 shows that thirteen states—Delaware, Georgia, Hawaii, Iowa, Maryland, Massachusetts, Missouri, North Carolina, Oklahoma, Tennessee, Vermont, West Virginia, and Wisconsin—could no longer have quantitative information about at least 20 percent of TRI-reported chemicals in the state. The impact of the loss of information from these Form R reports can also be understood in terms of the number of facilities that could be affected. We estimated that 6,620 facilities nationwide could choose to convert at least one Form R to a Form A, and about 54 percent of those would be eligible to convert all their Form Rs to Form A. That means that approximately 3,565 facilities would not have to report any quantitative information about their chemical releases and other waste management practices to the TRI, according to our estimates. The number of facilities ranges from 5 in Alaska to 302 in California. For example, in 2005, the ATSC Marine Terminal, a bulk petroleum storage facility in Los Angeles County, California, reported releases of 13 different chemicals—including highly toxic benzene, toluene, and xylene—to the air. Although the facility’s releases totaled about 5,000 pounds, it released less than 2,000 pounds of each chemical, and therefore would no longer have to file Form Rs for them. As figure 4 shows, more than 10 percent of facilities in each state except Idaho would no longer have to report any quantitative information to the TRI. The most affected states are Colorado, Connecticut, the District of Columbia, Hawaii, Massachusetts, and Rhode Island, where more than 20 percent of facilities could choose to not disclose the details of their chemical releases and other waste management practices by submitting a Form A in lieu of a Form R. Furthermore, our analysis found that citizens living in 75 counties in the United States—including 11 in Texas, 10 in Virginia, and 6 in Georgia—could have no quantitative TRI information about local toxic pollution. 
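The threshold change driving examples like the ATSC Marine Terminal can be sketched as follows. The chemical names and release amounts below are hypothetical, and the eligibility test is simplified to the per-chemical release ceiling discussed in this testimony (the actual rule imposes additional conditions on other waste-management quantities):

```python
OLD_LIMIT = 500    # pre-2007 Form A release ceiling (lbs/year, non-PBT chemicals)
NEW_LIMIT = 2000   # ceiling under the TRI Burden Reduction Rule

def form_a_eligible(release_lbs: float, limit: float) -> bool:
    """Simplified test: a chemical's detailed Form R can convert to the brief
    Form A when annual releases do not exceed the limit."""
    return release_lbs <= limit

# Hypothetical per-chemical releases for a facility like the ATSC example:
# several chemicals, each under 2,000 lbs, totaling roughly 5,000 lbs.
releases = {"benzene": 450, "toluene": 1900, "xylene": 1100, "hexane": 1550}

old_form_a = sorted(c for c, r in releases.items() if form_a_eligible(r, OLD_LIMIT))
new_form_a = sorted(c for c, r in releases.items() if form_a_eligible(r, NEW_LIMIT))
print(old_form_a)  # only one chemical escaped detailed reporting before
print(new_form_a)  # under the new ceiling, every chemical may move to Form A
```

As with the ATSC example, a facility whose individual releases all fall under the raised ceiling could stop filing detailed Form Rs entirely, even though its total releases are substantial.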
With regard to EPA’s assertion that the TRI rule will result in significant reduction in industry’s reporting burden—the primary rationale for the rule—the agency estimated that the rule would save, at most, $5.9 million. (See table 2.) According to our calculations, these cost savings amount to only 4 percent of the $147.8 million total annual cost to industry of TRI reporting. Also, as we testified in February 2007, EPA’s estimate likely overestimates the total cost savings (i.e., burden reduction) that will be realized by reporting facilities because not all eligible facilities will choose to file a Form A in lieu of Form R. Environmental justice and the TRI are related and mutually dependent. Our assessment shows that EPA did not fully consider important impacts of its TRI rule, including environmental justice impacts on communities, when evaluating the rule’s costs and benefits. That is, EPA’s recent changes to TRI reporting requirements will reduce the amount and specificity of toxic chemical information that facilities have to report to the TRI and that will, in turn, impact communities’ ability to assess environmental justice and other issues. It is unlikely that the TRI rule provides, as EPA asserts, significant reduction in industry’s reporting burden without losing critical environmental information.

Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or members of the Subcommittee may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact John Stephenson at (202) 512-3841 or stephensonj@gao.gov. Key contributors to this testimony were Steven Elstein, Terrance Horner, Richard Johnson, and Daniel Semick. Other contributors included Mark Braza, Karen Febey, Kate Cardamone, Alison O’Neill, and Jennifer Popovic. 
Facilities must submit a detailed Form R report for each designated chemical that they use in excess of certain thresholds, or certify that they are not subject to the reporting requirement by submitting a brief Form A certification statement. Form A captures general information about the facility, such as address, parent company, industry type, and basic information about the chemical or chemicals it released. Form R includes the same information, but also requires facilities to provide details about the quantity of the chemical they disposed of or released on-site to the air, water, or land, injected underground, or transferred for disposal or release off-site. Table 3 provides details about the specific information the facilities provide on the Form R and Form A. We analyzed 2005 TRI data provided by EPA to estimate the number of Form Rs that could convert to Form A in each state and determined the possible impacts that this could have on data about specific chemicals and facilities. EPA released the 2005 data in March 2007; 2006 data is expected in spring of 2008. Table 4 provides our estimates of the total number of Form Rs eligible to convert to Form A, including the percent of total Form Rs submitted by facilities in each state. The table also provides our estimates of the number of unique chemicals for which no quantitative information would have to be reported in each state, including the percent of total chemicals reported in each state. The last two columns provide our estimates for the number of facilities that would no longer have to provide quantitative information about their chemical releases and waste management practices, including the percent of total facilities reporting in each state.
A 1994 Executive Order sought to ensure that minority and low-income populations are not subjected to disproportionately high and adverse health or environmental effects from agency activities. In a July 2005 report, GAO made several recommendations to improve the Environmental Protection Agency's (EPA) adherence to these environmental justice principles. The Emergency Planning and Community Right-to-Know Act of 1986 (EPCRA) requires certain facilities that use toxic chemicals to report their releases to EPA, which makes the information available in the Toxics Release Inventory (TRI). Since 1995, facilities may submit a brief statement (Form A) in lieu of the more detailed Form R if releases of a chemical do not exceed 500 pounds a year. In January 2007, EPA finalized the TRI Burden Reduction Rule, quadrupling to 2,000 pounds what facilities can release before having to disclose details using Form R. Congress is considering codifying the Executive Order and requiring EPA to implement GAO's environmental justice recommendations. Other legislation would amend EPCRA to, among other things, revert the Form A threshold to 500 pounds or less. In this testimony, GAO discusses (1) EPA's response to GAO's environmental justice recommendations, (2) the extent to which EPA followed internal guidelines when developing the TRI rule, and (3) the impact of the rule on communities and facilities. EPA initially disagreed with GAO's July 2005 environmental justice recommendations, saying it was already paying appropriate attention to the issue. GAO called on EPA to improve the way it addresses environmental justice in its economic reviews and to better explain its rationale by providing data to support the agency's decisions. A year later, EPA responded more positively to the recommendations and committed to a number of actions. 
However, based on information that EPA has subsequently provided, GAO concluded in a July 2007 testimony that EPA's actions to date were incomplete and that measurable benchmarks were needed to hold agency officials accountable for achieving environmental justice goals. In developing the TRI rule, EPA did not follow key aspects of its internal guidelines, including some related to environmental justice. EPA did not follow guidelines to ensure that scientific, economic, and policy issues are addressed at appropriate stages of rule development. For example, EPA asserted that the rule would not have environmental justice impacts; however, it did not support this assertion with adequate analysis. The omission is significant because many TRI facilities that no longer have to submit Form R reports are located in minority and low-income communities, and the reduction in toxic chemical information could disproportionately affect them. EPA's TRI rule will reduce the amount of information about toxic chemical releases without providing significant savings to facilities. A total of nearly 22,200 Form R reports from some 3,500 facilities are eligible to convert to Form A under the rule. While EPA says the aggregate impact of these conversions will be minimal, the effect on individual states and communities may be significant, as illustrated below. Although the rule makes significantly less information available to communities, GAO estimated that it would save companies little--an average of less than $900 per facility.
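The savings figures above follow from straightforward arithmetic on the estimates cited in this statement, sketched here for illustration:

```python
# Figures cited in this testimony.
max_annual_savings = 5_900_000      # EPA's high-end annual savings estimate ($)
total_reporting_cost = 147_800_000  # total annual industry cost of TRI reporting ($)
eligible_facilities = 6_620         # facilities that could convert at least one Form R

savings_share = max_annual_savings / total_reporting_cost
savings_per_facility = max_annual_savings / eligible_facilities

print(f"Savings as share of total reporting cost: {savings_share:.0%}")       # 4%
print(f"Average savings per eligible facility: ${savings_per_facility:,.0f}")  # under $900
```

Even taking EPA's maximum estimate at face value, the rule trades a substantial loss of community-level information for an average saving of under $900 per eligible facility per year.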
The following examples illustrate how New Zealand, Finland, the EU, the UK, Australia, and Hong Kong have addressed well-known tax administration issues. New Zealand, like the U.S., addresses various national objectives through a combination of tax expenditures and discretionary spending programs. Tax expenditures are the amount of revenue that a government forgoes to provide some type of tax relief for taxpayers in special circumstances, such as the Earned Income Tax Credit in the United States. In New Zealand, tax expenditures are known as tax credits. New Zealand has overcome obstacles to evaluating these related programs at the same time to better judge whether they are working effectively. Rather than separately evaluating certain government services, New Zealand completes integrated evaluations of tax expenditures and discretionary spending programs to analyze their combined effects. Using this approach, New Zealand can determine, in part, whether tax expenditures and discretionary spending programs work together to accomplish government goals. One example is the Working For Families (WFF) Tax Credits program, which is an entitlement for families with dependent children to promote employment. Prior to the introduction of WFF in 2004, New Zealand’s Parliament discovered that many low-income families were not better off from holding a low-paying job, and those who needed to pay for childcare to work were generally worse off in low-paid work than they would have been receiving government benefits without working. This prompted Parliament to change its in-work incentives and financial support, including tax expenditures. The Working for Families Tax Credits program differs from tax credit programs in the United States in that it is an umbrella program that spans certain tax credits administered by the Inland Revenue Department (IRD) as well as discretionary spending programs administered by the Ministry of Social Development (MSD). 
IRD collects most of the revenue and administers the tax expenditures for the government. Being responsible for collecting sensitive taxpayer information, IRD must maintain tax privacy and protect the integrity of the New Zealand tax system. MSD administers the WFF program’s funds and is responsible for collecting data that includes monthly income received by its beneficiaries. This required that IRD and MSD keep separate datasets, making it difficult to assess the cumulative effect of the WFF program. To understand the cumulative effect of changes made to the WFF program and ensure that eligible participants were using it, New Zealand created a joint research program between IRD and MSD from October 2004 to April 2010. The joint research program created linked datasets between IRD and MSD. Access to sensitive taxpayer information was restricted to IRD employees on the joint research program and to authorized MSD employees only after they were sworn in as IRD employees. The research provided information on key outcomes that could only be tracked through the linked datasets. The research found that the WFF program aided the transition from relying on government benefits to employment, as intended. It also found that a disproportionate number of those not participating in the program were from an indigenous population, which faced barriers to taking advantage of the WFF. Barriers included the perceived stigma from receiving government aid, the transaction costs of too many rules and regulations, and the small amounts of aid for some participants. Changes made by Parliament to WFF based on these findings provided an additional NZ$1.6 billion (US$1.2 billion) per year in increased financial entitlements and in-work support to low- to middle-income families. While economic differences exist between the New Zealand and U.S. tax systems, both systems use tax expenditures (i.e., tax credits in New Zealand). 
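In miniature, the cross-agency record linkage that the IRD-MSD joint research program relied on looks like the sketch below. All identifiers, field names, and values are hypothetical; the real program linked protected taxpayer records under IRD access controls.

```python
# Hypothetical IRD records: taxpayer id -> annual WFF tax credit received (NZ$)
ird = {"T1001": 4200, "T1002": 0, "T1003": 3100}
# Hypothetical MSD records: taxpayer id -> months on income benefits this year
msd = {"T1001": 2, "T1002": 11, "T1003": 0}

# Link the two agencies' datasets on the shared identifier so combined
# outcomes can be studied, e.g. whether WFF recipients are moving off benefits.
linked = {tid: {"wff_credit": ird[tid], "benefit_months": msd[tid]}
          for tid in ird.keys() & msd.keys()}

# A question only the linked data can answer: which WFF recipients spent
# little or no time on benefits (an illustrative 3-month cutoff)?
transitioning = [tid for tid, rec in linked.items()
                 if rec["wff_credit"] > 0 and rec["benefit_months"] <= 3]
print(sorted(transitioning))  # ['T1001', 'T1003']
```

Neither agency's dataset alone can answer the transition question, which is why the joint research program's linked datasets were essential to evaluating the WFF program.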
Unlike the United States, New Zealand has developed a method to evaluate the effectiveness of tax expenditures and discretionary spending programs through joint research that created interagency linked datasets. New Zealand did so while protecting confidential tax data from unauthorized disclosure. In 2005, we reported that the U.S. had substantial tax expenditures but lacked clarity on the roles of the Office of Management and Budget (OMB), the Department of the Treasury, IRS, and federal agencies with discretionary spending program responsibilities to evaluate the tax expenditures. Consequently, the U.S. lacked information on how effective tax expenditures were in achieving their intended objectives, how cost-effectively benefits were achieved, and whether tax expenditures or discretionary spending programs worked well together to accomplish federal objectives. At that time, OMB disagreed with our recommendations to incorporate tax expenditures into federal performance management and budget review processes, citing methodological and conceptual issues. However, in its fiscal year 2012 budget guidance, OMB instructed agencies, where appropriate, to analyze how to better integrate tax and spending policies that have similar objectives and goals. Finland better ensures accurate withholding of taxes from taxpayers’ income, lowers its costs, and reduces taxpayers’ filing burdens through Internet-based electronic services. In 2006, Finland established a system, called the Tax Card, to help taxpayers estimate a withholding rate for the individual income tax. The Tax Card, an Internet-based system, covers Finland’s national tax, municipality tax, social security tax, and church tax. The Tax Card is accessed through secured systems in the taxpayer’s Web bank or an access card issued by Finland’s government. 
The Tax Card system enables taxpayers to update their withholding rate as many times as needed throughout the year, adjusting for events that increase or decrease their income tax liability. When an update is completed, the employer is notified of the changed withholding tax rate through the mail or by the employee providing a copy to the employer. According to the Tax Administration, about a third of all taxpayers using the Tax Card, about 1.4 to 1.6 million people, change their withholding percentages at least once a year. Finland generally refunds a small share of the withheld funds to taxpayers (e.g., it refunded about 8 percent of the withheld money in 2007). Finland also has prepared income tax returns for individuals over the last 5 years. The Tax Administration prepares the return for the tax year ending on December 31st based on third-party information returns, such as reporting by employers on wages paid or by banks on interest paid to taxpayers. During April, the Tax Administration mails the pre-prepared return for the taxpayer's review. Taxpayers can revise the paper form and return it to the Tax Administration by mail or revise the return electronically online. According to Tax Administration officials, about 3.5 million people do not ask to change their tax return and about 1.5 million request a change. Electronic tax administration is part of a government-wide policy to use electronic services to lower the cost of government and encourage growth in the private sector. According to Tax Administration staff, increasing electronic services to taxpayers helps to lower costs. Overall, the growth of electronic services, according to Finnish officials, has helped to reduce Tax Administration staff by over 11 percent from 2003 to 2009 while improving taxpayer service. 
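The Tax Card mechanics described above, a withholding rate the taxpayer can revise mid-year with any excess withholding refunded after the year ends, can be sketched as follows. This is an illustration only; the function name, the monthly structure, and all rates and amounts are hypothetical, not actual Finnish figures.

```python
def finnish_style_withholding(monthly_income, rate_by_month, final_tax):
    """Illustrative sketch of a Tax Card-style system: the taxpayer may
    change the withholding rate during the year, and any amount withheld
    beyond the final tax liability is refunded at settlement."""
    withheld = sum(income * rate
                   for income, rate in zip(monthly_income, rate_by_month))
    refund = max(withheld - final_tax, 0.0)
    balance_due = max(final_tax - withheld, 0.0)
    return withheld, refund, balance_due

# A taxpayer lowers the rate from 25 to 20 percent at midyear after an
# expected drop in other income (all numbers hypothetical)
withheld, refund, due = finnish_style_withholding(
    [3_000.0] * 12, [0.25] * 6 + [0.20] * 6, final_tax=7_500.0)
# 8,100 withheld against a 7,500 liability leaves a 600 refund
```

The point of the sketch is that the adjustment happens during the year, keeping the settlement refund small, consistent with Finland's roughly 8 percent refund share.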
According to officials of the Finnish government as well as public interest and trade groups, the Tax Card and pre-prepared return systems were established under a strong culture of national cooperation. For the pre-prepared return system to work properly, Finland's businesses and other organizations that prepare information returns had to accept the burden of filing accurate returns promptly after the end of the tax year. Based on our discussions with several industry and taxpayer groups, Finland's tax system is positively viewed by taxpayers and industry. They stated that Finland has a simple, stable tax system, which makes compliance easier to achieve. As a result, few individuals use a tax advisor to help prepare and file their annual income tax return. In contrast to Finland's self-described "simple and stable" system, the U.S. tax system is complex and constantly changing. Regarding withholding estimation, Finland's Tax Card system provides taxpayers an online system for regularly updating the tax amount withheld. For employees in the U.S., the IRS's Website offers a withholding calculator to help employees determine whether to contact their employer about revising their tax withholding. Finland's system prepares a notice to the employer, which can be sent through the mail or delivered in person, whereas in the U.S. the taxpayer must file a form with the employer on the amount to be withheld based on the estimation system's results. In the U.S., individual income tax returns are completed by taxpayers—not the IRS—using information returns mailed to their homes and their own records. Taxpayers are to accurately prepare and file an income tax return by its due date. In Finland, very few taxpayers use a tax advisor to prepare their annual individual income tax return. Unlike in Finland, U.S. individual taxpayers rely heavily on tax advisors and tax software to prepare their annual return. In the U.S. 
about 90 percent of individual income tax returns are prepared by paid preparers or by the taxpayer using commercial software. The European Union (EU) seeks to improve tax compliance through a multilateral agreement on the exchange of information on interest earned by each nation's individual taxpayers. This agreement addresses common issues with the accuracy and usefulness of information exchanged among nations that have differing technical, language, and formatting approaches for recording and transmitting such information. Under the directive, adopted in June 2003, the 27 EU members and 10 other participants agreed to share information about income from interest payments made to individuals who are citizens of another member nation. With this information, the tax authorities are able to verify whether their citizens properly reported and paid tax on the interest income. The directive provides the basic framework for the information exchange, defining essential terms and establishing automatic information exchange among members. As part of the directive, 3 EU member nations as well as 5 European nonmember nations agreed to apply equivalent measures (i.e., the withholding tax with revenue sharing described below) during a transition period through 2011, rather than automatically exchanging information. Under this provision, a 15 percent withholding tax gradually increases to 35 percent by July 1, 2011. The withholding provision included a revenue-sharing provision, which authorizes the withholding nation to retain 25 percent of the tax collected and transfer the other 75 percent to the nation of the account owner. The directive also requires the account owner's home nation to ensure that withholding does not result in double taxation by granting a tax credit equal to the amount of tax paid to the nation in which the account is located. A September 2008 report to the EU Council described the status of the directive's implementation. 
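The transitional withholding arithmetic just described, withholding in the paying nation, a 25/75 revenue split, and a compensating credit in the owner's home nation, can be sketched as follows. The function and all amounts are illustrative, not drawn from the directive's text.

```python
def transitional_withholding(interest_income, withholding_rate=0.35):
    """Sketch of the directive's transitional measure: the nation where
    the account is located withholds tax on interest paid to a foreign
    account owner, retains 25 percent of the tax, and transfers 75
    percent to the owner's home nation. The home nation grants the
    owner a credit for the full tax withheld to prevent double
    taxation. Amounts are illustrative."""
    tax_withheld = interest_income * withholding_rate
    retained = 0.25 * tax_withheld        # kept by the withholding nation
    transferred = 0.75 * tax_withheld     # sent to the owner's home nation
    home_nation_credit = tax_withheld     # offsets the owner's home tax
    return tax_withheld, retained, transferred, home_nation_credit

# 10,000 in interest at the 35 percent rate applicable after July 1, 2011
withheld, retained, transferred, credit = transitional_withholding(10_000.0)
# 3,500 withheld: 875 retained, 2,625 transferred, 3,500 credited
```

Because the credit equals the full amount withheld, the account owner bears no double tax even though the paying nation keeps a quarter of the revenue.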
During the first 18 months of information exchange and withholding, data limitations, such as incomplete information on the data exchanged and tax withheld, created major difficulties for evaluating the directive's effectiveness. Further, no benchmark was available to measure the effect of the changes. According to EU officials, the most common administrative issue, especially during the first years of implementing the directive, has been identifying the account owner reported in the computerized format. It is generally recognized that a Taxpayer Identification Number (TIN) provides the best means of identifying the owner. However, the current directive does not require paying agents to record a TIN. Using names has caused problems when other EU member states tried to access the data. For example, a name that is misspelled cannot be matched. In addition, how some member states format their mailing addresses may have led to data-access problems. EU officials told us that the monitoring role of the EU Commission, the data-corrections process, and frequent contacts to resolve specific issues have contributed to effective use of the data received by EU member states. Other problems with implementing the directive include identifying whether investors moved their assets into categories not covered by the directive (e.g., shifting to equity investments), and concerns that the tax withholding provisions may not be effective because withholding rates were low until 2011, when the rate became 35 percent. The EU also identified problems with the definition of terms, making uniform application of the directive difficult. Generally, these terms identify which payments are covered by the directive, who must report under the directive, and who owns the interest for tax purposes. Nevertheless, EU officials stated that the quality of data has improved over the years. 
EU officials have worked with member nations to resolve specific data issues, which has contributed to the effective use of the information exchanged under the directive. Comparing EU and U.S. practices on exchanging tax information with other countries, the U.S. agreements and the directive both allow for automatic information exchange. The U.S. is party to the Convention on Mutual Administrative Assistance in Tax Matters, which includes exchange-of-information provisions and has been ratified by 15 nations and the U.S. However, the U.S. is prevented by IRC section 6105 from releasing data about the extent of information exchanged with treaty partners or the type of information exchange used. The UK promotes accurate tax withholding and reduces taxpayers' filing burdens by calculating withholding rates for taxpayers and requiring that payers of certain types of income withhold taxes at standard rates. The UK uses information reporting and withholding to simplify tax reporting and tax payments for individual tax returns. Both the individual taxpayer and Her Majesty's Revenue and Customs (HMRC)—the tax administrator—are to receive information returns from third parties who make payments to a taxpayer, such as for bank account interest. A key element of this system is the UK's Pay As You Earn (PAYE) system. Under the PAYE system, HMRC calculates an amount of withholding from wages to meet a taxpayer's liability for the current tax year. According to HMRC officials, the individual tax system in the UK is simple for most taxpayers who are subject to PAYE. PAYE makes it unnecessary for wage earners to file a yearly tax return, unless special circumstances apply. For example, wage earners do not need to file a return unless income from interest, dividends, or capital gains exceeds certain thresholds or if deductions need to be reported. 
Therefore, a tax return may not be required, because most individuals do not earn enough of these income types to trigger self-reporting. For example, the first £10,100 (US$16,239) of capital gains income is exempt from being reported on tax returns. Even so, payers of interest or dividend income withhold tax before payments are made. PAYE also facilitates the payment of tax liabilities through periodic withholding at source on wages. The withheld amount may be adjusted by HMRC to collect any unpaid taxes from previous years or to refund overpayments. HMRC annually notifies the taxpayer and employer of the amount to withhold. Taxpayers can provide HMRC with additional information that can be used to adjust their withholding. If taxpayers provide information on their other income, such as self-employment earnings, rental income, or investment income, HMRC can adjust the PAYE withholding. Individuals not under the PAYE system are required to file a tax return after the end of the tax year based on their records. In addition, HMRC uses information reporting and tax withholding as part of its two-step process to assess compliance risks on filed returns. In the first step, individual tax returns are reviewed for inherent compliance risks based on the taxpayer's income level and the complexity of the tax return. For example, wealthy taxpayers with complex business income are considered to have a higher compliance risk than wage earners. In the second step, information compiled from various sources—including information returns and public sources—is analyzed to identify returns with a high compliance risk. According to HMRC officials, these assessments have allowed HMRC to look at national and regional trends. HMRC is also attempting to uncover emerging compliance problems by combining and analyzing data from the above sources as well as others. The UK and U.S. 
both have individual income tax returns and use information reporting and tax withholding to help ensure the correct tax is reported and paid. However, differences exist between the countries' systems. The U.S. has six tax rates that differ among five filing statuses for individuals (i.e., single, married, married filing separately, surviving spouse, or head of household) and cover all types of taxable income. In general, the UK system has three tax rates, one tax status (individuals), and a different tax return depending on the taxable income (e.g., self-employed or employed individuals). U.S. income tax withholding applies to wages paid but not to interest and dividend income as it does in the UK. U.S. wage earners, rather than the IRS, are responsible for informing employers of how much income tax to withhold, if any, and must annually self-assess and file their tax returns, unlike most UK wage earners. Another major difference is that the U.S. automatically matches data from information returns and the withholding system to data from the income tax return to identify individuals who underreported income or failed to file required returns. Matching is done using a unique identifier, the TIN. HMRC officials told us that they have no automated document-matching process and that the UK does not use TINs as a universal identifier, which is needed for wide-scale document matching. HMRC officials said that they may do limited manual document matching in risk assessments and compliance checks. For example, HMRC manually matches some taxpayer data—such as name, address, and date of birth—from bank records to corresponding data on tax returns. The closest form of unique identifier that HMRC uses, with some limitations, is the national insurance number. HMRC officials said they are barred from using the national insurance number for widespread document matching, which leaves HMRC with some unmatchable information returns. 
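The advantage of matching on a unique identifier rather than on names, as discussed above, can be illustrated with a small sketch. The records, field names, and function here are hypothetical; this is not an actual IRS or HMRC process.

```python
def match_information_returns(information_returns, tax_returns):
    """Pair third-party information returns with filed tax returns by a
    unique identifier (a TIN). A misspelled name does not defeat the
    match, whereas matching on names alone would fail for it."""
    filed_by_tin = {ret["tin"]: ret for ret in tax_returns}
    matched, unmatched = [], []
    for info in information_returns:
        if info["tin"] in filed_by_tin:
            matched.append(info)
        else:
            unmatched.append(info)  # possible nonfiler or bad identifier
    return matched, unmatched

info_returns = [
    {"tin": "123-45-6789", "name": "Jon Smith", "interest": 250},  # misspelled name
    {"tin": "999-99-9999", "name": "Pat Doe", "interest": 40},     # no return filed
]
filed_returns = [{"tin": "123-45-6789", "name": "John Smith"}]
matched, unmatched = match_information_returns(info_returns, filed_returns)
# The misspelled record still matches on its TIN; the other is flagged
```

Matching on a name alone would miss "Jon Smith" versus "John Smith," which is the limitation of the UK's manual name-and-address checks described above.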
High-wealth individuals often have complex business relationships involving many entities that they may directly control or indirectly influence, and these relationships may be used to reduce taxes illegally or in a manner that policymakers may not have intended. Australia has developed a compliance program that requires these taxpayers to provide information on these relationships and that provides such taxpayers additional guidance on proper tax reporting. The Australian High Net Wealth Individuals (HNWI) program focuses on the characteristics of wealthy taxpayers that affect their tax compliance. According to the Australian Taxation Office (ATO), in the mid-1990s, ATO was perceived as enforcing strict sanctions on average taxpayers but not the wealthy. By 2008, ATO found that high-wealth taxpayers, those with a net worth of more than A$30 million (US$20.9 million), had substantial income from complex arrangements, which made it difficult for ATO to identify and assure compliance. ATO concluded that the wealthy required a different tax administration approach. ATO set up a special task force to improve its understanding of wealthy taxpayers, identify their tax planning techniques, and improve voluntary compliance. Due to some wealthy taxpayers' aggressive tax planning, which ATO defines as investment schemes and legal structures that do not comply with the law, ATO quickly realized that it could not reach its goals for voluntary compliance for this group by examining taxpayers as individual entities. To tackle the problem, ATO began to view wealthy taxpayers as part of a group of related business and other entities. Focusing on control over related entities rather than on just individual tax obligations provided a better understanding of wealthy individuals' compliance issues. The HNWI approach followed ATO's general compliance model. The model's premise is that tax administrators can influence tax compliance behavior through their responses and interventions. 
For compliant wealthy taxpayers, ATO developed a detailed questionnaire and expanded the information on business relationships that these taxpayers must report on their tax return. For noncompliant wealthy taxpayers, ATO is to assess the tax risk and then determine the intensity of ATO's compliance interventions. According to fiscal year 2008 ATO data, the HNWI program has produced financial benefits. Since the establishment of the program in 1996, ATO has collected A$1.9 billion (US$1.67 billion) in additional revenue and reduced revenue losses by A$1.75 billion (US$1.5 billion) through compliance activities focused on highly wealthy individuals and their associated entities. ATO's program focus on high-wealth individuals and their related entities has been adopted by other tax administrators. By 2009, nine other countries, including the U.S., had formed groups to focus resources on high-wealth individuals. Like the ATO, the IRS is taking a close look at high-income and high-wealth individuals and their related entities. As announced by the Commissioner of Internal Revenue in 2009, the IRS formed the Global High Wealth (GHW) industry to take a holistic approach to high-wealth individuals. The IRS consulted with the ATO as GHW got up and running to discuss the ATO's approach to the high-wealth population, as well as its operational best practices. As of February 2011, GHW field groups had a number of high-wealth individuals and several of their related entities under examination. One difference is that Australia has a separate income tax return for high-wealth taxpayers to report information on assets owned or controlled by HNWIs. In contrast, the U.S. has no separate tax return for high-wealth individuals and generally does not seek asset information from individuals. According to IRS officials, the IRS traditionally scores the risk of individual tax returns based on individual reporting characteristics rather than on a network of related entities. 
However, the IRS has been examining how to do risk assessments of networks through its GHW program since 2009. Another difference is that the ATO requires HNWIs to report their business networks and the IRS currently does not. Although withholding of taxes by payers of income is a common practice to ensure high levels of taxpayer compliance, Hong Kong's Salaries Tax does not require withholding by employers, and tax administrators and taxpayers appear to find a semiannual payment approach effective. Hong Kong's Salaries Tax is a tax on wages and salaries with a small number of deductions (e.g., charitable donations and mortgage interest). The Salaries Tax is paid by about 40 percent of the estimated 3.4 million wage earners in Hong Kong, while the other 60 percent are exempt from the Salaries Tax. To collect the Salaries Tax, Hong Kong does not use periodic (e.g., biweekly or monthly) tax withholding by employers. Rather, Hong Kong collects it through two payments by taxpayers for a tax year. Because the tax year runs from April 1st through March 31st, a substantial portion of income for the tax year has been earned by January (i.e., income for April to December). Accordingly, the taxpayer is to pay 75 percent of the estimated tax for that tax year in January (as well as pay any unpaid tax from the previous year). The remaining 25 percent of the estimated tax is to be paid 3 months later in April. By early May, the Inland Revenue Department (IRD)—the tax administrator—annually prepares individual tax returns for taxpayers based on information returns filed by employers. Taxpayers review the prepared return, make any revisions, such as including deductions (e.g., charitable contributions), and file it with IRD. IRD then reviews the returns and determines whether any additional tax is due. 
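The two-payment schedule described above, 75 percent of the estimated tax in January (when most of the April-to-March tax year's income has been earned) plus any unpaid tax from the prior year, with the remaining 25 percent due in April, can be sketched as follows. The function and all amounts are illustrative.

```python
def salaries_tax_payments(estimated_tax, prior_year_balance=0.0):
    """Sketch of Hong Kong's two-payment Salaries Tax collection:
    75 percent of the estimated tax, plus any unpaid tax carried over
    from the prior year, is due in January; the remaining 25 percent
    is due in April. Amounts are illustrative."""
    january_payment = 0.75 * estimated_tax + prior_year_balance
    april_payment = 0.25 * estimated_tax
    return january_payment, april_payment

# Estimated tax of 20,000 with 1,000 of additional tax carried over
january, april = salaries_tax_payments(20_000.0, prior_year_balance=1_000.0)
# January payment of 16,000; April payment of 5,000
```

The January installment is weighted at 75 percent precisely because nine of the twelve months of the tax year have already been earned by then.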
If the final Salaries Tax assessment turns out to be higher than the estimated tax previously assessed, IRD is to notify the taxpayer, who is to pay the additional tax concurrently with the January payment of estimated tax for the next tax year. Based on our discussions with tax experts, practitioners, and a public opinion expert, Hong Kong's tax system is positively viewed by these groups. They generally believe that low tax rates, a simple system, and cultural values contribute to Hong Kong's collection of the Salaries Tax through the two payments rather than periodic withholding. Tax rates are fairly low, starting at 2 percent of the adjusted salary earned and not exceeding 15 percent. Further, tax experts told us that the Salaries Tax system is simple. Few taxpayers use a tax preparer because the tax form is very straightforward and the tax system is described as "stable." In addition, an expert on public opinion in Hong Kong told us that taxpayers fear a loss of face if recognized as not complying with tax law. This cultural attitude helps promote compliance. Unlike Hong Kong's twice-a-year payments for the Salaries Tax, the U.S. income tax on wages relies on periodic tax withholding in which tax is paid as income is earned. IRS provides guidance (e.g., Publication 15) on how and when employers should withhold income tax (e.g., every other week) and deposit the withheld income taxes (e.g., monthly). Further, U.S. individual tax rates are higher and the system is more complex. These tax rates begin at 10 percent and progress to 35 percent. Further, the U.S. taxes many forms of income beyond salary income on the individual tax return. IRS officials learn about foreign tax practices by participating in international organizations of tax administrators. By doing so, IRS officials say they regularly exchange ideas and learn about other practices. As the IRS learns of these practices, it may adopt a practice based on the needs of the U.S. tax system. 
IRS is actively involved in two international tax organizations and one jointly run program that address common tax administration issues. First, the IRS participates in the Center for Inter-American Tax Administration (CIAT), a forum made up of 38 member countries and associate members that exchange experiences with the aim of improving tax administration. CIAT, formed in 1967, is to promote integrity and transparency among tax administrators, promote compliance, and fight tax fraud. The IRS participates with CIAT in designing and developing tax administration products and with CIAT's International Tax Planning Control committee. Second, the IRS participates in the Organisation for Economic Co-operation and Development (OECD) Forum on Tax Administration (FTA), which is chaired by the IRS Commissioner during 2011. The FTA was created in July 2002 to promote dialogue between tax administrations and identify good tax administration practices. Since 2002, the forum has issued over 50 comparative analyses on tax administration issues to assist member and selected nonmember countries. IRS and OECD officials exchange tax administration knowledge. For example, the IRS is participating in the OECD's first peer review of information exchanged under tax treaties and tax information exchange agreements. Under the peer-review process, senior tax officials from several OECD countries examine each selected member's legal and regulatory framework and evaluate the member's implementation of OECD tax standards. The peer-review report on IRS information exchange practices is expected to be published in mid-2011. As for the jointly run program, the Joint International Tax Shelter Information Centre (JITSIC) attempts to supplement ongoing work in each country to identify and curb abusive tax schemes by exchanging information on these schemes. JITSIC was formed in 2004 and now includes the tax agencies of Australia, Canada, China, Japan, South Korea, the United Kingdom, and the U.S. 
According to the IRS, JITSIC members have identified and challenged the following highly artificial arrangements: a cross-border scheme involving millions of dollars in improper deductions and unreported income on tax returns from retirement account withdrawals; highly structured financing transactions created by financial institutions that taxpayers used to generate inappropriate foreign tax credit benefits; and made-to-order losses on futures and options transactions for individuals in other JITSIC jurisdictions, leading to more than $100 million in evaded taxes. To date, the IRS has implemented one foreign tax administration practice. As presented earlier, Australia’s HNWI program examines sophisticated legal structures that wealthy taxpayers may use to mask aggressive tax strategies. In 2009, the OECD issued a report on the tax compliance problems of wealthy individuals and concluded that “high net worth individuals pose significant challenges to tax administrations” due to their complex business dealings across different business entities, higher tax rates, and higher likelihood of using aggressive tax planning or tax evasion. According to an IRS official, during IRS’s participation in the OECD High Wealth Project in 2008, IRS staff began to realize the value of this program to the U.S. tax system. As we stated, the IRS now has a program focused on wealthy individuals and their networks. Chairman Baucus, Ranking Member Hatch, and Members of the Committee, this concludes my statement. I would be happy to answer any questions you may have at this time. For further information regarding this testimony, please contact Michael Brostek, Director, Strategic Issues, on (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Individuals making key contributions to this testimony include Thomas Short, Assistant Director; Leon Green; John Lack; Alma Laris; Andrea Levine; Cynthia Saunders; and Sabrina Streagle. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Internal Revenue Service (IRS) and foreign tax administrators face similar issues regardless of the particular provisions of their laws. These issues include, for example, helping taxpayers prepare and file returns, and assuring tax compliance. GAO was asked to describe (1) how foreign tax administrators have approached issues that are similar to those in the U.S. tax system and (2) whether and how the IRS identifies and adopts tax administration practices used elsewhere. To do this, GAO reviewed documents and interviewed six foreign tax administrators as well as tax experts, tax practitioners, taxpayers, and trade group representatives. GAO also examined documents and met with IRS officials. This preliminary information is based on GAO's ongoing work for the Committee to be completed at a later date. Foreign and U.S. tax administrators use many of the same practices such as information reporting, tax withholding, providing web-based services, and finding new approaches for tax compliance. These practices, although common to each system, have important differences. Although differences in laws, culture, or other factors likely would affect the transferability of foreign tax practices to the U.S., these practices may provide useful insights for policymakers and the IRS. For example, New Zealand integrates evaluations of its tax and discretionary spending programs. The evaluation of its Working For Families tax benefits and discretionary spending, which together financially assist low- and middle-income families to promote employment, found that its programs aided the transition to employment but that it still had an underserved population; these findings likely would not have emerged from separate evaluations. GAO previously has reported that the U.S. lacks clarity on evaluating tax expenditures and related discretionary spending programs and does not generally undertake integrated evaluations. 
In Finland, electronic tax administration is part of a government policy to use electronic services to lower the cost of government and encourage private-sector growth. Overall, according to Finnish officials, electronic services have helped to reduce Tax Administration staff by over 11 percent from 2003 to 2009 while improving taxpayer service. IRS officials learn about these practices based on interactions with other tax administrators and participation in international organizations, such as the Organisation for Economic Co-operation and Development. In turn, IRS may adopt new practices based on the needs of the U.S. tax system. For example, in 2009, IRS formed the Global High Wealth Industry program. IRS consulted with Australia about its approach and operational practices.
In October 2008, we reported that Interior's policies for identifying and evaluating lease parcels and bids differ in key ways depending on whether the lease is located offshore—and therefore overseen by OEMM—or onshore—and therefore overseen by BLM. These differences follow. Identifying lease parcels. OEMM's and BLM's methods for identifying areas to lease vary significantly. Specifically: For offshore leases, OEMM—as prescribed by the Outer Continental Shelf Lands Act—lays out 5-year strategic plans for the areas it plans to lease and establishes a schedule for offering leases. OEMM offers leases for competitive bidding, and all eligible companies may submit written sealed bids, referred to as bonus bids, for the rights to explore, develop, and produce oil and gas resources on these leases, including drilling test wells. For onshore leases, BLM—which must follow the Federal Onshore Oil and Gas Leasing Reform Act of 1987—is not required to develop a long-term leasing plan and instead relies on industry and the public to nominate areas for leasing. BLM selects lands to lease from these nominations, as well as some parcels it has identified on its own. In some cases, BLM, like OEMM, offers leases through a competitive bidding process, but with bonus bids received in an oral auction rather than in sealed written form. Evaluating bids. OEMM and BLM differ in their regulations and policies for evaluating whether the bids received for areas offered for lease are sufficient. Specifically: For offshore leases, OEMM compares sealed bids with its own independent assessment of the value of the potential oil and gas in each lease. After the bids are received, OEMM—using a team of geologists, geophysicists, and petroleum engineers assisted by a software program—conducts a technical assessment of the potential oil and gas resources associated with the lease and other factors to develop an estimate of their fair market value. 
This estimate becomes the minimally acceptable bid and is used to evaluate the bids received. The bidder that submits the highest bonus bid that meets or exceeds OEMM's estimate of the fair market value of a lease is awarded the lease. These rights last for a set period of time, referred to as the primary term of the lease, which may be 5, 8, or 10 years, depending on the water depth. If no bids equal or exceed the minimally acceptable bid, the lease is not awarded but is offered at a subsequent sale. According to OEMM, since 1995, the practice of rejecting bids that fall below the minimally acceptable bid and re-offering these leases at a later sale resulted in an overall increase in bonus receipts of $373 million between 1997 and 2006. For onshore leases, BLM relies exclusively on competitors, participating in an oral auction, to determine a lease's market value. Furthermore, BLM, unlike OEMM, does not currently employ a multidisciplinary team with the appropriate range of skills or appropriate software to develop estimates of the oil and gas reserves for each lease parcel and thus establish a market- and resource-based minimum acceptable bid. Instead, BLM has established a uniform national minimum acceptable bid of at least $2 per acre and has taken the position that as long as at least one bid meets this $2-per-acre threshold, the lease will be awarded to the highest bidder. Importantly, onshore leases that do not receive any bids in the initial offer become available noncompetitively the day after the lease sale and remain available for leasing for a period of 2 years after the competitive lease sale. Any of these available leases may be acquired on a first-come, first-served basis, subject to payment of an administrative fee. Prior to 1992, BLM offered primary terms of 5 years for competitively sold leases and 10 years for leases issued noncompetitively. 
Since 1992, BLM has been required by law to only offer leases with 10-year primary terms whether leases are sold competitively or issued noncompetitively. Oil and gas activity has generally increased over the past 20 years, and our reviews have found that Interior has—at times—been unable to meet its oversight obligations for (1) completing environmental inspections, (2) verifying oil and gas production, (3) performing environmental monitoring in accordance with land use plans, and (4) using categorical exclusions to streamline environmental analyses required for certain oil and gas activities. Specifically: Completing environmental inspections. In June 2005, we reported that, with the increase in oil and gas activity, BLM had not consistently been able to complete its required environmental inspections—the primary mechanism to ensure that companies are complying with various environmental laws and lease stipulations. At the time of our review, BLM officials explained that because staff were spending increasing amounts of time processing drilling permits, they had less time to conduct environmental inspections. Verifying oil and gas production. In September 2008, we reported that neither BLM nor OEMM was meeting its statutory obligations or agency targets for inspecting certain leases and metering equipment used to measure oil and gas production, raising uncertainty about the accuracy of oil and gas measurement. For onshore leases, BLM had completed only a portion of its production verification inspections—with some BLM offices completing all of their required inspections and others completing portions as small as one quarter of their required inspections—because its workload had substantially grown in response to increases in onshore drilling. For offshore leases, OEMM had completed about half of its required production inspections in 2007 because of ongoing cleanup work related to Hurricanes Katrina and Rita.
Additionally, in our ongoing work, we have found that Interior has not consistently updated its oil and gas measurement regulations. Specifically, OEMM has routinely reviewed and updated its measurement regulations, whereas BLM has not. Accordingly, OEMM has updated its measurement regulations six times since 1998, whereas BLM has not updated its measurement regulations since 1989. Performing environmental monitoring. In June 2005, we reported that four of the eight BLM field offices we visited had not developed any resource monitoring plans to help track management decisions and determine if desired outcomes had been achieved, including those related to mitigating the environmental impacts of oil and gas development. We concluded that without these plans, land managers may be unable to determine the effectiveness of various mitigation measures attached to drilling permits and decide whether these measures need to be modified, strengthened, or eliminated. Officials offered several reasons for not having these plans, including that staff that could have been used to develop such plans had been busy with processing an increased number of drilling permits, as well as budget constraints. Using categorical exclusions. Our report issued today on BLM’s use of categorical exclusions—authorized under section 390 of the Energy Policy Act of 2005 to streamline the environmental analysis required under the National Environmental Policy Act (NEPA) when approving certain oil and gas activities—identifies some benefits but raises numerous questions about how and when BLM should use these categorical exclusions. First, our analysis found that BLM used section 390 categorical exclusions to approve over one-quarter of its applications for drilling permits from fiscal years 2006 to 2008. 
While these categorical exclusions generally increased the efficiency of operations, some BLM field offices, such as those with recent environmental analyses already completed, were able to benefit more than others. Second, we found that BLM's use of section 390 categorical exclusions was frequently out of compliance with both the law and agency guidance and that a lack of clear guidance and oversight by BLM were contributing factors. We found several types of violations of the law, such as BLM offices approving more than one oil or gas well under a single decision document and approving the drilling of a new well after statutory time frames had lapsed. We also found examples, in 85 percent of field offices reviewed, where officials did not comply with agency guidance, most often by failing to adequately justify the use of a categorical exclusion. While many of these violations and instances of noncompliance were technical in nature, others were more significant and may have thwarted NEPA's twin aims of ensuring that both BLM and the public are fully informed of the environmental consequences of BLM's actions. Third, we found that a lack of clarity in both section 390 of the act and BLM's guidance has raised serious concerns.
Specifically: (1) Fundamental questions about what section 390 categorical exclusions are and how they should be used have led to concerns that BLM may be using these categorical exclusions in too many—or too few—instances; for example, there is disagreement as to whether BLM must screen section 390 categorical exclusions for circumstances that would preclude their use or whether their use is mandatory; (2) Concerns about key concepts underlying the law’s description of these categorical exclusions have arisen—specifically, whether section 390 categorical exclusions allow BLM to exceed development levels, such as number of wells to be drilled, analyzed in supporting NEPA documents without conducting further analysis; and (3) Vague or nonexistent definitions of key criteria in the law and BLM guidance have led to varied interpretations among field offices and concerns about misuse and a lack of transparency. In light of our findings from this report, we recommended that BLM take steps to improve the implementation of section 390 of the act by clarifying agency guidance, standardizing decision documentation, and ensuring compliance through more oversight. We also suggested that Congress may wish to consider amending the Energy Policy Act of 2005 to clarify and resolve some of the key issues identified in our report. In our past work, we have identified several areas where Interior may be missing opportunities to increase revenue by fundamentally shifting the terms of federal oil and gas leases. As we reported in September 2008, (1) federal oil and gas leasing terms result in the U.S. government receiving one of the smallest shares of oil and gas revenue when compared to other countries and (2) Interior’s royalty rate, which does not change to reflect changing prices and market conditions, led to pressure on Interior and Congress to periodically change royalty rates. We also reported that Interior was doing far less than some states to encourage development of leases. 
Specifically: The U.S. government receives one of the lowest shares of revenue for oil and gas resources compared with other countries and resource owners. For example, we reported the results of a private study in 2007 showing that the revenue share the U.S. government collects on oil and gas produced in the Gulf of Mexico ranked 93rd lowest of the 104 revenue collection regimes around the world covered by the study. Further, the study showed that some countries had increased their shares of revenues as oil and gas prices rose and, as a result, could collect between an estimated $118 billion and $400 billion, depending on future oil and gas prices. However, despite significant changes in the oil and gas industry over the past several decades, we found that Interior had not systematically re-examined how the U.S. government is compensated for extraction of oil and gas for over 25 years. Since 1980, in part due to Interior's inflexible royalty rate structure, Congress and Interior have been pressured—with varying success—to periodically adjust royalty rates to respond to current market conditions. For example, in 1980, when oil prices were high in inflation-adjusted terms compared to today's prices, Congress passed a windfall profit tax, which it later repealed in 1988 after oil prices had fallen significantly from their 1980 level. Later, in November 1995—during a period of relatively low oil and gas prices—the federal government enacted the Outer Continental Shelf Deep Water Royalty Relief Act (DWRRA), which provided for "royalty relief," the suspension of royalties on certain volumes of initial production, for certain leases in the Gulf of Mexico in depths greater than 200 meters during the 5 years after passage of the act—1996 through 2000. For leases issued during these 5 years, litigation established that MMS lacked the authority under the act to impose price thresholds.
As a result, companies are now receiving royalty relief even though prices are much higher than at the time the DWRRA was enacted. In June 2008, we estimated that future foregone royalties from all the DWRRA leases issued from 1996 through 2000 could range widely—from a low of about $21 billion to a high of $53 billion. Finally, in 2007, the Secretary of the Interior twice increased the royalty rate for future Gulf of Mexico leases. In January, the rate for deep water leases was raised to 16.66 percent. Later, in October, the rate for all future leases in the Gulf, including those issued in 2008, was raised to 18.75 percent. The January 2007 increase applied only to deep water Gulf of Mexico leases; the October 2007 increase applied to all water depths in the Gulf of Mexico. Interior estimated these actions would increase federal oil and gas revenues by $8.8 billion over the next 30 years. We concluded that these royalty rate increases appeared to be a response by Interior to the high prices of oil and gas that have led to record industry profits and raised questions about whether the existing federal oil and gas fiscal system gives the public an appropriate share of revenues from oil and gas produced on federal lands and waters. Further, the royalty rate increases did not address industry profits from existing leases. Existing leases, with lower royalty rates, would likely remain highly profitable as long as they produced oil and gas or until oil and gas prices fell significantly. In addition, in choosing to increase royalty rates, Interior did not evaluate the entire oil and gas fiscal system to determine whether or not these increases were sufficient to balance investment attractiveness and appropriate returns to the federal government for oil and gas resources. On the other hand, according to Interior, it did consider factors such as industry costs for outer continental shelf exploration and development, tax rates, rental rates, and expected bonus bids.
Further, because the increased royalty rates are not flexible with respect to oil and gas prices, Interior and Congress could again be under pressure from industry or the public to further change the royalty rates if and when oil and gas prices either fall or rise. Finally, these past royalty changes only affected Gulf of Mexico leases and did not address onshore leases. Interior's OEMM and BLM varied in the extent to which they encouraged development of federal leases, and both agencies did less than some states and private landowners to encourage lease development. As a result, we concluded that Interior may be missing opportunities to increase domestic oil and gas production and revenues. Specifically, in the Gulf of Mexico, OEMM varied the lease length in accordance with the depth of water over which the lease is situated. For example, leases issued in shallow water depths typically have lease terms of 5 years, whereas leases in the deepest areas of the Gulf of Mexico have 10-year primary terms; shallower water tends to be nearer to shore and adjacent to already developed areas with pipeline infrastructure in place, while deeper water tends to be farther out, have less available infrastructure to link up with, and generally present greater challenges associated with the depth of the wells themselves. In contrast, BLM issues leases with 10-year primary terms, regardless of whether the lease happens to lie adjacent to a fully developed field with the necessary pipeline infrastructure to carry the product to market, or whether it is in a remote location with no surrounding infrastructure. Furthermore, BLM also uses 10-year primary terms in the National Petroleum Reserve-Alaska, where it is significantly more difficult to develop oil fields because of factors including the harsh environment.
We also examined selected states and private landowners that lease land for oil and gas development and found that some did more than Interior to encourage lease development. For example, to provide a greater financial incentive to develop leased land, the state of Texas allowed lessees to pay a 20 percent royalty rate for the life of the lease if production occurred in the first 2 years of the lease, as compared to 25 percent if production occurred after the fourth year. In addition, we found that some states and private landowners also did more to structure leases to reflect the likelihood of finding oil and gas. For example, New Mexico issued shorter leases and could require lessees to pay higher royalties for properties in or near known producing areas and allowed longer leases and lower royalty rates in areas believed to be more speculative. Officials from one private landowners’ association told us that they too were using shorter lease terms, ranging from as little as 6 months to 3 years, to ensure that lessees were diligent in developing any potential oil and gas resources on their land. Louisiana and Texas also issued 3-year onshore leases. While the existence of lease terms that appear to encourage faster development of some oil and gas leases suggest a potential for the federal government to also do more in this regard, it is important to note that it can take several years to complete the required environmental analyses needed for lessees to receive approval to begin drilling on federal lands. 
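The Texas incentive noted above amounts to a time-dependent royalty schedule, and its financial effect is simple arithmetic. The sketch below is illustrative only; the rate for a lease that first produces in years 3 to 4 is an assumption, since the source gives only the two endpoint rates:

```python
def texas_style_royalty_rate(years_until_first_production):
    """Tiered rate for the life of the lease, keyed to how quickly the
    lessee brings the lease into production (per the Texas example):
    20 percent if production occurs in the first 2 years, 25 percent
    if production occurs after the fourth year."""
    if years_until_first_production <= 2:
        return 0.20
    if years_until_first_production > 4:
        return 0.25
    return 0.225  # assumed intermediate rate; not stated in the source

# On $10 million of production value, producing early saves the lessee
# $500,000 relative to producing after year four.
early = texas_style_royalty_rate(1) * 10_000_000  # 2,000,000
late = texas_style_royalty_rate(5) * 10_000_000   # 2,500,000
print(late - early)  # 500000.0
```

The design point is that the lessee, not the landowner, bears a measurable cost for sitting on an undeveloped lease, which is the incentive effect the report contrasts with Interior's flat-rate terms.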
To address what we believed were key weaknesses in this program, while acknowledging potential differences between federal, state, and private leases, we recommended that the Secretary of the Interior develop a strategy to evaluate options to encourage faster development of oil and gas leases on federal lands, including determining whether methods that differentiate between leases according to the likelihood of finding economic quantities of oil or gas, as well as some of the other methods states use, could effectively be employed, either across all federal leases or in a targeted fashion. In so doing, we recommended that Interior identify any statutory or other obstacles to using such methods and report the findings to Congress. We also noted that Congress may wish to consider directing the Secretary of the Interior to: convene an independent panel to perform a comprehensive review of the federal oil and gas fiscal system, and direct MMS and other relevant agencies within Interior to establish procedures for periodically collecting data and information and conducting analyses to determine how the federal government's take and the attractiveness of each federal oil and gas region to investors compare to those of other resource owners, and to report this information to Congress. Our past work and preliminary findings have identified shortcomings in Interior's IT systems for managing oil and gas royalty and production information. In September 2008, we reported that Interior's oil and gas IT systems did not include several key functionalities, including (1) limiting a company's ability to make adjustments to self-reported data after an audit had occurred and (2) identifying missing royalty reports.
Since September 2008, MMS has made improvements in identifying missing royalty reports, but it is too early to assess their effectiveness, and we remain concerned with the following issues: MMS’s ability to maintain the accuracy of production and royalty data has been hampered because companies can make adjustments to their previously entered data without prior MMS approval. Companies may legally make changes to both royalty and production data in MMS’s royalty IT system for up to 6 years after the initial reporting month, and these changes may necessitate changes in the royalty payment. However, MMS’s royalty IT system currently allows companies to make adjustments to their data beyond the allowed 6-year time frame. As a result of the companies’ ability to make these retroactive changes, within or outside of the 6-year time frame, the production data and required royalty payments can change over time—even after MMS completes an audit—complicating efforts by agency officials to reconcile production data and ensure that the proper royalties were paid. MMS’s royalty IT system is also unable to automatically detect instances when a royalty payor fails to submit the required royalty report in a timely manner. As a result, cases in which a company stops filing royalty reports and stops paying royalties may not be detected until more than 2 years after the initial reporting date, when MMS’s royalty IT system completes a reconciliation of volumes reported on the production reports with the volumes on their associated royalty reports. Therefore, it remains possible under MMS’s current strategy that the royalty IT system may not identify instances in which a payor stops reporting until several years after the report is due. This creates an unnecessary risk that MMS may not be collecting accurate royalties in a timely manner. Additionally, in July 2009, we reported that MMS’s IT system lacked sufficient controls to ensure that royalty payment data were accurate. 
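The two controls discussed above—flagging adjustments made outside the 6-year window, and detecting royalty reports that were never filed against the corresponding production reports—amount to simple edit checks. This is a minimal sketch with hypothetical field names, not MMS's actual system logic:

```python
from datetime import date

# Allowed adjustment period after the initial reporting month, per the source.
ADJUSTMENT_WINDOW_YEARS = 6

def adjustment_within_window(initial_report_month: date,
                             adjustment_date: date) -> bool:
    """Return False for adjustments submitted outside the 6-year window,
    so they can be routed for review rather than silently accepted."""
    cutoff = initial_report_month.replace(
        year=initial_report_month.year + ADJUSTMENT_WINDOW_YEARS)
    return adjustment_date <= cutoff

def missing_royalty_reports(production_months, royalty_report_months):
    """Months with reported production but no corresponding royalty report,
    so a payor that stops filing is flagged immediately instead of years
    later at reconciliation."""
    return sorted(set(production_months) - set(royalty_report_months))

print(adjustment_within_window(date(2001, 5, 1), date(2008, 6, 1)))  # False
print(missing_royalty_reports(["2008-01", "2008-02", "2008-03"],
                              ["2008-01", "2008-03"]))  # ['2008-02']
```

The second check is essentially the production-to-royalty reconciliation the report describes, run on every reporting cycle rather than more than 2 years after the fact.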
While many of the royalty data we examined from fiscal years 2006 and 2007 were reasonable, we found significant instances where data were missing or appeared erroneous. For example, we examined gas leases in the Gulf of Mexico and found that, about 5.5 percent of the time, lease operators reported production, but royalty payors did not submit the corresponding royalty reports, potentially resulting in $117 million in uncollected royalties. We also found that a small percentage of royalty payors reported negative royalty values, which should not occur, potentially costing $41 million in uncollected royalties. In addition, royalty payors claimed gas processing allowances 2.3 percent of the time for unprocessed gas, potentially resulting in $2 million in uncollected royalties. Furthermore, we found significant instances where royalty payor-provided data on royalties paid and the volume and/or the value of the oil and gas produced appeared erroneous because they were outside the expected ranges. Moreover, in preliminary findings on Interior's procedures for ensuring that oil and gas produced from federal leases is properly accounted for, we found that: The IT systems employed by both BLM and MMS fail to communicate effectively with one another, resulting in cumbersome data transfers and data errors. For example, in order to complete the weekly transfer of oil and gas production data between MMS and BLM, MMS staff must copy all production data onto a disk, which then must be sent to BLM's building, where it is subsequently uploaded into BLM's IT system. Furthermore, according to BLM staff, the production uploads are currently not working as intended. Frequently, an operator may make adjustments to production records, which results in the creation of a new record. When these new records are uploaded into BLM's IT system, they should replace—or overlay—the prior record.
However, due to technical problems, new reports are not correctly overlaying the previously uploaded production reports; instead, they are creating duplicate or triplicate production reports for the same operator and month. According to BLM's IT system coordinator, this will likely complicate BLM's production accountability work. BLM's efforts to use gas production data acquired remotely from gas wells through its Remote Data Acquisition for Well Production program to facilitate production inspections have shown few results after 5 years of funding and at least $1.5 million spent. Currently, BLM is only receiving production data from approximately 50 wells via this program, and it has yet to use the data to complete a production inspection, making it difficult to assess the program's utility. To address weaknesses we identified in our September 2008 report, we recommended that the Secretary of the Interior, among other things: finalize the adjustment line monitoring specifications for modifying the royalty IT system and fully implement the system so that MMS can monitor adjustments made outside the 6-year time frame; ensure that any adjustments made to production and royalty data after compliance work has been completed are reviewed by appropriate staff; and develop processes and procedures by which MMS can automatically identify when an expected royalty report has not been filed in a timely manner and contact the company to ensure it is complying with both applicable laws and agency policies. In addition, to address weaknesses identified in our July 2009 report, we made a number of recommendations to MMS intended to improve the quality of royalty data by improving its IT systems' edit checks, among other things. Interior's management and oversight of its RIK program has raised concerns as to whether Interior is receiving the correct royalty volumes of oil and gas.
Both we and Interior's Inspector General have issued reports detailing deficiencies in both program management and management ethics, including (1) problems with reporting the benefits of the RIK program to Congress, (2) Interior's failure to use available third-party data to confirm gas production volumes, (3) inappropriate relationships between RIK staff and industry representatives, and (4) insufficient controls for monitoring natural gas imbalances, among others. Specifically: In September 2008, we reported that MMS's annual reports to Congress did not fully describe the performance of the RIK program and, in some instances, may have overstated the benefits of the program. For example, MMS's calculation that from fiscal years 2004 to 2006, MMS sold royalty oil and gas for $74 million more than it would have received in cash was based on assumptions, not actual sales data, about the prices at which royalty payors would have sold their oil or gas had they sold it on the open market. MMS did not report to Congress that even small changes in these assumptions could result in very different estimates. Also, MMS's calculation that the RIK program cost about $8 million less to administer than the royalty-in-value program over the same period did not include certain costs, such as IT costs shared with the royalty-in-value program, that would likely have changed the results of MMS's administrative cost analysis. In addition, MMS's annual reports to Congress lacked important information on the financial results of individual oil sales that Congress could use to more broadly assess the performance of the RIK program. In 2008, we also reported that MMS's oversight of its natural gas production volumes was less robust than its oversight of oil production volumes. As a result, MMS did not have the same level of assurance that it was collecting the gas royalties it was owed.
For instance, for oil, MMS compared companies’ self-reported oil production data with third-party pipeline meter data from OEMM’s liquid verification system, which records oil volumes flowing through pipeline metering points. Using these third-party pipeline statements to verify production volumes reported by companies would have provided a check against companies’ self-reported statement of royalty payments owed to the federal government. While analogous data were available from OEMM’s gas verification system, MMS did not use these third-party data to verify the company-reported production numbers. As of February 2009, MMS had begun to use the gas verification system. Interior’s Inspector General also issued a report in September 2008 which found that the program had suffered from ethical shortcomings. In particular, the Inspector General found that a program manager had been paid for consulting by an oil and gas company in violation of agency rules and that up to one-third of all RIK staff had inappropriately socialized and received gifts from oil and gas companies. Most recently, in August 2009, we found that MMS risks losing millions of dollars in revenue from the RIK natural gas program due to inadequate oversight. Specifically: MMS lacks the necessary information to quantify revenues resulting from imbalances—instances when MMS receives a percentage of total production other than its entitled royalty percentage. MMS does not know the exact amount it is owed as a result of natural gas imbalances because it lacks at least three types of information. First, it does not verify all gas production data to ensure it receives its entitled percentage of RIK gas. Second, MMS lacks information on how to price gas imbalances and when interest will begin accruing on imbalances for leases that have terminated from the program or those leases where production has ceased. Finally, MMS could be forgoing revenue because it lacks information on daily gas imbalances. 
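An imbalance of the kind described above is simply the difference between the government's entitled royalty share of production and the volume actually delivered. The sketch below is a hedged illustration with hypothetical names and units, not MMS's reconciliation logic:

```python
def daily_gas_imbalance(production_mcf, royalty_rate, delivered_mcf):
    """Entitled RIK volume minus the volume actually delivered to MMS.
    A positive result means MMS was shorted that day; a negative result
    means it was over-delivered."""
    entitled_mcf = production_mcf * royalty_rate
    return entitled_mcf - delivered_mcf

# A lease with a 12.5 percent royalty producing 80,000 mcf in a day
# entitles MMS to 10,000 mcf; if only 9,400 mcf were delivered, the
# day's imbalance is 600 mcf owed to the government.
print(daily_gas_imbalance(80_000, 0.125, 9_400))  # 600.0
```

The report's point is that without verified production data, priced imbalances, and daily tracking, neither input to this subtraction is reliable, so the amount owed cannot be quantified.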
MMS also may be forgoing revenue because it does not audit operator data to ensure it has received its entitled royalty percentage. Although MMS has procedures for reconciling imbalances and uses OEMM’s gas verification system data where available, we found that it has not assessed the risk of forgoing audits at those measurement points where it does not have complete data with which to verify that it has been allocated its entitled percentage of gas. Although the RIK guidance letter to operators states MMS’s right to audit operator information related to RIK gas produced and delivered, MMS has not done so because it has considered its verification of operator-generated data to be sufficient. MMS has also claimed that it has saved money as a result of not auditing and that this is a benefit of the RIK program. However, other royalty owners and members of the oil and gas industry regularly audit operator-reported data to ensure that they have received the gas they are entitled to. To address weaknesses we identified in our September 2008 and August 2009 reports, we recommended that the Director of MMS, among other things: improve calculations of the benefits and costs of the RIK program and the information presented to Congress by (1) calculating and presenting a range of the possible performances of the RIK sales in accordance with Office of Management and Budget guidelines; (2) reevaluating the process by which it calculates the early payment savings; (3) disclosing the costs to acquire, develop, operate, and maintain RIK-specific IT systems; and (4) disaggregating the oil sales data to show the variation in the performances of individual sales. 
improve MMS’s oversight of the RIK gas program and help ensure that the nation receives its fair share of RIK gas by (1) establishing policies and procedures to ensure outstanding imbalances are valued appropriately and that the correct amount of interest is charged; (2) monitoring daily gas imbalances and determining whether legislative changes are needed to require operators to deliver the royalty percentage on a daily basis; (3) auditing the operators and imbalance data; (4) promulgating RIK program regulations; and (5) establishing procedures, with reasonable deadlines, for resolving and collecting all RIK gas imbalances in a timely manner. In conclusion, over the past several years, we and others have examined oil and gas leasing at the Department of the Interior many times and determined such leasing to be in need of fundamental reform across a wide range of Interior’s functions. As Congress considers what fundamental changes are needed in how Interior structures its oversight of oil and gas leasing, we believe that our and others’ past work provides a road map for successful reform of the agency’s oversight functions. If steps are not taken to effectively manage these challenges, we remain concerned about the agency’s ability to manage the nation’s oil and gas and provide reasonable assurance that the U.S. government is collecting an appropriate amount of revenue for the extraction and use of these scarce resources. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions that you or other Members of the Committee may have at this time. For further information on this statement, please contact Frank Rusco at (202) 512-3841 or ruscof@gao.gov. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Other staff that made key contributions to this testimony include Ron Belak, Ben Bolitzer, Melinda Cordero, Nancy Crothers, Heather Dowey, Glenn C. 
Fischer, Cindy Gilbert, Richard Johnson, Mike Krafve, Jon Ludwigson, Jeff Malcolm, Alison O’Neill, Justin Reed, Holly Sasso, Dawn Shorey, Karla Springer, Barbara Timmerman, Maria Vargas, Tama Weinberg, and Mary Welch. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal year 2008, the Department of the Interior collected over $22 billion in royalties and other fees related to oil and gas. Within Interior, the Bureau of Land Management (BLM) manages onshore federal oil and gas leases, and the Minerals Management Service's (MMS) Offshore Energy and Minerals Management (OEMM) manages offshore leases. A federal lease gives the lessee rights to explore for and develop the lease's oil and gas resources. MMS is responsible for collecting royalties for oil and gas produced from both onshore and offshore leases. GAO has reviewed federal oil and gas management and revenue collection and found many material weaknesses. This testimony is based primarily on key findings from past GAO reports and some preliminary findings from ongoing work. These findings focus on Interior's: (1) policies for oil and gas leasing, (2) oversight of oil and gas production, (3) royalty regime and policies to boost oil and gas development, (4) oil and gas information technology (IT) systems, and (5) royalty-in-kind program. GAO's past reports provided recommendations that Interior officials report they are working to implement. GAO's numerous evaluations of federal oil and gas management have identified five key areas where Interior could provide greater oversight: Interior's policies for leasing offshore and onshore oil and gas differ in key ways. Specifically, MMS sets out a 5-year strategic plan identifying both a leasing schedule and the areas it plans to lease. In contrast, BLM relies on industry and others to nominate areas for leasing, then selects lands to lease from these nominations, as well as areas it has identified. Additionally, MMS independently assesses the value of each lease and reserves the right to reject low bids, whereas BLM relies exclusively on the results of its bid auctions to determine a lease's market value.
Oil and gas activity has generally increased in recent years, and Interior has, at times, been unable to meet its legal and agency-mandated oversight obligations for (1) completing required environmental inspections, (2) verifying oil and gas production, (3) using categorical exclusions to streamline environmental analyses required for certain oil and gas activities, and (4) performing environmental monitoring in accordance with land use plans. Interior may be missing opportunities to fundamentally shift the terms of federal oil and gas leases and increase revenues. Compared to other countries, the United States receives one of the lowest shares of revenue for oil and gas. In addition, Interior's royalty rate, which does not change to reflect changing prices and market conditions, has, at times, led to pressure on Interior and Congress to periodically change royalty rates in response to market conditions. Interior also has done less than some states and private landowners to encourage lease development and may be missing opportunities to increase production and, subsequently, revenues. Interior's oil and gas IT systems lack key functionalities. GAO's past work found that MMS's ability to maintain the accuracy of oil and gas production and royalty data was hampered by two key limitations in its IT system: (1) it did not limit companies' ability to adjust self-reported data after MMS had audited them, and (2) it did not identify missing royalty reports. Preliminary GAO findings have also identified technical problems within BLM's IT systems and their compatibility with MMS's IT systems. Interior's royalty-in-kind program, in which oil and gas producers submit royalties in oil and gas rather than cash, continues to face challenges. GAO found problems with MMS's analysis of program benefits that were reported to Congress, and that MMS failed to use third-party data to verify companies' self-reported data.
Meanwhile, Interior's Inspector General identified major ethical lapses, including inappropriate relationships between MMS royalty-in-kind program officials and industry representatives.
The Commercial Space Launch Act Amendments of 1988 (CSLAA) established the foundation for the current U.S. policy to potentially provide federal payment for a portion of claims by third parties for injury, damage, or loss that results from a commercial launch or reentry accident. A stated goal of CSLAA was to provide a competitive environment for the U.S. commercial space launch industry. The act also provides for, among other things, government protection against some losses—referred to as indemnification—while still minimizing the cost to taxpayers. All FAA-licensed commercial launches and reentries by U.S. companies, whether unmanned or manned and from the United States or overseas, are covered by federal indemnification for third-party damages that result from the launch or reentry. The U.S. indemnification policy has a three-tier approach for sharing liability between the government and the private sector to cover third-party claims: The first tier of coverage is the responsibility of the launch company and is handled under an insurance policy purchased by the launch company or through a demonstration of financial responsibility. As part of FAA’s process for issuing a license for a commercial launch or reentry, the agency determines the amount of insurance a launch company is required to purchase so the launch company can compensate third parties and the federal government for any claims for damages that occur as a result of activities carried out under the license. The amount of insurance coverage that FAA can require is capped at a maximum of $500 million for damages to third parties and $100 million for damages to federal government property and personnel. The second tier of coverage is to be provided by the U.S. government and covers any third-party claims in excess of the specific first-tier amount up to a limit of approximately $3.1 billion. For the federal government to be liable for these claims, Congress would need to appropriate funds. 
The third tier of coverage is for third-party claims in excess of the second tier. Like the first tier, this third tier is the responsibility of the launch company, which may seek insurance above the required first-tier amount for this coverage. Unlike the first tier, no insurance is required under federal law. The amount of insurance coverage that FAA requires launch companies to purchase is intended to reflect the greatest dollar amount of loss to third parties and the federal government for bodily injury and property damage that can be reasonably expected to result from a launch or reentry accident. This amount is known as the maximum probable loss (MPL). For each launch license that it issues, FAA determines MPL values for third parties with the intent of estimating the greatest dollar amount of losses that could be expected from a launch or reentry accident, which have no less than a 1 in 10 million chance of occurring. Given the structure of the indemnification policy, an MPL calculation that overestimates the amount of losses that can reasonably be expected would increase the costs for launch companies by requiring them to purchase more coverage than is necessary, while an MPL calculation that does not account for losses that can be reasonably expected would expose the federal government to excess risk. FAA has used a statistical approach to calculate MPL values that considers three primary elements: a number of estimated casualties, an estimate of the average loss per casualty, and the estimated amount of losses from property damage. Prior to recent changes that we discuss in greater detail later in this report, FAA estimated these three elements in the following ways: Number of estimated casualties. 
To estimate the number of direct casualties that could result from a launch accident, including serious injuries and deaths, FAA has (1) estimated the total area of the debris field that would result in the event of a launch vehicle’s self-destruction system being triggered as a safety measure, (2) estimated the area within that debris field that would cause a casualty if a person were within it, and (3) multiplied that area by the maximum population density of the selected population center. In addition, FAA estimated the number of casualties that could result from secondary effects, such as fires and collapsing buildings, to be 150 percent of the number of direct casualties. FAA added direct and secondary casualties together to estimate the total number of casualties. Estimated loss per casualty. To determine the cost of judgments and settlements that would result from the estimated casualties, FAA has used $3 million as an estimate of the average loss per casualty. FAA has used this $3 million figure, referred to as the “cost-of-casualty amount” throughout this report, since 1988, when it was selected to be a conservative estimate of jury awards for transportation casualties at that time. Estimated losses due to property damage. FAA has estimated losses due to property damage to be 50 percent of its estimated losses from casualties. FAA has added this amount to the estimated losses from casualties to calculate the total MPL. We reported in 2012 that the average third-party MPL value for active launch licenses, and thus the average amount of insurance coverage required for commercial launches, was about $99 million, with a range of about $23 million to $267 million. According to FAA, it issued five active licenses in 2016, which had an average third-party MPL of about $51 million, and ranged from $10 million to $99 million. 
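The previous methodology described above reduces to a short chain of multiplications. The Python sketch below walks through that arithmetic with purely illustrative inputs; the debris area and population density are assumptions chosen for the example, not figures from any actual license.

```python
def mpl_previous_method(casualty_area_sq_mi, max_pop_density_per_sq_mi,
                        cost_per_casualty=3_000_000):
    """Sketch of FAA's previous MPL calculation as described in the report.

    All input values are illustrative assumptions, not values from any
    actual launch license.
    """
    # Steps (1)-(3): direct casualties = casualty-producing debris area
    # multiplied by the maximum population density of the selected area.
    direct = casualty_area_sq_mi * max_pop_density_per_sq_mi
    # Secondary casualties (fires, collapsing buildings) = 150% of direct.
    secondary = 1.5 * direct
    total_casualties = direct + secondary
    # Casualty losses at the fixed $3 million cost-of-casualty amount.
    casualty_losses = total_casualties * cost_per_casualty
    # Property damage estimated as 50% of casualty losses.
    property_losses = 0.5 * casualty_losses
    return casualty_losses + property_losses

# Illustrative example: 0.5 sq mi of casualty-producing debris area over
# an area with 40 people per sq mi -> 20 direct, 50 total casualties.
print(mpl_previous_method(0.5, 40))  # 225000000.0, i.e., a $225 million MPL
```

Under this sketch, the fixed 150 percent and 50 percent factors mean the total MPL is always 3.75 times the direct-casualty count times the cost-of-casualty amount, which is why the choice of scenario and the $3 million figure dominate the result.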
FAA has revised its MPL calculation methodology to address some identified weaknesses, but because another identified weakness remains unaddressed, the current methodology may expose the government to excess risk. FAA-contracted experts and others have found that FAA’s estimates of the number of casualties have tended to be too high, that estimates of losses from property damage may have been too high in some cases and too low in others, and that the $3 million cost-of-casualty amount was likely too low because it is based on outdated information. FAA implemented a revised process for estimating the number of casualties and reduced the 50 percent factor it uses to estimate losses due to property damage by half, and these revisions have tended to reduce insurance requirements, with some exceptions. However, FAA has not addressed the identified weakness of an outdated cost-of-casualty amount, which may indicate that FAA is not requiring launch companies to have insurance coverage for losses that can be reasonably expected and therefore may be exposing the government to excess risk. FAA-contracted experts and others have identified weaknesses in the three primary elements of the MPL calculation. An FAA contractor, ACTA Inc., reported to FAA in 2005 that FAA’s method for estimating the number of casualties produced numbers of casualties that were too high. ACTA found that the scenario that FAA based its casualty estimates on—that the inert debris resulting from the self-destruction of the launch vehicle would land on the area in the vicinity of the launch site with the highest population density—was implausible. In other words, if a launch vehicle’s self-destruct mechanism were triggered as a safety measure, the resulting debris could not reach these population areas because the vehicle would be destroyed before it could reach them. For a vehicle to reach the maximum populated area, the vehicle’s self-destruct system would have to fail. 
An ACTA official said that under more realistic scenarios for losses from launch accidents that have no less than a 1 in 10 million chance of occurring, the inert debris caused by the self-destruction of a launch vehicle would likely land on less densely populated areas, and thus the estimated number of casualties would be lower in most cases. FAA officials that we spoke with confirmed that their method for estimating the number of casualties was not as reasonable or realistic as it could have been, and that it was generally too conservative. ACTA also reported to FAA in 2006 that FAA’s assumption that secondary casualties would be 150 percent of direct casualties was very conservative. ACTA also found two weaknesses in FAA’s method for estimating losses from property damage as 50 percent of losses from casualties, one that could lead to overestimates and one that could lead to underestimates. First, if a launch accident affected a residential area, FAA’s estimate of losses from property damage would likely be too high because residential structures have relatively low values compared to losses from casualties. Second, as ACTA reported in 2007, in some accidents the number of casualties may be low but property losses could still be very large, in which case FAA’s estimates of losses from property damage would be too low. For example, a launch vehicle could strike an unoccupied structure that is very expensive, such as a neighboring launch complex. In addition, ACTA and GAO have found that basing the cost-of-casualty amount on outdated information is a weakness that indicates that the $3 million amount is likely too low. ACTA reported to FAA in 2006 that the $3 million cost-of-casualty amount was probably too low, and that data at that time suggested a more accurate value could be as much as three times higher. 
In a 2012 report on commercial space launches, we found that because FAA’s $3 million cost-of-casualty amount had not changed since FAA began using it in 1988, it may not adequately represent the current cost of liability for injury or death caused by commercial space launch failures. Based in part on this finding, we recommended that FAA reassess its maximum probable loss methodology—including assessing the reasonableness of the assumptions used. Subsequently, FAA contracted with the Science and Technology Policy Institute (STPI) in 2015 to study the damages awarded in judgments and settlements for casualties in airplane crashes, as well as other data that might inform an updated cost-of-casualty estimate. While STPI was limited in the amount of data it could access, as we discuss later in more detail, STPI concluded in 2016 that FAA’s cost-of-casualty amount should be increased based on its analysis of the data it collected. STPI also reported that this conclusion was unanimously confirmed in its interviews with industry experts. STPI’s study indicated that a cost-of-casualty amount of approximately $6 million per casualty might be appropriate, but the study did not make a recommendation of what amount FAA should use. The combined impact of these issues on the amount of insurance coverage that launch companies are required to purchase is unclear. While FAA contractors have identified some weaknesses that likely overstate MPL values and some weaknesses that likely understate MPL values, they have not reported the magnitude of the effects of these weaknesses on insurance requirements. Further, because some weaknesses likely overstate MPL values, while others likely understate MPL values, to some extent the effect of one may offset the effect of another (see fig. 1). 
In April 2016, FAA implemented a new MPL calculation methodology that incorporates revisions to the processes for estimating the number of casualties and losses due to property damage to address the weaknesses identified in these elements of the MPL calculation. Estimating the number of casualties. In 2016, ACTA completed the design of a method for estimating the number of casualties that uses computer software to simulate a range of possible launch accidents that are intended to be more realistic than the scenario used in FAA’s previous method. FAA officials stated that FAA has used the revised method to calculate MPL values since April 2016. FAA’s revised method for estimating the number of direct casualties in the MPL calculation uses additional data and modeling software to simulate more realistic accident scenarios. The data used in FAA’s previous method were a list of potential debris for each launch vehicle, which was supplied by the launch company, and the population densities of areas near the launch site. The revised method uses additional vehicle launch data, such as launch trajectory and fuel type, as well as failure rates for different phases of flight and types of failures. FAA uses software known as the Range Risk Analysis Tool to create physics-based simulations of possible accidents using these data, and it assigns each simulated accident a probability of occurrence based on the failure rates of the different elements of the launch vehicle. Based on the types of debris that are simulated, where the debris are predicted to fall, and population data, the software estimates a number of direct casualties for each simulated accident. FAA officials told us that FAA also revised how it incorporates secondary casualties into its MPL calculation. In each simulated accident, secondary casualties from inert debris and explosive debris are estimated separately. 
Secondary casualties from inert debris are assumed to be 25 percent of direct casualties from inert debris, while secondary casualties from explosive debris continue to be estimated as 150 percent of direct casualties. FAA simulates millions of launch accidents with different probabilities of occurrence and records the number of casualties that result in each simulation. Taken together, the estimated numbers of casualties create a “risk profile” of the launch, which is a representation of the estimated number of casualties that would occur for accidents with a range of probabilities of occurrence, as shown in figure 2. FAA then uses the number of casualties that are estimated to have a 1 in 10 million chance of occurring in its MPL calculation. FAA officials stated that the revised methodology generally reduces the number of casualties estimated and ultimately the amount of insurance coverage required. FAA officials said that they calculated MPL values with both the revised method and the previous method for some recent launches to compare the results. FAA officials noted that in these cases the revised method generally estimated lower numbers of casualties than the previous method, although there were exceptions. ACTA reported while developing the revised methodology for estimating casualties that it consistently produced lower MPL values than the previous method. Estimating losses from property damage. FAA has revised the factor it uses to estimate losses from property damage in the MPL calculation and is also testing a new process. FAA officials stated that they have decreased estimates of property damage losses from 50 percent of losses due to casualties to 25 percent. 
FAA made this revision because it is testing a new process for estimating losses from property damage that was also designed by ACTA, and officials said that in test applications, this process has produced estimates of property damage losses that are lower than 25 percent of losses due to casualties. As such, they said that they believe that the lower property damage factor is still conservative but more realistic than the previous estimates. As of January 2017, FAA had not determined whether it will use the new process that it is testing in future MPL calculations, or continue to base estimates of property damage on losses from casualties. The process that ACTA designed for estimating losses from property damage is intended to be integrated with the software tool that is now used to estimate the number of casualties in the MPL calculation. This revised process estimates losses from property damage using the same simulated launch accidents that are used to estimate the number of casualties. Property damage estimates are based on damage models that simulate the effect of inert and explosive debris impacting different types of structures, such as residential and commercial. FAA officials have stated that they have begun to test this revised process but have not yet implemented it in MPL calculations. These officials said that they have not determined whether the new process is necessary because the impact of property damage on the total MPL value is relatively minor, and continuing to use a simpler method may be a more effective use of limited FAA resources. However, an ACTA official noted that in some cases losses from property damage can be the most significant contributor to the total MPL value, and raised concerns about continuing to calculate losses from property damage as a factor of losses due to casualties. 
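Taken together, the revised approach replaces the single worst-case scenario with many simulated accidents read off a risk profile. The sketch below is a highly simplified, hypothetical stand-in for that process: the accident distributions and per-accident probabilities are invented for illustration (FAA's actual Range Risk Analysis Tool runs physics-based simulations of trajectories, debris, and failure modes), but the structure mirrors the report's description, with secondary-casualty factors of 25 percent for inert and 150 percent for explosive debris, a 1-in-10-million exceedance threshold, and property losses at 25 percent of casualty losses.

```python
import random

TARGET_PROBABILITY = 1e-7  # "no less than a 1 in 10 million chance of occurring"

def simulate_accidents(n, seed=0):
    """Hypothetical stand-in for FAA's physics-based accident simulations.

    Each simulated accident gets an assumed probability of occurrence and
    an estimated casualty count; the distributions here are invented for
    illustration only.
    """
    rng = random.Random(seed)
    accidents = []
    for _ in range(n):
        probability = (1e-4 / n) * rng.uniform(0.5, 1.5)
        direct_inert = rng.expovariate(1.0)      # direct casualties, inert debris
        direct_explosive = rng.expovariate(0.5)  # direct casualties, explosive debris
        # Revised secondary-casualty factors described in the report:
        # 25% of direct for inert debris, 150% of direct for explosive debris.
        casualties = direct_inert * 1.25 + direct_explosive * 2.5
        accidents.append((probability, casualties))
    return accidents

def casualties_at_target(accidents, target=TARGET_PROBABILITY):
    """Read the risk profile: the casualty count whose chance of being
    equaled or exceeded is the target probability."""
    cumulative = 0.0
    for probability, casualties in sorted(accidents, key=lambda a: -a[1]):
        cumulative += probability
        if cumulative >= target:
            return casualties
    return 0.0

def mpl_revised_sketch(accidents, cost_per_casualty=3_000_000):
    casualty_losses = casualties_at_target(accidents) * cost_per_casualty
    # Property damage now estimated as 25% (down from 50%) of casualty losses.
    return casualty_losses * 1.25

accidents = simulate_accidents(100_000)
print(f"Sketch MPL: ${mpl_revised_sketch(accidents):,.0f}")
```

Because the risk profile is built from accidents of widely varying severity rather than a single worst case, the casualty count read off at the 1-in-10-million threshold is typically much lower than the maximum-density scenario would produce, which is consistent with the report's observation that the revised method generally lowers MPL values.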
FAA has not addressed the weakness identified in the cost-of-casualty amount used in the MPL calculation, and, as of January 2017, it had not determined when it would do so. FAA officials said that they have identified potential steps to address the outdated data on which the cost-of-casualty amount is based, which may include revising the amount. However, FAA’s potential steps to address the outdated data are not fully developed, and FAA has not established time frames for taking action. FAA officials said that their first step would be to evaluate more current information to form the basis for revising the cost-of-casualty amount. However, FAA has faced challenges in identifying reliable information because each of the sources that its contractor reviewed had significant limitations. Airplane crash damages. FAA and STPI both noted that the preferred method for updating the cost-of-casualty amount would be to base it on legal judgments and settlements from casualties in airplane crashes, given that there have not been any commercial space launch accidents that have resulted in casualties. However, STPI reported that it was only able to access very limited information on settlement awards from airplane crashes. As a result, STPI said it could not make a reliable estimate of the average loss per casualty based on this information because it was not a representative sample of all awarded damages and the damages awarded varied substantially. Federal agency regulatory analysis. STPI also reviewed estimates of the value of a “statistical life” that federal agencies use in the analysis of proposed regulations as a possible basis for the cost-of-casualty amount. However, FAA officials stated that this method is not suitable because these estimates are based on people’s willingness to pay for safety, and the estimates do not necessarily reflect the losses from casualty settlements or legal judgments that would be expected from commercial space launch accidents. 
Inflation adjustment. The final method for updating the cost-of-casualty amount that STPI reviewed was to simply adjust the existing cost-of-casualty amount for inflation using the Consumer Price Index. However, FAA officials noted that they do not know whether settlements and judgments for casualties have increased at the same rate as inflation, and thus an inflation-adjusted amount may be too high or too low. FAA officials said they are still considering how to overcome these challenges. FAA officials said that they are not planning to make additional attempts to access insurance data on airplane accident damage awards at this time, because STPI considered enough options for collecting these data that they believe additional attempts would be unproductive. Officials said they plan to use the information collected by STPI, despite its limitations, as well as any additional information the agency may gather, to reach agreement within the agency for revising the basis for the cost-of-casualty amount, though officials do not have a detailed methodological approach. Once FAA has developed a revised basis for the cost-of-casualty amount within the agency, officials said their next step would be to propose this amount for public comment. Officials said that this step is necessary to obtain input on whether and how to revise the amount and help ensure that the revised amount would not place too much financial burden on launch companies, thus disrupting the industry. FAA officials said they may propose a revised cost-of-casualty amount in the Federal Register or use other methods to request public input on the proposal. For example, officials said they may seek input on a proposed amount from FAA’s committee of industry advisors, the Commercial Space Transportation Advisory Committee. However, agency officials have not yet determined how to obtain public input or identified specific time frames for proposing a revised cost-of-casualty amount. 
Federal internal control standards require that agency management identify, analyze, and respond to risks related to achieving the entity’s objectives, and use current data. These standards also require that agency management define how to achieve objectives and the time frames for achieving them. However, while FAA has hired a contractor to study the cost-of-casualty issue, it has not responded to the risk presented by using outdated data as the basis of the cost-of-casualty amount. Further, because FAA’s contractors have concluded that the cost-of-casualty amount is likely too low, the current MPL calculation may not account for all damages to third parties and federal government property and personnel that can reasonably be expected to result from a launch accident, as required by FAA regulations. An MPL methodology that does not account for all damages that can reasonably be expected could cause the government to be liable for some of those damages. This would not align with the mandated considerations of the FAA review required by the U.S. Commercial Space Launch Competitiveness Act (CSLCA), which includes helping to ensure that the federal government is not exposed to liability risk for more damages or losses than can be reasonably expected or intended. To achieve this purpose, Congress directed the Department of Transportation, which includes FAA, to determine whether the MPL calculation needs to be revised and to develop a plan for any necessary revisions by May 2016. However, FAA’s identified steps to update the cost-of-casualty amount remain incomplete because the agency has not prioritized this issue. FAA officials said that they have prioritized other work, such as reviewing launch license applications, ahead of addressing the weakness in the cost-of-casualty amount. They also noted that they did not want to delay the implementation of other revisions in the MPL methodology while they reviewed the cost-of-casualty issue, indicating that those revisions were also a higher priority. 
Although FAA has faced challenges in accessing sufficient data to use as a basis for updating the cost-of-casualty amount, by not prioritizing this weakness FAA may be exposing the federal government to excess risk. By continuing to use the $3 million cost-of-casualty amount in its MPL calculation methodology that we and others have noted is outdated, FAA may not be requiring launch companies to have sufficient insurance to cover all losses that can be reasonably expected. For example, if a cost-of-casualty amount based on more current data were set twice as high as the existing $3 million amount, then industry insurance requirements would cover only half of all losses that could reasonably be expected (see table 1). If launch companies’ insurance requirements do not cover all reasonably expected losses, the federal government will be exposed to more risk than intended under the indemnification regime and may be liable for some damages that should be covered by the launch company’s insurance in the case of a launch or reentry accident. FAA’s mission includes promoting the development of the commercial space launch industry as well as managing risk to the public and the federal government. FAA has taken steps to address weaknesses in some parts of its MPL calculation, which have tended to reduce the amount of insurance coverage that launch companies are required to have. However, FAA has not yet addressed the weakness identified in the $3 million cost-of-casualty amount and does not yet have a fully developed plan to do so, which would include time frames for taking action. While there is substantial uncertainty in the MPL calculation, the use of outdated data as the basis of the cost-of-casualty amount represents a risk that the current MPL calculation may not account for damages to third parties and federal property and personnel that can reasonably be expected from a launch accident, as required by FAA regulations. 
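The arithmetic behind this doubling example can be sketched directly. In the sketch below, the casualty count and the $6 million figure are illustrative assumptions (the latter echoes the level STPI's study indicated, not an amount FAA has adopted), and the property damage factor is omitted for simplicity:

```python
def coverage_shortfall(expected_casualties,
                       current_cost=3_000_000, updated_cost=6_000_000):
    """Compare insurance required under the current $3 million
    cost-of-casualty amount with losses reasonably expected if the
    amount were doubled, per the report's hypothetical example."""
    required_insurance = expected_casualties * current_cost
    reasonably_expected = expected_casualties * updated_cost
    # Losses above the required insurance fall outside the first tier.
    uncovered = reasonably_expected - required_insurance
    return required_insurance, reasonably_expected, uncovered

# Illustrative example with 50 reasonably expected casualties.
required, expected, gap = coverage_shortfall(expected_casualties=50)
print(required, expected, gap)  # 150000000 300000000 150000000
# Insurance computed with the outdated amount covers only half of the
# losses that could reasonably be expected; the gap is borne elsewhere.
```

Because the required insurance scales linearly with the cost-of-casualty amount, any understatement of that amount translates one-for-one into expected losses that fall outside the first tier of coverage.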
As a result of this unaddressed weakness in the cost-of-casualty amount, FAA may not be requiring launch companies to hold enough insurance, which may expose the government to more risk than intended. To help ensure that the government is not exposed to more liability risk than intended, the Secretary of Transportation should ensure that the FAA Administrator prioritizes the development of a plan to address the identified weakness in the cost-of-casualty amount, including setting time frames for action, and update the amount based on current information. We provided a draft of this report to the Department of Transportation for its review and comment. We also provided relevant excerpts to the agency’s contracted expert ACTA Inc. for technical comment. The Department of Transportation did not comment on the findings or recommendation, but provided technical comments that we have incorporated into the report, as appropriate. ACTA Inc. also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees and the Secretary of the Department of Transportation. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions or would like to discuss this work, please contact Alicia Puente Cackley at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix I. In addition to the contact named above, Jill Naamane (Assistant Director), Jeremy Conley (Analyst-in-Charge), Stephen Robblee, Jessica Sandler, Jennifer Schwartz, Joseph Silvestri, Jena Sinkfield, and Shana Wallace made key contributions to this report.
To assist in the development of the commercial space launch industry, the federal government shares liability risks for losses from damages to third parties or federal property. This risk-sharing arrangement requires space launch companies to have a specific amount of insurance for damages to third parties and federal property. The federal government is potentially liable for third-party claims above that amount, up to an estimated $3.1 billion in 2017, subject to appropriations. The Commercial Space Launch Competitiveness Act enacted in 2015 required the Department of Transportation—of which FAA is a part—to study the methodology used to determine launch companies' insurance requirements. The law also contains a provision for GAO to evaluate the study's conclusions and any planned revisions. This report discusses the extent to which FAA has revised its methodology for calculating insurance requirements to address previously cited weaknesses and the potential effect of any changes on financial liabilities for the government. GAO reviewed documents from FAA and its contractors on alternative methods for calculating insurance requirements, interviewed FAA officials and a contractor involved in designing alternative methods, and reviewed GAO's prior work and relevant laws. The Federal Aviation Administration (FAA) has revised its method for calculating insurance requirements to address some known weaknesses. FAA is the part of the Department of Transportation that determines the amount of insurance that commercial space launch companies must purchase to cover damages from accidents that harm third parties—that is, the uninvolved public—or federal property and personnel, unless companies otherwise demonstrate sufficient financial resources to cover the same calculated damages. The amount of insurance required is based on FAA's calculation of the maximum loss that can be reasonably expected. 
FAA contractors found the following: FAA's estimates of the number of casualties (serious injuries and deaths) that could result from a launch accident have likely been too high, and have been based on an unrealistic scenario; FAA's estimates of losses due to property damage may be too high in some cases, and too low in others; FAA's estimate of the average cost of a casualty—referred to as the cost-of-casualty amount—is based on outdated information and is likely too low. The amount has been fixed at $3 million since 1988. FAA implemented a new method for estimating the number of casualties in April 2016 that uses computer software to simulate a range of possible launch accidents that are intended to be more realistic than FAA's previous scenarios. FAA has also reduced the factor it uses to estimate losses due to property damage, based on tests of a new process for estimating such losses that showed the previous factor was too high. Both of these revisions have tended to reduce insurance requirements. In addition, FAA assigned one of its two contractors examining elements of the methodology to study potential improvements in estimating average casualty losses, but that contractor found significant limitations in each alternative approach that it reviewed. Because FAA has not yet addressed the identified weakness in the cost-of-casualty amount used in its calculation, the federal government may be exposed to excess risk. FAA has identified potential steps to update the information the cost-of-casualty amount is based on, including seeking public input on whether and how to revise the amount, but the agency does not have a complete plan for updating the cost-of-casualty amount. Federal internal control standards require that agency management respond to risks related to achieving the entity's objectives, define how to achieve objectives, and set time frames for achieving them. 
FAA has not responded to the risk identified in using outdated data as the basis of the cost-of-casualty amount because FAA has prioritized other work, such as reviewing launch license applications, ahead of this issue. Further, because the weakness in the cost-of-casualty amount indicates that the amount is likely too low, the current calculation may not account for damages to third parties and federal property and personnel that can reasonably be expected from a launch accident, as required by FAA regulations. By leaving this weakness unaddressed, FAA's insurance requirements may not account for damages that can be reasonably expected, and may expose the government to more liability risk than intended under the risk-sharing arrangement. FAA should prioritize planning for addressing the identified weakness in the cost-of-casualty amount and update the amount based on current information. The agency did not comment on this recommendation.
Over the years, several methods for adulterating juice have been used. Adulteration ranges in sophistication from simply diluting juice with water to adding beet sugar, the adulterant that is most difficult to detect. Introducing these ingredients is not illegal; however, knowingly selling the resulting product as pure juice constitutes fraud. Processors can increase their margin of profit or undercut competitors’ prices to increase sales by adulterating juice and selling it as 100-percent-pure juice. Although these types of adulteration provide an economic advantage (and are therefore referred to as economic adulteration), they pose little threat to the public’s health and safety. The nutritional benefits of adulterated juices are generally similar to those of their pure counterparts, and the adulterated products are usually considered harmless except for customers who are allergic to a substituted ingredient. Orange juice and apple juice are the most widely purchased juices for the school meal programs, as well as for consumption nationwide. For example, orange juice represents almost 45 percent of the fruit juice served by schools. For the 1994 school year, schools purchased over 98 percent of the juice they served directly from vendors, obtaining the remainder through one of the U.S. Department of Agriculture’s (USDA) commodity distribution programs. Two federal agencies—the U.S. Department of Health and Human Services’ Food and Drug Administration (FDA) and USDA’s Agricultural Marketing Service (AMS)—have primary responsibility for ensuring the quality and safety of the fruit juice served in the federal school meal programs. A third federal agency, USDA’s Food and Consumer Service, is responsible for setting minimum nutrient requirements for the meals served in the school meal programs. In addition, the Department of Justice (Justice) prosecutes companies and individuals suspected of adulterating fruit juice products. 
FDA has oversight and regulatory responsibility for domestic and imported food products sold in interstate commerce. To ensure that foods are safe, wholesome, and honestly labeled, FDA monitors the food industry, including fruit juice processors, by periodically inspecting production facilities and occasionally sampling and testing products. FDA’s standards identify the sweeteners that may be added and specify certain labeling requirements and maximum levels of water (expressed as minimum solid contents) that juice may contain. FDA investigates companies suspected of violating these standards and refers cases to Justice for criminal fraud prosecutions. AMS has responsibility for inspecting and grading food products. The agency grades the quality of juice on the basis of such factors as appearance, color, flavor, aroma, and defects, as well as the level of water, acid, and oils in the juice. AMS grades products sold to USDA’s commodity programs and provides fee-for-service inspections to companies that want AMS to certify other food products. However, there is no federal requirement that AMS inspect or grade the juice that schools purchase directly from vendors for the school meal programs. The Food and Consumer Service has responsibility for administering the child nutrition programs sponsored by USDA. The agency subsidizes the cost of school meals and sets nutritional standards for the meals. These standards require schools to serve fruit, vegetables, or pure fruit juice on a regular basis. Although schools may serve “juice drinks” that are less than 100 percent juice, only pure juice meets the standards for nutrients established by the Food and Consumer Service. For example, to satisfy the school breakfast requirements, schools may serve a 1/2-cup portion of 100-percent-pure juice. Serving a 1/2-cup portion of 50-percent-pure juice would satisfy only half of the breakfast requirements. 
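The Food and Consumer Service's pro-rata purity rule described above can be expressed as a small calculation. This is a sketch for illustration only; the function name and default portion size are ours, not USDA's:

```python
# Sketch: how much of a meal's juice requirement a serving satisfies
# under the pro-rata rule described above. Only the pure-juice content
# of a serving counts toward the requirement.
def requirement_satisfied(portion_cups: float, purity_pct: float,
                          required_cups: float = 0.5) -> float:
    """Return the fraction of the juice requirement met by a serving."""
    pure_juice_cups = portion_cups * (purity_pct / 100.0)
    return pure_juice_cups / required_cups

# A 1/2-cup portion of 100-percent-pure juice meets the full requirement.
print(requirement_satisfied(0.5, 100))  # 1.0
# A 1/2-cup portion of 50-percent-pure juice meets only half of it.
print(requirement_satisfied(0.5, 50))   # 0.5
```

As the report notes, a school serving 50-percent-pure juice would have to double the portion to meet the same breakfast requirement.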
The agency also distributes federal commodities, including a relatively small amount of juice, to schools. To protect the government’s interests in the event that a vendor has been convicted of fraudulent practices, such as juice adulteration, the Food and Consumer Service has the authority to administratively suspend for up to 1 year or debar for up to 3 years a company or individual from selling to the government or government programs, such as the school meal programs. Justice’s Office of Consumer Litigation and U.S. Attorneys’ offices prosecute fruit juice processors for violating federal fraud statutes. The Office of Consumer Litigation also forwards information on convicted companies and individuals to the Food and Consumer Service for possible debarment actions. The extent to which fruit juice purchased under the federal school meal programs is adulterated cannot be determined precisely at this time. Generally, AMS’ and FDA’s inspections are not designed to detect economic adulteration, and current tests cannot effectively detect adulteration at levels below 10 percent. In addition, schools do not take steps to determine whether the juice they purchase is adulterated. Although comprehensive data are not available, industry officials believe that the adulteration of apple juice is insignificant. However, on the basis of the testing that has been conducted for orange juice, FDA, USDA, the Florida Department of Citrus (FDOC), and private laboratories estimate that 1 to 20 percent of the supply is adulterated. AMS’ inspections are designed to grade the product, and FDA’s inspections are designed primarily to identify unsanitary conditions in food-processing plants. These agencies do not routinely inspect all U.S. juice-processing plants, which number over 500. Most inspections do not include tests for adulteration. Moreover, the tests for adulteration performed by these agencies and by private laboratories have limitations, and most are expensive. 
AMS’ inspections are not designed to detect adulteration, but rather to grade products in accordance with AMS’ standards. Lot inspections, which look only at juice and not at a plant’s operations, can allow illegal activities, such as adulteration, to occur in the plant without being observed by the inspector. According to AMS officials, even during an in-plant inspection, which looks at both juice and a plant’s operations, an inspector can identify adulteration only if a company is blatant in its actions and the inspector observes unusual piping arrangements or substances commonly used as adulterants. AMS’ inspections are mandatory only for juice produced for USDA’s commodity distribution programs and for juice processed in Florida. Therefore, many juice processors in the United States do not have their products inspected by AMS. In fiscal year 1995, for example, AMS inspected the operations and juice products of 130 plants. (See app. I for additional information on AMS’ inspection services and costs.) AMS officials told us that the routine tests done to grade juice products, such as determinations of acid levels and solid contents, might identify some potential adulteration. However, these officials emphasized that such tests are not designed to identify potential adulteration. In a special agreement with the state of Florida, AMS also tests frozen concentrated juice coming into the state for economic adulteration, among other things. However, this juice accounts for only 15 percent of the juice processed in Florida. FDA’s inspections of juice plants are likewise not specifically designed to detect the economic adulteration of juice. These inspections are instead done primarily to determine if a juice plant engages in good manufacturing practices. As a result, such inspections focus on sanitary conditions at the plant, as determined primarily by the inspector’s visual observations. Furthermore, FDA does not inspect all juice plants each year.
From 1992 to 1994, FDA inspected no more than 20 percent of the nation’s over 500 juice plants in any one year. In fiscal year 1992, FDA (and its contractors) inspected 77 juice plants; in fiscal year 1993, 70 plants; and in fiscal year 1994, 102 plants. Although FDA can test juice for adulteration at several of its laboratories, FDA officials told us that juice samples are not routinely collected for analysis. If an inspector observes something suspicious that might indicate a product is being adulterated, products may be selected for laboratory analysis to determine whether they comply with FDA’s standards for fruit juice. For example, adulteration was detected by one inspector who happened to observe an employee adding other ingredients to orange juice. Even the presence of inspectors in the plant during processing does not preclude adulteration. Two companies convicted of adulterating juice since the mid-1980s had their juice inspected by AMS and FDA while they were adulterating it. One company had been adulterating juice while AMS was inspecting its plant and certifying its juice as USDA Grade A. The adulteration went undetected because it occurred at night when the AMS inspector was not in the plant. The company also passed an inspection by FDA during the same period. Another company was also inspected by AMS and FDA but evaded detection by elaborately modifying the structure of its plant to hide a sophisticated piping system through which beet sugar was added to orange juice. With today’s testing technology, it is not possible to detect all adulterants in juice. Most experts agree that current tests cannot effectively detect adulteration levels below 10 percent. Tests that examine the sugars in the juice to determine their authenticity appear to be the most sensitive. These tests can generally detect adulteration rates as low as 10 to 20 percent. 
Government and industry experts agree that a battery of tests is needed to verify that nothing has been added to or substituted for pure juice. The costs of analyzing juice samples for adulteration range from $15 for a basic test to identify dilution with water, to $700 for a test to identify the presence of pure beet sugar. A battery of tests to detect adulterants in orange juice and apple juice, excluding the most sophisticated test to detect pure beet sugar, can cost from $450 to $800. Currently, only a few government and private laboratories in this country can conduct such a complete analysis for authenticity, and the most sophisticated test used to detect beet sugar is not currently available in the United States. (See app. II for additional information on tests and costs.) Few if any school districts take steps to ensure that the juice they purchase is free from adulterants. Food and Consumer Service officials told us that schools rely on government regulators and assume that juice processors are complying with federal and state laws. Sixteen of the 18 school districts we contacted required 100-percent-pure juice to meet the Food and Consumer Service’s nutritional standards, but none of these districts took additional steps to ensure that the juice they purchased met this specification. Instead, the schools relied primarily on the integrity of the vendor and the product label. Six districts also required that their juice be graded by AMS to further ensure its quality. Estimates of the extent to which juice is adulterated vary according to the kind of juice and the type of customer. Government and industry experts said that in their experience, adulteration rates for apple juice are insignificant. However, estimated rates of adulteration for orange juice (based on limited testing by government and private laboratories) range from a low of 1 percent for juice sold in the retail market to as high as 20 percent for juice sold to institutions, such as schools. 
The lower end of this range comes from retail and institutional testing by AMS’ and FDOC’s laboratories. For example, FDOC found that about 1 percent (16 out of 2,503) of the juice samples that it tested from August 1992 to April 1995 were adulterated. The higher end of the range comes from testing by private laboratories and FDOC of juice sold in the institutional market. For example, FDOC found, under its program for monitoring adulteration, that from August 1992 to April 1995, 18 percent of the samples destined specifically for the institutional market were adulterated. Government and industry experts believe that the institutional market, which is generally the source of juice for the school meal programs, is more vulnerable to adulteration than the retail market because juice is less likely to be tested in this market. (See app. III for more information on estimates.) According to government and industry experts, the incentive for adulteration increases when the price of orange juice rises. FDOC’s monitoring program found, for example, that when juice supplies were down because of the freezes during the early 1980s, orange juice was extensively adulterated with pulpwash and diluted with water. In 1981, 50 percent of the samples analyzed in accordance with Florida’s standards were adulterated with pulpwash, and in 1984, 32 percent were diluted with water. FDOC officials attributed the high adulteration rates to the reduced supplies of frozen concentrated orange juice and higher prices, which created an economic incentive to adulterate orange juice. In contrast, the supply of concentrated orange juice is currently high, and the prices for it are low. Industry figures show that the total volume of domestically produced and imported orange juice increased by 31 percent from 1990 to 1994. During the same period, the annual average price of frozen concentrated orange juice decreased by 41 percent. 
Hence, FDOC officials believe that the incentive to adulterate has decreased and the level of adulteration is comparatively low. Since the mid-1980s, Justice has successfully prosecuted six of seven fruit juice adulteration cases. These successful actions resulted in fines and settlements of more than $11 million and prison sentences for company employees of up to 104 months, as table 1 shows. Collectively, 32 employees were convicted of felonies or misdemeanors. Federal prosecutors believe that the fruit juice industry is less subject to fraud today than it was 10 years ago because of the publicity surrounding these prosecutions and the significant prison sentences imposed by the courts. Estimates of the magnitude of the fraud associated with the individual prosecuted cases ranged from about $2 million to $37 million and represent the difference between the processor’s costs for pure juice and for adulterated juice. However, these estimates may understate the true magnitude of the fraud because they are based on available company records, which may be incomplete. For example, prosecutors said the adulteration scheme in one case probably began in 1979, but the $10.3 million estimate of the fraud’s magnitude was based on company records that were available only from 1985. In addition, since prosecutors base these fraud estimates on reduced costs to producers, the ultimate impact on customers is not known. Instead of using differences in retail prices, prosecutors use differences in the cost of ingredients to estimate the magnitude of the fraud because they generally lack complete information on who purchased the juice and at what price. Prosecutors said the impact on customers is further obscured by the widespread practice of repackaging and distributing adulterated juice products under several brand names at different prices. 
For example, adulterated concentrated juice in one fraud case was sold over a period of several years under more than 20 brand names in at least 29 states. The Food and Consumer Service has initiated debarment actions against companies or former employees in four of the six successfully prosecuted cases. Collectively, it has debarred or is debarring 21 individuals associated with these cases and all three companies that remain in operation. Debarment actions were not taken in two cases prosecuted before the agency was authorized to take such actions in March 1989. Government and industry officials identified two primary options for enhancing the detection of adulterated fruit juice: conducting in-plant inspections and instituting a juice-testing program. Officials agree that inspections alone, no matter how comprehensive, cannot effectively identify juice adulteration. Most experts believe that juice testing, used in conjunction with definitive purchasing specifications for juice, would enhance the ability of the federal government and school districts to detect adulterated juice. Both of these options, however, would be costly. Industry officials have proposed that all processors selling juice to schools for the federal school meal programs be subject to in-plant inspections by USDA. Under these inspection programs, inspectors would be located in the plants to observe processing and packaging operations and to sample juice for use in grading the product. The presence of such an inspector might serve as a deterrent by making it more difficult for adulterators to receive shipments of adulterants, store them on the premises, and add them to the juice during processing. However, as previously mentioned, in-plant inspections are costly and do not always detect adulteration. USDA’s inspections would be labor-intensive and, according to the agency, would require it to hire at least 30 to 40 more staff.
These inspections, charged to the juice processor, cost about $40 per hour. For example, USDA’s in-plant inspections currently cost one plant selling a medium volume of juice (i.e., about 250,000 gallons) destined for schools about $8,538 per year. Some industry and government officials believe the added costs would place an unfair burden on small juice companies with fewer resources. A portion, if not all, of these added costs would likely be passed on to the school districts in the form of higher juice prices. Alternatives to observations made by in-plant inspectors are systematic or risk-based programs for testing fruit juice sold to schools. Many experts believe that school purchase contracts with definitive specifications for juice, combined with either of these two forms of testing, would reduce the likelihood of schools’ purchasing adulterated juice. Systematic and risk-based juice-testing programs differ significantly in that systematic programs test a set number of samples at a set frequency, while risk-based programs vary the number of samples and the frequency of the testing with the estimated risk. Since the number of samples and the frequency of the testing can be reduced when the risk is thought to be low, programs based on specific risk factors tend to be less expensive than systematic programs. Under a risk-based testing program, school purchase contracts would include definitive specifications for juice and a provision for random testing that would form the basis for rejecting substandard juice. Such an approach would call for increasing the frequency of sampling and testing when certain high-risk factors were present. High-risk factors could include bids that were significantly below market; suspicious results from federal, state, or other monitoring programs; referrals from industry; or unusually high prices for concentrated juice that would presumably increase the economic incentive to adulterate juice. 
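The in-plant inspection figures cited above ($40 per hour, about $8,538 per year for one medium-volume plant) imply a rough inspection workload. The following is a back-of-the-envelope sketch, not USDA's costing method:

```python
# Back-of-the-envelope check on the in-plant inspection costs cited above.
HOURLY_RATE = 40.0    # USDA charge to the juice processor, per hour
ANNUAL_COST = 8538.0  # reported annual cost for one medium-volume plant

implied_hours_per_year = ANNUAL_COST / HOURLY_RATE
implied_hours_per_week = implied_hours_per_year / 52

print(f"{implied_hours_per_year:.0f} hours/year "
      f"(~{implied_hours_per_week:.1f} hours/week)")
# Roughly 213 billed inspection hours per year, about 4 hours per week.
```

A larger plant, or more frequent sampling under a risk-based program, would scale these billed hours (and hence the cost passed on to schools) accordingly.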
Although testing against juice specifications could reduce a school district’s risk of buying adulterated juice, such testing would likely increase the cost of juice significantly. For example, the cost of purchasing fruit juice for the 1993-94 school year in the school districts we examined ranged from $7,600 in a small district to $240,000 in a large district. According to our calculations, the annual cost of testing for that period would have averaged about $6,400 per district if each district had tested only one sample of juice each quarter. Thus, for that 1-year period, the cost of testing in the small district would have been almost 84 percent of the entire cost of purchasing the juice. Many officials believe that if industry were required to incur the cost of testing, this cost would most likely be passed on to schools in the form of higher juice prices. The federal government would also incur additional costs in implementing and administering such testing programs. Such costs would include those that the Food and Consumer Service and FDA would incur in developing a set of definitive juice specifications, disseminating these specifications to the school districts, and educating the districts about the new testing program. Because of the many factors involved in these actions, we did not attempt to determine these costs. We provided copies of a draft of this report to AMS, the Food and Consumer Service, FDA, and Justice. We met with AMS’ Deputy Director, Science Division, and Deputy Director, Fruit and Vegetable Division; the Food and Consumer Service’s Branch Chief, Program Analysis and Monitoring Branch, Child Nutrition Division; and FDA’s Director, Executive Operations and Consumer Safety Officer, Office of Plants, Dairy Foods and Beverages, Center for Food Safety and Applied Nutrition. These officials generally agreed with the factual accuracy of the report and made suggestions for technical revisions, which we incorporated as appropriate. 
We also discussed the report with Justice’s Assistant Director, Office of Consumer Litigation, who said the report presents a balanced picture of the economic adulteration of fruit juice in this country and accurately reflects Justice’s successful prosecution of cases in this area. He also made suggestions for technical revisions, which we incorporated as appropriate. We conducted our work from March through October 1995 in accordance with generally accepted government auditing standards. Details on our objectives, scope, and methodology appear in appendix IV. We are providing copies of this report to the appropriate congressional committees, interested Members of Congress, the Secretaries of Agriculture and Health and Human Services, the Attorney General, and other interested parties. We will also make copies available to others on request. If you have any questions, please contact me at (202) 512-5138. Major contributors to this report are listed in appendix V. The U.S. Department of Agriculture’s (USDA) Agricultural Marketing Service (AMS) offers food producers three types of inspection services, as table I.1 shows. USDA requires that fruit juice processors agree to being inspected by AMS if they want to participate in USDA’s commodity distribution programs. Otherwise, inspections are optional for juice processors. (Table I.1 also gives the number of juice plants inspected in fiscal year 1995 and notes that lot inspections cover only the products sold to the government, not the plant itself.) Several tests are available for detecting the various types of adulterants in fruit juice. The costs of these tests range from $15 to $700. However, the tests generally cannot detect adulteration levels below 10 percent, as table II.1 shows. Comprehensive or statistically valid data are not available on the extent to which orange juice is adulterated, but government and private laboratory officials’ estimates ranged from 1 to 20 percent.
The lower estimates were for orange juice sold to retail customers, and the higher estimates were for orange juice sold to institutions, as table III.1 shows. (Table III.1 presents estimates of the extent of orange juice adulteration from the Florida Department of Citrus (FDOC), the Food and Drug Administration (FDA), and other laboratories; each estimate is testimonial evidence based on laboratory experience.) The Congress mandated in the Healthy Meals for Healthy Americans Act of 1994 that we review the costs and problems associated with the sale of adulterated fruit juice to the school meal programs. Subsequent discussions with your offices refined the scope of the mandate into the following questions: (1) What are the nature and extent of the juice adulteration problem in the federal school meal programs, and can current inspection and testing methods detect adulteration? (2) What recent federal enforcement actions have been taken against juice adulterators, and what was the magnitude of the fraud? (3) What are the options for enhancing the detection of juice adulteration, including mandatory inspection of juice plants that sell their products to the federal school meal programs? To determine the nature and extent of the adulteration problem and the ability of current inspection and testing methods to detect adulteration, we contacted officials from the Food and Drug Administration’s (FDA) Center for Food Safety and Applied Nutrition and Office of Regulatory Affairs, the U.S. Department of Agriculture’s (USDA) Food and Consumer Service and Agricultural Marketing Service, and the Florida Department of Citrus, as well as academic experts and officials from private laboratories involved in testing juice for adulteration.
We also contacted various industry associations, including the National Food Processors’ Association, the Technical Committee for Juice and Juice Products (an independent organization), and the Apple Processors’ Association. We contacted 18 school districts in the 6 states that receive the most federal funding for school meals (California, Florida, Illinois, New York, Pennsylvania, and Texas). We reviewed FDA’s and USDA’s regulatory standards for fruit juice and USDA’s school meal requirements for fruit juice. We also reviewed relevant reports, technical publications on fruit juice testing, and data from FDA and USDA on fruit juice inspections. To determine recent federal enforcement actions taken against companies for adulterating fruit juice, we discussed prosecutions and the debarment of juice adulterators with officials from FDA, USDA, and the Department of Justice. In addition, we reviewed available literature on court cases and case files maintained by the Department of Justice. To determine the various options for detecting adulterated juice, we solicited the opinions of government and industry experts. We discussed these options with officials from FDA, USDA, the Florida Department of Citrus, state education offices, and school districts, and with industry experts, such as members of the Technical Committee for Juice and Juice Products. We did not fully analyze the cost implications of these options. Keith Oleson, Assistant Director; David Moreno, Project Leader; Wayne Marsh, Staff Member; Jon Silverman, Staff Member; Kathy Stone, Staff Member.
Pursuant to a legislative requirement, GAO reviewed the sale of adulterated fruit juice to school meal programs, focusing on: (1) the nature and extent of the problem; (2) whether federal inspection and testing methods can detect juice adulteration; (3) recent federal enforcement actions taken against juice adulterators; and (4) options for enhancing the detection of adulterated juice. GAO found that: (1) juice adulterators have cut costs by adding less expensive ingredients to juice and labeling the product as pure; (2) although most school districts require that the juice they serve be 100 percent pure, they generally rely on the product label and the vendor's integrity to ensure that the juice meets nutritional standards; (3) the extent of juice adulteration is unclear, since juice plant inspections and laboratory tests are not designed to detect adulteration; (4) government and industry officials believe that apple juice adulteration is not a major problem, but as much as 20 percent of the orange juice sold to school meal programs may be adulterated; (5) the Department of Justice has convicted six juice adulterators and the Department of Agriculture has debarred three companies that remain in operation; and (6) in-plant inspections and juice testing programs are potentially effective but costly options for enhancing the detection of adulterated juice.
FBO has responsibility for managing about 11,000 leased properties and 3,000 U.S.-owned properties valued at about $12 billion. These properties, at over 260 locations worldwide, include embassies and consulates, office buildings, detached houses and multi-unit residential buildings, warehouses and garages, and undeveloped land. FBO’s responsibilities include (1) overseeing the acquisition, design, construction, sales, operations, and maintenance of properties and (2) establishing policies and procedures for overseas posts to follow in managing real property programs. Since the early 1960s, we have reported serious deficiencies in FBO’s management of overseas real property. Beginning in 1990, as part of our special effort to review and report on federal programs considered high-risk, we identified the management of overseas real property as being at substantial risk for waste, fraud, abuse, and mismanagement. Our December 1992 report on overseas real property identified the chronic and long-standing management weaknesses that had affected overseas real property programs, including insufficient maintenance, lax oversight of overseas operations, inadequate information systems, and poor planning. These weaknesses resulted in deteriorating facilities and a significant backlog of maintenance and repair requirements. Management weaknesses also directly contributed to construction delays and cost overruns, oversized and unauthorized housing, poor decisions, and questionable expenditures. In our 1992 report, we noted that the State Department had recognized the significance and urgency of the problems and had planned or initiated a number of engineering, staffing, and management process actions designed to address past problems. Progress has been made in addressing the long-standing problem of inadequate maintenance and rehabilitation of overseas facilities.
In early 1993, we sent a questionnaire to the State Department’s overseas embassies addressing several management issues. Almost all of the 80 embassies responding to the questionnaire said that they had received services or review visits from FBO and/or one of the regional maintenance assistance centers, and the majority of the respondents were generally or very satisfied with the services received. In September 1993, the State Department’s Inspector General reported that FBO had made progress in addressing weaknesses in its repair and maintenance of overseas property. Progress cited by the Inspector General included a new group of skilled maintenance professionals in the State Department responsible for overseeing post maintenance and repair operations, a systematic global facility review program to assess the condition of U.S.-owned and long-term leased facilities, a 5-year plan for major rehabilitations, a comprehensive maintenance plan for newly constructed buildings, and additional funding for maintenance programs. Other initiatives have included the facilities evaluation and assistance program. This program augments the global review program and typically consists of a maintenance audit of post facilities using a standard evaluation checklist and assistance in developing more effective maintenance programs. In December 1993, the Office of Management and Budget (OMB) determined that FBO had made sufficient headway in improving maintenance management to warrant removing rehabilitation and maintenance of overseas property from its list of areas at highest risk to fraud, waste, and abuse. We agree with this decision since our work also indicates substantial progress by FBO in the maintenance and rehabilitation area. However, we have also identified continuing problems requiring corrective action. 
These problems include the questionable and/or inappropriate use of routine maintenance funds by overseas posts and the failure of some posts to either conduct or complete annual surveys documenting the condition of government-owned and long-term leased facilities. As part of its overall management responsibilities, FBO provides funds to each overseas post to pay for routine maintenance and repair of its real property. In fiscal year 1994, approximately $27 million was provided to the posts. Authorized uses of the funds include painting and other services/materials of a recurring or minor nature, the purchase of bulk supplies such as lumber and nails, and projects that would otherwise qualify as special maintenance or minor improvement projects. The policies and procedures allowing the use of routine maintenance funds for special and minor improvement projects were identified in an April 1993 cable to all posts, giving post management the latitude to execute small special maintenance and improvement projects as required. However, to ensure that posts use most of these funds for routine maintenance, no special maintenance or minor improvement project using routine maintenance funds may exceed $10,000 and no more than a total of 5 percent of a post’s annual routine maintenance funds may be used for special projects in a given year. Although FBO has issued guidance to the posts on the proper uses of routine maintenance funds, our review of posts’ financial records shows that inappropriate and/or questionable uses of funds have occurred frequently. These have included the use of routine maintenance funds for (1) special and minor improvement projects costing more than $10,000 and (2) special and minor improvement projects totaling more than 5 percent of the post’s annual routine maintenance budget. In some cases, posts used routine maintenance funds for nonmaintenance and repair purposes. 
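The two FBO limits described above, a $10,000 cap on any single special or minor improvement project and a 5-percent cap on total special-project spending from a post's annual routine maintenance funds, can be expressed as a simple compliance check. This is a sketch; the function and sample figures are hypothetical and not FBO's actual review procedure:

```python
# Sketch of FBO's two caps on using routine maintenance funds for
# special/minor improvement projects (per the April 1993 guidance).
PER_PROJECT_CAP = 10_000   # no single special project may exceed this
TOTAL_SHARE_CAP = 0.05     # special projects may not exceed 5% of annual funds

def check_special_projects(annual_routine_funds: float,
                           project_costs: list[float]) -> list[str]:
    """Return a list of cap violations (empty if the post is compliant)."""
    violations = []
    for cost in project_costs:
        if cost > PER_PROJECT_CAP:
            violations.append(f"project of ${cost:,.0f} exceeds $10,000 cap")
    total = sum(project_costs)
    if total > TOTAL_SHARE_CAP * annual_routine_funds:
        violations.append(
            f"special projects total ${total:,.0f}, over 5% of annual funds")
    return violations

# Hypothetical post: $200,000 in routine maintenance funds, three projects.
print(check_special_projects(200_000, [4_000, 12_000, 3_000]))
```

In this hypothetical case the $12,000 project breaks the per-project cap, and the $19,000 total exceeds the 5-percent (here $10,000) annual limit, so both violations are flagged.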
Reasons for the misuse of funds include the failure of posts to follow FBO guidance, some ambiguities in the guidance and related State regulations, and FBO’s failure to hold posts sufficiently accountable for improperly using funds. Examples of using routine maintenance funds for special projects costing greater than $10,000 included the repair and resurfacing of a tennis court at the ambassador’s residence in Santo Domingo; construction of storage lockers and hard covering over a swimming pool, and new windows in London; and extension of a tubular mailing system, facade renovation, installation of a suspended ceiling, and grounds preparation for a volleyball court in Vienna. Routine maintenance funds were used for nonmaintenance purposes to purchase furniture for the posts in Santo Domingo and Nassau; to purchase and install communications equipment, pay for security escort services, purchase plants for the ambassador’s dining area in the chancery, and purchase flag pole stands, mirrors, and a chandelier in London; to purchase a new clock for the chancery in Vienna; and to purchase garden lighting systems and track lighting in Kuala Lumpur. According to FBO officials, such expenditures should have been charged to salaries and expenses appropriations or to other FBO accounts, such as the furniture, furnishings, and equipment program. The post in Kuala Lumpur obligated over $10,000 in routine maintenance funds for renovating offices in the chancery 1 day before the end of fiscal year 1994. That obligation (1) represented a potential misuse of routine maintenance funds and (2) appeared to represent an effort to fully obligate remaining funds (year-end buying) instead of returning them to FBO. In some cases, expenditures were questionable because State and FBO policy guidance is unclear. 
For example, routine maintenance funds have been used for numerous gardening-related activities at the ambassador’s residence and other properties in London, totaling over $10,000 in fiscal year 1994. Items purchased included rose bushes, shrubs, seeds, compost, and flower pots. We believe these uses are questionable because, according to State’s Foreign Affairs Manual (FAM), grounds care for residences occupied by the ambassador or chief of mission should be charged to salaries and expenses appropriations. However, embassy officials in London identified what they considered to be potentially conflicting guidance in another part of the FAM, which defines routine maintenance and repair as activities done for the continuing upkeep of buildings and grounds. FBO officials said that State’s guidance concerning which appropriations should be used for gardening expenses is unclear. Our analysis and discussions with post officials indicate that State’s guidance is also unclear in other areas, including the appropriateness of using routine maintenance funds for such things as the replacement of carpets. The misuse of routine maintenance funds is not a new problem. Beginning in 1990, FBO’s financial audits determined that several overseas posts had not used routine maintenance and repair funds for intended purposes. Similarly, the September 1993 State Inspector General report on maintenance and repair programs noted that all eight of the posts included in that assessment had used routine maintenance funds for improper and/or questionable purposes. According to the FBO and Inspector General assessments, posts improperly used routine maintenance and repair funds on projects that should have been funded as special projects or minor improvements or from the salaries and expenses appropriation. The Inspector General also found that posts used funds for questionable activities in part due to inadequate State guidance.
One of the key FBO requirements for a successful ongoing maintenance program is the annual facility condition survey. According to FBO’s Facility Maintenance Handbook, the overseas posts should survey all government-owned and long-term leased facilities annually for structural, electrical, and mechanical deficiencies. According to FBO, the surveys are necessary for determining required maintenance and repair work at a post and for allowing management decisions on budget priorities. The failure of overseas posts to properly conduct annual surveys has historically been a problem in the Department’s maintenance system. In our 1990 report on maintenance management, we reported that none of the 14 posts included in that review had conducted annual property condition surveys. In our 1993 report on administrative issues affecting overseas embassies, we found improvements, but also noted that about 30 percent of the embassies responding to our questionnaire acknowledged that they had not conducted annual condition surveys. Recently, we found that there are still some weaknesses in the system for surveying the condition of overseas property. For example, the post in Nassau examined the condition of its property to support the fiscal year 1994 budget. However, the survey did not cover all government-owned properties or use an inspection checklist as recommended by FBO in its Facility Maintenance Guide. Embassy officials in Vienna said that a survey was not done to support the fiscal year 1995 budget because they were unaware of the FBO requirement. However, at the time of our visit, we were told that a survey was underway to support the fiscal year 1996 budget process. We also found that the post in Port Moresby had not conducted annual facility condition surveys or prepared annual inspection summaries. Effective oversight of overseas real property programs is critical to ensuring that the State Department’s policies and procedures are followed and funds are properly used. 
It also provides a basis for FBO to work directly with the overseas posts to improve their maintenance capabilities and strengthen other aspects of their real property programs. FBO has made progress in strengthening its monitoring capability through its financial audit program, but coverage has been limited. Some progress has also been made in the area management program. In 1990, FBO began conducting financial audits of the administration and use of funds at overseas posts with significant FBO resources. One of the major reasons for beginning the audit program was FBO’s concerns about the difficulties it encountered in (1) using the State Department’s financial management systems to track post expenditures and (2) identifying excess funds that had been provided to the posts and getting the posts to return such funds to FBO rather than use them for other purposes. At the end of fiscal year 1994, audits had been conducted at 21 overseas posts. Some posts were identified as having good financial controls over FBO accounts, including Bonn, Jakarta, Tokyo, and Cairo. However, the FBO audits identified significant financial management irregularities at other posts, including the misuse of routine maintenance funds and inadequate controls over the obligation/deobligation process. Posts identified by FBO as having financial control weaknesses included Kinshasa, New Delhi, Tel Aviv, Rome, Hong Kong, and Manila. FBO’s financial audits have resulted, both directly and indirectly, in improved oversight and administration of funds. Overall, FBO estimates that its financial audit program has resulted in nearly $4 million in uncommitted post funds being returned to FBO for use in other projects and programs. 
For example, based on the results of a 1990 FBO audit, the post in Mexico City (1) deobligated over $100,000 in unused funds for fiscal years 1988-89 and returned them to FBO and (2) transferred charges of about $43,000 to the salaries and expenses account that had been erroneously charged to the routine maintenance account. Although results have been noteworthy, FBO’s financial audit coverage has been limited to less than 10 percent of the foreign service posts. Without greater FBO audit coverage of the overseas posts, FBO has inadequate assurances that real property activities are consistent with State’s financial policies and that posts promptly return unneeded FBO-provided funds. For example, in Nassau, we found over $60,000 in funds that FBO provided in fiscal years 1989-93 that the post had accumulated without finite plans for use. Records for the embassy in Santo Domingo showed over $200,000 in fiscal years 1989-92 funds that should have been deobligated. According to FBO officials, the post in Nassau subsequently deobligated all of the funds we identified and returned them to FBO for other uses. The post in Santo Domingo has deobligated only about $20,000. It has not yet taken any actions on about $185,000 of funds because post officials said the obligating documents identifying the purpose of the intended expenditures could not be located. FBO’s area managers have primary responsibility for monitoring overseas post activities and ensuring that post actions are consistent with FBO’s policies and procedures. In the past, area managers did not give their monitoring responsibilities sufficient priority. Our analysis of recent trip reports by FBO’s area managers indicates that their coverage was often comprehensive, but the quality of monitoring varied significantly among individual managers.
In some cases, area managers (1) either did not prepare or could not locate their trip reports documenting the status of post controls or (2) had not completed FBO’s standard checklists. FBO’s checklists also have some limitations. For example, they do not require area managers to determine if posts conducted annual facility condition surveys or if the surveys were done consistent with FBO’s guidance. In addition, the checklist only requires area managers to spot-check obligation backup documents to ensure funds are properly used. Moreover, in 1993, the State Department’s Inspector General found that three of the posts it visited had used routine maintenance funds improperly even though FBO’s area managers had recently spot-checked the posts and had identified no problems. FBO officials acknowledge that audits of posts’ use of maintenance funds are important, but question expanding the role area managers have in helping to ensure compliance with financial procedures. FBO officials said that (1) limited resources do not permit area managers to perform anything but a check on post operations and (2) an audit function is not appropriate for area managers. We agree that area managers should not become auditors. However, our review indicates that there are opportunities to make area managers’ spot-checks of financial activities more beneficial. For example, prior to their visit, area managers could ask the post to assemble purchase orders supporting its obligations of routine maintenance funds for the most recently completed fiscal year. The area managers could then quickly review the orders at the post to (1) determine if the use of funds has been consistent with FBO’s procedures and (2) refer any problems to post management for corrective action. This could help underscore the importance of posts’ compliance with financial requirements. Progress has been made in strengthening FBO’s planning capabilities.
In our December 1992 high-risk report, we noted that poor planning was one of FBO’s fundamental management weaknesses, directly contributing to project delays, cost increases, and questionable real estate decisions at overseas posts. At that time, FBO’s approach to strengthening planning called for (1) matching each post’s short- and long-term requirements with existing assets and (2) outlining budgeting and staffing needs on a 5-year basis. However, FBO had not established milestones for developing facilities plans at key posts or included in the 5-year plan the potential proceeds from sales of properties as potential offsets to costs in other areas. FBO has subsequently taken several actions to improve its capability to plan for real property programs, including expanding the Planning and Program Division from 6 to 17 staff members and conducting a formal analysis of all overseas posts to determine each post’s candidacy for future construction and/or major renovation projects. Facilities plans have been prepared for 12 posts, and at the time of our review, facilities studies were underway at 9 overseas posts. FBO also plans to revise its policy on facilities planning. The planned revision calls for (1) at a minimum, developing baseline planning data for each overseas post and (2) developing in-depth facilities plans for posts having greater planning requirements. Inadequate real estate planning at the post level continues to be a problem. For example, we found that State has kept undeveloped properties in Nassau without adequate justification or plans for their use. These properties are:

- Saffron Hill, a 1.95-acre vacant lot adjacent to the ambassador’s residence, used primarily as an overflow parking lot during official receptions at the residence. The lot was acquired in 1959 at a cost of $32,000, and its current value is not identified in FBO’s real estate management system. According to October 1993 post documents, its intended and potential use was unknown.

- Another Saffron Hill property, a 0.6-acre vacant lot across the street from the ambassador’s residence. The property was acquired in 1975 for $5,000 and has not been used for any official purposes. Its current value is not identified in the real estate management system. According to October 1993 post documents, its intended and future use was unknown.

- The Office Building Chancery (OBC) site, 11.12 undeveloped acres obtained in 1975 as part of a tax settlement. Originally intended to be used for construction of a new embassy, its future use is unknown because of State’s decision to purchase the existing embassy, which had been part of a long-term lease arrangement.

- Another government-owned property directly across the street from the OBC site. This ocean-front property is 0.71 acres and was also acquired as part of a tax settlement. Post officials could not identify its intended or future use.

Although the need to dispose of the OBC site and the property across from it was recognized in 1993, FBO and the post have failed to properly manage the disposition process. Specifically, the post received two appraisals in 1993 for the OBC site and the ocean-front property near it, ranging between $900,000 and $2.5 million. However, according to FBO officials, the discrepancies in appraisal value and the lack of supporting documentation and data in the appraisals prompted FBO to request additional information from the post in February 1994. FBO officials said that no response was received and a follow-up cable was sent in June 1994 and again in November 1994. According to FBO, the market value of these properties needs to be accurately determined before they can be sold. Appraisals for the other two undeveloped properties near the ambassador’s residence had not been conducted at the time of our fieldwork.
According to information provided by FBO in November 1994, the property adjacent to the ambassador’s residence was not considered excess to government needs because it is used for overflow parking. FBO also stated that the other parcel was held pending the identification of “high return” purchases the post could enter into and fund with the proceeds of sale of that property. However, in March 1995, FBO officials said that neither of these two lots had been disposed of because the post reported that both are required for overflow parking at the ambassador’s residence during official functions. Parking may be an issue at the ambassador’s residence when official receptions are held, but the post’s justification for retaining two properties for overflow parking should be weighed against the properties’ potential disposal value. On a larger scale, State previously included in its annual budget requests the potential proceeds from the sale of properties as an offset to other funds being requested. However, FBO officials have now taken the position that this practice is not practical because of the long time frame required for development of budget estimates, the volatile nature of overseas real estate markets, and the complexities of marketing high-value properties. We are currently reviewing FBO’s policies and procedures for disposing of overseas real property that does not meet U.S. government needs and for using the proceeds from property sales. FBO has taken actions to upgrade and expand its information systems. In our 1992 high-risk report, we pointed out that FBO’s real estate management system did not fully support management functions because it did not provide historical cost information or maintenance and repair information for each building overseas.
As a result, FBO could not (1) track costs for each building, (2) determine the total costs of operating properties, (3) budget for future building costs, or (4) develop performance measures for property management purposes. At the time of our 1992 report, FBO had begun installing an enhanced version of the automated real estate management system at the overseas posts. Since then, the enhanced system has been installed at 71 posts, providing automated data on approximately three-fourths of the real property inventory. Current plans call for installing the system at most remaining posts in the next few years. FBO has also developed an information resource management system, which consists of several integrated applications, including project management and budget planning and allocation. Although these systems have helped to strengthen FBO and post capabilities to report on and manage real property, some weaknesses continue. As already noted, the real estate management system does not contain complete and current information on the value of overseas property. We found that although FBO has developed an automated work order component of the system that can track maintenance costs for individual buildings, some posts have not fully or correctly utilized it. For example, the automated work order system at the post in London was not fully operational and contained inaccurate inventories of equipment requiring preventive maintenance. In Vienna, management oversight was unnecessarily complicated because the work order process was split into two separate automated systems—the real estate management system for residential units and other properties outside the embassy compound and a system developed by the post for managing maintenance at the embassy and the consulate. In Nassau, we found that the PC-based system installed by FBO to facilitate the use of a work order program was not used. 
In Singapore, the post’s use of the work order system was limited to monitoring landlord maintenance, and the equipment inventory had not been updated for years. The automated real estate management system also does not track or report on compliance of overseas residences with the State Department’s housing space standards. This reduces State’s ability to monitor the extent to which individual posts are (1) effectively managing housing assignments and (2) minimizing the number of housing units that exceed space standards. About 88 percent of the embassies responding to our 1993 questionnaire reported that some housing units at their embassy exceeded the State Department’s residential housing space standards, and 61 percent reported that 10 percent or more of the housing units exceeded standards. Sixty-two percent estimated it would take 2 years or more to be in compliance with the standards. We found that some problems are still being experienced in meeting those standards. In Vienna, for example, post officials estimated that nearly 25 percent of the residential units exceed space standards. FBO officials said that it may be accurate to state that certain residences in Vienna exceed the standards for the current occupant, but noted that the housing profile system used by FBO allows such assignments if the properties are within the post’s approved profile. However, the automated real estate management system did not identify the eight housing units that were outside the approved profile, and the post had not applied for or received FBO waivers for exceeding the standards. FBO officials acknowledged that the system cannot currently identify out-of-profile properties that do not have waivers. However, they said that the system can now identify approved waivers and, with further instructions to the posts, waivers in relation to profiles can be tracked.
FBO officials added that the next step—the generation of compliance reports and statistics related to profiles and waivers—would then be possible. FBO also recognizes that overall weaknesses in accountability over real property transactions continue. To upgrade its system capability, FBO is examining the potential for integrating data maintained in the Department’s new information financial management system and the FBO real estate management system. FBO hopes to (1) define common data requirements that can be shared between the two systems and (2) ascertain where the existing systems meet or fail to meet requirements. Complementing this effort is a planned audit by State’s Inspector General. Planned objectives of the audit include determining whether FBO’s systems contain adequate data to serve as subsidiary accounting records for real property and whether FBO has sufficient internal controls. We recommend that the Secretary of State strengthen real property management by taking the following actions:

- Revise and clarify State’s policies and procedures governing the use of routine maintenance and repair funds, adding a checklist of appropriate and inappropriate charges to the routine maintenance and repair allotment.

- Expand FBO’s financial audit program to ensure coverage of a greater number of overseas posts, and use FBO’s area manager program to conduct a more comprehensive check of posts’ use of routine maintenance funds. Area managers’ checklists should also be expanded to cover other key problems, including the extent to which posts (1) conduct annual facility condition surveys and (2) use the automated real estate management system for work order management.

- Use FBO’s automated real estate management system to report on and monitor whether housing units at each post comply with existing space standards, approved housing profiles, and space waiver requirements. FBO should also be directed to use these reports to enforce compliance with State’s space standards.
- Develop a plan of action with associated time frames for selling unneeded property in Nassau. The plan should (1) require updated appraisals of the value of the former OBC site and the ocean-front property near it and (2) include supporting analysis and justification for retaining two properties for parking near the ambassador’s residence, compared to the potential sales proceeds that could be realized from the sale of one of the properties.

Our work was conducted at FBO headquarters and at seven overseas posts: London, England; Vienna, Austria; Santo Domingo, Dominican Republic; Nassau, the Bahamas; Singapore; Kuala Lumpur, Malaysia; and Port Moresby, Papua New Guinea. These posts were selected because none of them had been subject to an FBO financial audit and because the size of their allotments of routine maintenance funds varied significantly. We reviewed pertinent records and documents, including the FAM and FBO’s guidance on use of funds; global maintenance surveys and facilities evaluation and assistance program reports; FBO’s financial audit, real estate management system, and area management trip reports; and posts’ status of obligation reports, purchase orders, and work order reports. We also interviewed appropriate FBO and post personnel and visited several post properties. We did not obtain the State Department’s comments on this report. However, informal comments on a draft of this report were received from FBO officials and were incorporated in the report as appropriate. We conducted our review from February 1994 to February 1995 in accordance with generally accepted auditing standards. As you know, the head of a federal agency is required under 31 U.S.C.
720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of the report and to the Senate and House Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report. Please contact me at (202) 512-4128 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix I.

Dennis Richards
Robert Sanchez
GAO reviewed the Department of State's management of its overseas properties, focusing on: (1) the problems State faces in managing its overseas real property; and (2) how State can strengthen its overseas real property management. GAO found that: (1) many of State's improvements in overseas real property management have focused on facility maintenance; (2) State's maintenance improvements include assigning skilled maintenance personnel overseas, conducting global maintenance surveys, a 5-year major rehabilitation plan, a comprehensive maintenance plan for new buildings, additional maintenance funding, establishing maintenance assistance centers, and implementing a facilities evaluation and assistance program; (3) although State has strengthened its real property management program, significant problems still exist including questionable or inappropriate use of routine maintenance funds, and overseas posts' failure to conduct or complete annual assessments of government-owned and long-term leased facilities, deobligate unneeded funds, properly use the real estate management system to manage routine and preventive maintenance programs, and adequately plan for the sale or use of undeveloped properties in State's inventory; (4) State implemented a financial audit program in 1990 that improved its oversight of overseas posts' real property programs and resulted in the return of nearly $4 million in unused funds; and (5) State has implemented an information resources management system to strengthen its budgeting and planning process and has continued to upgrade its real estate management system, but its information systems still contain weaknesses.
SSA provides financial assistance to eligible individuals through three major benefit programs:

- Old-Age and Survivors Insurance (OASI)—provides retirement benefits to eligible older workers and their families and to survivors of deceased workers.

- Disability Insurance (DI)—provides benefits to eligible workers who have qualifying disabilities, and their eligible family members.

- Supplemental Security Income (SSI)—provides income for aged, blind, or disabled individuals with limited income and resources.

In administering these programs, SSA provides a range of services to the public. For example, SSA calculates retirement benefits for individuals based on factors including earnings history and the age at which an individual chooses to start receiving benefits. Also, SSA staff determine whether DI and SSI benefit applicants meet the agency’s non-medical eligibility criteria. Applicants who are not satisfied with the initial decision on their claim may appeal, which can include a hearing before an SSA administrative law judge. Recipients of SSA benefits can work with the agency to manage their benefits in various ways, including obtaining letters verifying that they receive benefits, changing their address and telephone number, and starting or changing their direct deposit of benefits. SSA has other responsibilities in addition to administering benefit programs. Its mission includes issuing Social Security numbers, which are currently used for many non-Social Security purposes. Most original Social Security cards are issued at birth during the Enumeration at Birth process, which is completely electronic. SSA also issues original cards not issued at birth as well as replacement cards. In addition, SSA uses and stores a great deal of sensitive information, including financial and medical records as well as Social Security numbers.
Customers access SSA services primarily through four delivery channels:

- In-person at an SSA facility: Customers can access a wide range of services at SSA’s field offices, including applying for benefits, managing benefits, and obtaining Social Security cards. Customers can also obtain Social Security cards at SSA’s card centers. Individuals who are appealing SSA’s decision on their disability applications may participate in an in-person hearing before an administrative law judge at one of SSA’s hearing offices.

- By phone with field office staff: Customers can access many of the services that are available in-person at field offices through phone calls with field office staff, including applying for and managing their benefits, according to SSA officials.

- By phone through the national 800 number: Customers can manage their benefits and obtain informational services through the national 800 number. They have the option of conducting business through an automated system or by speaking directly with an SSA staff person at a teleservice center.

- Online: Many services are available online. Customers can apply for retirement and DI benefits through SSA’s website. The majority of online retirement benefit applications are reviewed by SSA staff at one of 16 Workload Support Units, while most online DI applications are reviewed by staff at field offices, according to SSA officials. Also, the mySocialSecurity online portal allows customers with an account to manage their benefits and view information online such as their earnings record.

See table 1 for a summary of how some of the more commonly used SSA services may be accessed. SSA’s facilities include: Its headquarters, located mainly in the Baltimore, Maryland, and Washington, D.C., metropolitan area. A network of field offices, Social Security card centers, teleservice centers, and other facilities that deliver services directly to the public.
These services are managed by SSA’s Office of Operations, which is further organized into 10 regional offices around the country, each with a regional commissioner, and 51 area offices, each with an area director who reports to a regional commissioner. SSA also has a network of hearing offices around the country where claimants can participate in hearings before administrative law judges. The Office of Disability Adjudication and Review (ODAR) manages the appeals process for disability applications out of its headquarters in Falls Church, Virginia, and through a network of regional offices. SSA leases all of its facilities from GSA through occupancy agreements—signed agreements between GSA and SSA setting out the financial terms and conditions for occupying GSA-controlled space. GSA-controlled space can be in privately-owned or federally-owned buildings; approximately two-thirds of SSA’s space is in privately-owned buildings. Overall, the Office of Facilities and Logistics Management (OFLM), within SSA’s Office of the Deputy Commissioner for Budget, Finance, Quality, and Management, is responsible for planning and implementing the agency’s policies related to its physical footprint. In fiscal year 2012, SSA began centralizing its facility planning process, moving responsibility for this process from the regional offices to OFLM. This shift included establishing one signatory for all requests for office space and monitoring changes to the amount of office and warehouse space in its inventory. SSA’s facility planning is guided by factors including: (1) Leasing cycle. SSA’s typical leasing cycle, according to interviews with SSA officials as well as agency documentation, begins approximately 36 months before a lease expires. SSA begins developing alternatives for future space use, which could mean renewing the lease for the current space, moving to a new location, or adjusting the current space (e.g., consolidating or co-locating offices).
Facilities officials at each SSA regional office work with area directors and field office managers to determine an office's space needs. For field offices, regional facilities officials then apply space allocation standards (space standards) to determine the size of the office based on the current number of staff on board. After determining the space needs and applying the space standards, OFLM officials provide final approval before a request is submitted to GSA. Then GSA officials work directly with SSA regional facilities officials to complete the process for renewing a lease, including identifying potential sites if a field office needs to change location due to, for example, a shift in the service area population.

(2) Service area review. SSA policy requires area directors to conduct service area reviews—which assess the need for office changes based on service delivery conditions—at least every 10 years for each field office in their area of responsibility to determine whether the service area's needs are being met by the field office. However, area directors can decide to conduct these reviews on an ad hoc basis, such as when an office's lease is expiring. Service area reviews consider a wide variety of factors, including demographics, workload, and the physical accessibility of the office. They can result in an internal recommendation that, among other things, an office be consolidated with another office or move location. Recommendations to change location or consolidate, made by the area director, must be approved by the regional commissioner and, as needed, headquarters officials. Headquarters officials would approve, for example, a recommendation to establish a new field office or change the location of an existing field office to a new congressional district. For approved location changes or consolidations, SSA then works with GSA as described above to identify sites for its potential use.

(3) Space reduction initiatives.
Two OMB government-wide initiatives also guide SSA's facilities planning. The Freeze the Footprint (2012) and Reduce the Footprint (2015) policies were developed to help federal agencies more efficiently use their excess and underutilized properties in light of a fiscally constrained environment and changes in how agencies conduct business, including the use of technology to deliver services to the public. The policies required SSA initially to avoid increasing the square footage of its domestic offices and warehouses beyond a baseline established in 2012, and then to set and meet annual reduction goals. The Freeze the Footprint policy was in effect until 2015; the Reduce the Footprint policy took effect in 2016. SSA determines its annual reduction goals in consultation with GSA and OMB. In fiscal year 2015, OMB and GSA re-categorized over half of SSA's facilities as "public-facing"—primarily used to serve and interact with the public—and exempted these facilities from the Reduce the Footprint baseline and reduction targets. As a result of this change, the fiscal year 2016 reduction target for SSA's facilities was lowered from 260,000 to 120,000 square feet. SSA reduced its square footage and the number of its facilities from fiscal year 2012, when the overall federal effort to limit agencies' physical footprint began, to fiscal year 2016. SSA is continuing to reduce its footprint to align with these federal efforts and to reduce costs, according to agency documents and interviews with agency officials. SSA reduced its footprint by about 1.4 million square feet (or 5.2 percent) from fiscal year 2012 to fiscal year 2016, according to our analysis (see fig. 1). These space reductions met both its Freeze the Footprint and Reduce the Footprint goals, according to SSA. SSA officials said over 600,000 square feet of its space reduction since 2012 was in field offices, with the remainder in headquarters space.
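The reduction figures above also imply the approximate size of SSA's fiscal year 2012 baseline. The following is an illustrative back-calculation only, not an official SSA inventory figure; the cited amounts are rounded:

```python
# Back out the fiscal year 2012 baseline implied by the cited
# reduction: about 1.4 million square feet, equal to 5.2 percent
# of the fiscal year 2012 total. Inputs are rounded, so the
# results are approximate.
reduction_sqft = 1_400_000
reduction_share = 0.052

baseline_2012 = reduction_sqft / reduction_share   # roughly 26.9 million sq ft
footprint_2016 = baseline_2012 - reduction_sqft    # roughly 25.5 million sq ft

print(f"Implied FY2012 baseline: {baseline_2012 / 1e6:.1f} million sq ft")
print(f"Implied FY2016 footprint: {footprint_2016 / 1e6:.1f} million sq ft")
```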
According to SSA, the agency issued revised space standards for its field offices in 2012 in response to OMB's Freeze the Footprint policy; the revised standards contributed to these reductions. The application of the revised standards as field office leases come up for renewal has helped reduce SSA's overall field office space. While the revised standards expand the space in public reception areas, their larger reductions in space for personnel and support areas reduce the overall field office footprint. For example, the 2006 space standards allocated 125 square feet per SSA staff member, which the 2012 standards reduced to 120 square feet; the 2006 standards allocated 7 square feet per filing cabinet, while the 2012 standards include no space for filing cabinets because SSA's digitization of data has reduced the need to store paper. Similarly, the revised standards applied to large facilities, such as the headquarters facilities, reduce office space needs by lowering allocations for personnel and support space. SSA also decreased the number of occupied buildings by 4.7 percent (from 1,634 to 1,558) from fiscal year 2012 to fiscal year 2016, according to our analysis of SSA facilities data (see fig. 2). For example, SSA decreased the number of field offices through consolidations from 1,273 in fiscal year 2012 to 1,245 in fiscal year 2014, with no further reductions from fiscal year 2014 to fiscal year 2016. Additional reductions in space and in the number of occupied buildings resulted in part from consolidating leased office and warehouse facilities into federally-owned facilities at headquarters. Despite the overall decrease in space, SSA's inflation-adjusted rental costs remained essentially steady until fiscal year 2016, when they decreased, according to our analysis of SSA data. The total inflation-adjusted cost of SSA's leases was 3 percent lower in fiscal year 2016 than in fiscal year 2012.
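The relationship between these two trends, total cost and total square footage, determines what happened to the cost per square foot. A short illustrative calculation using only the two agency-wide percentages cited above:

```python
# Cost per square foot changes by the ratio of the cost change to
# the space change. Using the agency-wide figures cited in the
# text: inflation-adjusted cost down 3 percent and square footage
# down 5.2 percent from fiscal year 2012 to fiscal year 2016.
cost_ratio = 1 - 0.03    # FY2016 total cost relative to FY2012
space_ratio = 1 - 0.052  # FY2016 square footage relative to FY2012

rate_ratio = cost_ratio / space_ratio
print(f"Cost per square foot changed by {100 * (rate_ratio - 1):+.1f}%")
```

Because space fell faster than cost, the per-square-foot rate rose by roughly 2 percent even as total costs declined, consistent with the finding below.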
However, the cost per square foot was slightly higher in fiscal year 2016 than in fiscal year 2012, according to our analysis (see fig. 3). The cost of a lease might increase due to rising market rates, for example, even if it is renegotiated for less space. In Queens, New York, annual rent for a field office increased by 15 percent in 2015 even though the office occupied 12 percent less space under the terms of a renegotiated lease. As of the end of fiscal year 2016, our analysis of SSA data indicates that 65 percent of the total square footage in SSA's inventory was in buildings whose majority use was field operations (see fig. 4). Field operations includes, for example, field offices and card centers. Buildings whose majority use was ODAR (15 percent) or headquarters (13 percent) represent the next largest space users. However, we were not able to calculate the exact number and square footage of different types of offices (such as field offices or hearing offices) due to limitations with SSA's facility data, which we describe in greater detail later in this report. These limitations also preclude the presentation of comprehensive data on how the composition of SSA's offices has changed over time.

SSA is expanding its remote delivery of services—such as online services and other new technologies for connecting with the agency—to provide more choices to its customers and in response to overall trends in Americans' use of online services. SSA has had online services available for a number of years, introducing online retirement and disability applications in 2000 and 2002, respectively. It continues to move more services online. For example, SSA launched an online Medicare Only application in 2010 and a new online portal for managing benefits (mySocialSecurity) in 2012. SSA officials said they have plans to introduce online options for other high-volume workloads in the coming years.
For example, there were 10.6 million requests for replacement Social Security cards in fiscal year 2016, and as of April 2017 SSA was piloting online requests for replacement cards in 17 states and Washington, D.C., with plans to expand to additional states. There were also over 2 million SSI applications in 2016, and in April 2017 SSA introduced an online SSI application for individuals who meet certain conditions. The number of online transactions completed by SSA customers for benefit applications and certain benefit management services has increased as SSA has expanded the types of services available. For example, from fiscal year 2007 to fiscal year 2016, the number of retirement applications submitted online increased from approximately 220,000 (9 percent of total retirement applications) to approximately 1.4 million (52 percent of applications), according to our analysis of SSA data (see fig. 5). There is wide variation in the use of these online services between field offices, however, with customers in certain areas continuing to conduct the majority of services directly with SSA staff either in person at field offices or over the phone with field office staff. The use of online services varied across the 13 field offices included in this review, including for benefit verification letters (4 percent to 36 percent), disability claims (31 percent to 62 percent), and retirement claims (13 percent to 62 percent), according to our analysis of SSA service delivery data for fiscal year 2016 (see fig. 6). SSA officials attribute this variation across service areas to particular population demographics and needs (e.g., computer literacy). For example, SSA field offices in San Francisco, California, have a relatively low percentage of online claims due to large non-English speaking and homeless populations, SSA officials said. 
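The retirement application figures cited above can also be used to back out total (online plus staff-assisted) application volumes. This is a rough calculation, approximate because the cited counts and shares are rounded:

```python
# Implied total retirement applications, derived from the online
# application counts and online shares cited in the text.
# Approximate: the cited figures are rounded.
online_2007, share_2007 = 220_000, 0.09
online_2016, share_2016 = 1_400_000, 0.52

total_2007 = online_2007 / share_2007   # roughly 2.4 million
total_2016 = online_2016 / share_2016   # roughly 2.7 million

growth = total_2016 / total_2007 - 1
print(f"Implied growth in total retirement applications: about {100 * growth:.0f}%")
```

This back-calculation suggests that overall retirement application volume grew even as the online share rose, consistent with the report's broader observation that demand for SSA services has not declined.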
In addition to its main service delivery methods (in person, phone, online), SSA has rolled out a variety of technologies for customers to conduct business with the agency, many of which are self-service. For example, in 2008 SSA introduced self-help personal computers in field offices, which allow customers to complete transactions online at these locations, according to SSA officials. Visitors completed over 390,000 online transactions on these computers in fiscal year 2016. SSA has also increased use of video service delivery, which allows SSA staff to take claims or conduct hearings remotely, either in SSA facilities or in third-party locations such as senior centers. For example, the proportion of hearings on disability claims that were conducted by video increased from 11 percent in fiscal year 2007 to 26 percent in fiscal year 2016, according to SSA data. Another emerging service delivery approach is the desktop icon, which provides a shortcut to access SSA online services on computers in third-party locations, such as libraries or social service agencies. Customers completed about 94,000 transactions by clicking on these icons during fiscal year 2016, according to SSA data. SSA also recently ended a trial of customer service station kiosks in seven SSA field offices and third-party locations, which allowed customers to complete online transactions, print and scan documents, and interact with SSA staff through a video connection.

Despite the increase in online transactions relative to field office transactions for certain benefit application and management services, overall demand for field office services has not decreased. As indicated above, our analysis of SSA data for disability and retirement applications and benefit verification letters shows an increasing proportion of these services being completed online.
However, our analysis of separate SSA data on total visits to field offices and phone calls to SSA for all services shows that the number of these contacts has not decreased. For example, the number of in-person visits to field offices in fiscal year 2007 (42.9 million) was about the same as in fiscal year 2016 (42.7 million), according to SSA data. The demand for services with SSA staff over the phone has not decreased either (see fig. 7). SSA officials said this may be due to increased demand for certain services and customer preference. For example, overall demand for retirement and some related benefits increased 20 percent from 2009 to 2016, as evidenced by SSA claims data. SSA officials said that given the rise in the overall U.S. population over the last decade—the population increased by 7 percent between 2007 and 2016, according to data from the Census Bureau—the expanded use of online services has likely prevented a substantial increase in visits to field offices, even if in-person visits have not actually declined. SSA expects overall demand for SSA services to continue increasing as the U.S. population ages. In addition, there are still SSA services with significant workloads, such as SSI claims, that are not yet fully available online.

SSA has developed strategic goals for expanding remote service delivery while reconfiguring its physical footprint and is beginning to implement initiatives that may help reduce space; however, SSA has not integrated its facility plan with its strategic plan, provided flexibility for individual offices, or compiled accurate facility data as suggested by standards for internal control and leading practices for facility planning. SSA's 2014-2018 Strategic Plan and Vision 2025—the latter, published in 2015, lays out SSA's priorities and vision for the agency over 10 years—emphasize the agency's goal to expand remote service delivery options and adjust its physical footprint to reflect the emphasis on remote services.
Also, in 2016 SSA developed a 5-year Real Property Efficiency Plan in response to OMB's Reduce the Footprint requirements. This plan includes information on, among other things, annual space reduction targets; progress made in reducing domestic office and warehouse space; initiatives to help continue space reductions; and challenges to further reduction. Leading practices for facility planning state that such plans should reflect a decision-maker's priorities for the future and should meet the goals and objectives in the agency's strategic plans, including identifying the proper mix of existing and future facilities needed to fulfill its goals. SSA's 2014-2018 Strategic Plan and Vision 2025 set broad goals, but neither is a facility plan, and beyond identifying a small number of co-location opportunities, neither includes specific steps to reduce facilities as the agency expands remote service delivery. Furthermore, SSA's Real Property Efficiency Plan is driven by OMB requirements, which are distinct from SSA's strategic goal to expand remote service delivery options, and it does not explicitly detail how SSA will change its physical footprint in relation to that goal. SSA has developed targets for online use by customers, for example increasing the number of online transactions completed by 25 million each year, but it is unclear how those targets inform its facility planning decisions. In addition, though we found high online usage rates in some of the field offices we analyzed, SSA headquarters officials said a high percentage of online service use by SSA customers has not been a determining factor in local-level decisions on facility space because not all services are available online. As SSA continues to expand remote service delivery and develop plans to reflect the priorities of Vision 2025, there may be more opportunities for it to reconfigure its physical footprint.
SSA has recently started making several additional services available online that represent significant workloads, such as the SSI application, which may reduce the number of in-person visits over time and the associated space needs. Further, SSA officials said they are currently developing the Strategic Plan for 2018-2022, which will implement the long-term Vision 2025 priorities to adjust its physical footprint in anticipation of delivering a greater number of services remotely. However, because SSA lacks a long-term facility plan that identifies the needed composition of its facilities as it moves to emphasize remote service delivery, SSA could be missing opportunities to achieve its strategic goals, including identifying opportunities to reconfigure or reduce some field offices.

SSA's policies and procedures for making space planning decisions have helped achieve space reductions, but may not always provide sufficient flexibility to adapt to changing service demands. Currently, SSA uses service area reviews—conducted periodically—and space standards to inform its decisions about needed space. At the local level, a service area review for an individual field office considers several factors, such as customers' use of online services, accessibility of the office, and office workload trends, on which to base recommendations for changes, such as field office consolidations. Unlike service area reviews, space standards are automatically applied to all offices when a lease is expiring to determine how much space each office requires. The space standards, revised in 2012, take into account changing needs to some extent. For example, the revised standards eliminated space for filing cabinets due to the digitization of records and added space to the reception areas because of the continued demand for in-person services (see fig. 8).
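The per-staff and per-cabinet allocations cited in this report can be combined into a simple comparison of the two generations of standards for a hypothetical office. This sketch is simplified; the actual standards cover many more space categories, and the staff and cabinet counts below are invented for illustration:

```python
# Simplified comparison of the 2006 and 2012 field office space
# standards, using only the allocations cited in the text.
# The office size (staff) and cabinet count are hypothetical.
staff, cabinets = 20, 15

# 2006 standards: 125 sq ft per staff member, 7 sq ft per filing
# cabinet, and offices could request 10 percent more space for
# potential staff growth (applied here to the whole total).
space_2006 = (staff * 125 + cabinets * 7) * 1.10

# 2012 standards: 120 sq ft per on-board staff member, no filing
# cabinet space, plus 100 sq ft for emerging-technology equipment.
space_2012 = staff * 120 + 100

print(f"2006 standards: about {space_2006:,.0f} sq ft")
print(f"2012 standards: about {space_2012:,.0f} sq ft")
```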
While the application of the revised space standards has helped reduce SSA's overall field office space, some SSA officials said the standards do not provide flexibility for individual offices. SSA headquarters officials said the standards are used to determine the amount of space to lease for an office, and after an office is acquired, planning and design efforts are conducted to configure the space. However, three of five regional commissioners we interviewed said the space standards did not provide flexibility to accommodate equipment for emerging technologies, such as an area for self-help personal computers for customers. The standards allot 100 square feet for emerging-technology equipment in general because, according to SSA headquarters officials, some technologies were not available when the standards were being revised, so the actual amount of space needed for them could not have been known. However, when such equipment arrives, an office may not be able to accommodate it without trade-offs that affect other services. For example, some field offices use interviewing windows, which are needed for in-person service, as a place for self-help computers. In the Wilmington, Delaware, field office, the self-help computers take up two interviewing stations and are located at the end of a long hallway, requiring an escort for customers who want to use them (see fig. 9). The space standards also may not provide sufficient flexibility to accommodate unanticipated staff growth, according to 8 of 10 area directors we interviewed. The pre-2012 space standards allowed field offices to request 10 percent more space for potential staff growth; the current standards allot space only for current on-board staff. SSA headquarters officials as well as 4 of 10 area directors said long-term leases make it difficult for SSA to adjust field office space as needs arise, such as to accommodate changes in staff levels.
For example, one area director said two field offices in Utah were understaffed at the time of their lease renewal, which resulted in insufficient space in the new location to add staff to meet the office’s service demands. According to SSA officials, the field office space standards currently meet the agency’s needs. Furthermore, they have no plans to reassess the space standards at this time, though they may do so in the future if, for example, changes to service delivery require it. According to federal standards for internal control, management should identify, analyze, and respond to significant internal and external changes to the agency, such as changes in personnel and technology. Absent the flexibility to adjust space to accommodate changes in staff levels or incorporate new service delivery technologies, the quality of service at some field offices may decline because of increased wait times and decreased availability of potentially time-saving service delivery technologies. Therefore, reassessing the space standards as it expands service delivery options could better position SSA to maintain its current level of customer service. Currently, SSA cannot obtain an accurate, automated inventory of the space used by its organizational components in its buildings. As currently configured, SSA’s data system associates a building’s use with the organizational component (e.g., the office type such as field office or hearing office) that occupies the majority of its space. As a result, in cases where more than one office occupies a building, SSA’s data system only counts the office using the majority of the space. Since fiscal year 2015, SSA has been annually developing a list of its buildings and their associated office types, and using this list to meet OMB reporting requirements. However, to develop that list, SSA must manually match records from multiple data sources and OFLM and organizational component staff must coordinate. 
These are resource-intensive actions that may introduce error in the resulting list. Furthermore, because the list does not identify all offices in each building, it provides partial information on SSA’s inventory. SSA officials said having an easy-to-access inventory of offices would allow them to concentrate their efforts on analyzing data to help with facility planning—for example, determining the number and location of offices for each organizational component. Along these lines, SSA officials said the Real Estate and Lease Tracking (REALT) application they are developing may provide this functionality at a future time. However, SSA officials said they have not yet defined requirements and a timeline to implement changes that would enable REALT to have this functionality. According to our guidance on leading practices for capital decision-making, a critical step to facility planning is to maintain a baseline of current assets using quality information. As SSA develops the REALT application, it will be important to ensure that it is structured to provide the information needed by SSA officials to make effective facility planning decisions. Without ensuring REALT can provide a complete and descriptive inventory of SSA offices using an automated process, SSA will lack useful baseline information to inform its planning efforts. SSA is implementing two co-location initiatives that may help reduce its physical footprint. SSA has co-located approximately 10 percent of its field offices with ODAR permanent remote sites, as of May 2017. SSA recently initiated a co-location pilot program with the Internal Revenue Service (IRS), and is taking steps to evaluate the results to determine the utility of the pilot agency-wide. The goal of the co-location pilot, which combines SSA field offices with IRS Taxpayer Assistance Centers, is to lower each agency’s infrastructure costs. Since it began in January 2017, four IRS staff have moved into SSA field offices. 
According to an IRS official, the pilot will continue through January 2018. The two agencies are collecting customer service information based on visitor data, weekly surveys of SSA field office managers, and customer surveys; they plan to use the results to determine whether to expand the pilot and pursue additional co-location opportunities. The weekly visitor data includes the number of visitors to the pilot locations and the wait times. SSA's field office managers participating in the pilot complete surveys weekly to determine how much time SSA staff spend on co-location-related activities, such as notifying an IRS employee when customers arrive. SSA officials said they intend to end the co-location pilot if the service deteriorates or proves to be a security risk. SSA has two additional initiatives that could help reduce space, but it is too early to assess how much space they might save. SSA introduced telework at some field and hearing offices. Telework has the potential to further reduce needed desk space, and SSA plans to expand telework agency-wide. However, it may not be possible for some time in field offices due to the nature of SSA programs and the continued high demand for in-person services, according to 7 of 10 area directors we interviewed. SSA is developing two model field offices, which will help SSA further reduce its physical footprint, according to the agency's 2016 Real Property Efficiency Plan. SSA officials said these model field offices will test, among other things, emerging technologies and new service delivery methods; SSA will incorporate successful processes into existing field offices. One model field office is under construction and the other office is in a design phase.

SSA field officials we spoke with said reconfiguring SSA's facilities can be constrained by finding suitable space in some locations due to high rent prices or limited building stock, and by concerns of community members and SSA staff.
For example, it took SSA longer than expected to find an office location in Mountain View, California, that met the agency’s space needs because of the high rental costs. The Douglas, Arizona, area director stated the office is challenged to find an alternate location because Douglas is a small town with a limited number of buildings and the cost of renovating an existing building to suit a field office’s needs would be too high. Similarly, the Hazard, Kentucky, field office currently has excess space, but the limited building options in downtown are in a higher crime area and, therefore, are not acceptable for an SSA field office, according to the field office manager. SSA headquarters officials mentioned other constraints, including a complicated federal leasing process and that some property owners may not want SSA in their space due to the high number of visitors and lack of available public parking. SSA also works within constraints that can come from addressing the concerns of community stakeholders and unions. Elected officials or community leaders may sometimes oppose SSA’s plans to consolidate field offices. For example, though the San Francisco Chinatown and Downtown offices are approximately 1.5 miles apart and SSA’s internal analysis supported consolidation, SSA officials stated the agency decided not to do so due to community concerns about access if the Chinatown office was eliminated. In Kingston, New York, SSA initially retained a contact station after the field office was consolidated with the Poughkeepsie field office, which is approximately 20 miles away, in response to political concerns about Kingston losing an office, according to a local manager. Also, according to SSA officials, employee unions negotiate office layout and design if conditions of employment of the bargaining unit employees are impacted by proposed changes. 
Additionally, they noted that unions provide input on the impact and implementation of space changes, including ergonomics and security of the field offices. SSA headquarters officials said that while the interactions with its employee unions are positive, there are times these interactions can cause delays to individual projects. For example, four of the five regional commissioners we spoke with said union concerns can complicate efforts to co-locate field and hearing offices, requiring buy-in from all three employee unions and increasing the length of the leasing process.

The complexity of SSA's programs can make it challenging for customers to complete certain processes online, especially disability applications, according to SSA officials. Customers' difficulties with online applications could limit SSA's ability to shift more of its business online and further reconfigure its physical footprint. More than half the regional commissioners and field office managers we interviewed, as well as front-line staff in three of the four field offices where we interviewed these staff, said this complexity is a challenge; many cited the complexity of disability programs in particular. The online application for disability benefits requires claimants to provide detailed information on their medical and work histories and, according to SSA officials, to navigate through over 10 separate web pages. Several staff we interviewed—three of five regional commissioners and three of seven field office managers—believe the online applications could be improved. For example, one regional commissioner said the online disability application asks the same questions multiple times and this can be confusing. In our observations of in-person disability claims at field offices, we saw examples of the assistance that SSA staff may provide with benefit claims.
In one instance, an SSA staff member asked the applicant a number of questions to try to determine the date the claimant stopped working, and ultimately got permission to contact the claimant's employer. The staff member told us that if the application had been done online, SSA staff almost certainly would have had to follow up with the claimant to complete the application. On the other hand, a number of SSA staff—including five of seven field office managers and front-line staff in four field offices—said some processes such as changes of address or managing direct deposit are simpler and more suited to being completed online. SSA is trying to improve its online services and make them more user-friendly, which may promote greater use of these services and less reliance on in-person services at field offices. For example, SSA is adding new features that make it possible for online customers to interact with SSA staff to resolve problems. It has introduced a click-to-callback option allowing online customers to request a call from an SSA staff member, and a click-to-chat option for a live online conversation; and in fiscal year 2018 or later, SSA plans to introduce click-to-video for a live conversation with a video image. In 2015, SSA surveyed customers who had started but failed to complete online benefit applications, with the goal of identifying difficulties and ways to address them, according to SSA officials. The survey found, for example, that the most common reason disability benefit applicants failed to complete an online claim was that they did not understand what the questions meant (30 percent of respondents listed this as a reason). SSA officials told us that, aside from adding a reminder to the final screen of the online application to click "submit," the survey has not led to any other enhancements to the online application.
Furthermore, despite anecdotal evidence that online claims submitted to SSA contain errors, SSA does not track these issues on an ongoing basis. Six of seven field office managers and front-line staff in four field offices told us at least some online benefit claims have issues that require staff to follow up with claimants. Front-line staff in several field offices said such follow-ups are common, and staff in one field office said it often takes longer to process an online claim than one submitted in person as a result. Staff in two field offices cited problems with missing medical release forms and with the start dates claimants gave for their disabilities. SSA officials said they do not collect any data on which online claims require staff follow-up, because staff must decide in each case what is needed and it would be complex to track each time this happens. Standards for internal control in the federal government state that agency management should identify, analyze, and respond to risks to meeting agency objectives, including risks related to complex programs and new technologies. Without data on the number and nature of errors in online claims, SSA may miss opportunities for improvements that make the claims process easier for customers and the agency, and that could help SSA further reconfigure its footprint to the extent that more customers migrate to online services.

Another challenge with SSA's online services is data security, which the agency is taking steps to address. Several SSA staff we interviewed—two of five regional commissioners and 3 of 10 area directors—told us customers' concerns about the security of their personal information are an obstacle to wider use of online services. For example, one area director said data security breaches affecting government agencies have raised public concerns about the security of personal information on the Internet.
In a 2015 survey of SSI recipients about their Internet use, SSA found that among responding adult recipients, 74 percent were not very or not at all comfortable with providing their Social Security number online—which is required to use a mySocialSecurity account. SSA officials told us a major component of their effort to protect customers’ data involves complying with federal data security requirements. Officials said the agency completed a risk assessment of its mySocialSecurity portal in 2016, and as a result is developing stronger identity proofing and new multi-factor authentication options. For example, customers with cell phones can now use their phones to further confirm their identities when logging into mySocialSecurity, but SSA is also developing—and plans to introduce in 2017—other multi-factor authentication options, according to SSA officials. Some of SSA’s customers may have difficulty accessing online services, according to SSA staff and data from an SSA survey, which may also limit the agency’s ability to further reconfigure its footprint. Lack of access to the Internet or to a computer was mentioned as an obstacle to wider use of SSA’s online services by four of five regional commissioners and 7 of 10 area directors. One regional commissioner said the major obstacle to expanding remote service delivery is lack of broadband Internet access in rural areas. Additionally, an area director told us low levels of Internet access and computer literacy are challenges in low-income urban areas. According to SSA’s 2015 survey of recipients of SSI—a program for people with limited income—only 34 percent of adult respondents said they use the Internet. Among adult respondents who do not use the Internet, close to half (43 percent) said they either lack a computer or lack Internet access. 
Some SSA customers simply prefer interacting directly with an SSA staff member to conducting business online, according to 9 of 10 area directors and six of seven field office managers we interviewed. Several of these officials said older people and non-English speakers in particular may feel more comfortable with in-person services. A key part of SSA's strategy to address customers' challenges with access to online delivery of services has been to make these services available in more locations, from SSA field offices to community-based sites such as public libraries, according to agency officials. As noted previously, in recent years SSA has rolled out self-help personal computers in field offices, giving visitors the option of completing business online on an SSA computer; desktop icons on computers in third-party locations such as public libraries, to link users to SSA services online; and a small number of customer service stations in third-party locations, which offer both online and video connections with SSA (see fig. 10 for images of these technologies). Additionally, SSA uses video service delivery to conduct business such as claims and hearings with customers in remote locations. Some SSA staff and a staff member at a community organization shared the benefits of these new approaches. For example, one area director said the self-help personal computers in field offices give customers the option of taking care of their business quickly rather than waiting to speak with an SSA staff member. A field office manager said these computers can help educate the public about the online option for accessing SSA services. With regard to video service delivery, one area director told us it is used to conduct claims and other business with Native Americans on remote reservations, who prefer the personal interaction with SSA provided by the video. 
A staff member at a community organization that hosts a desktop icon site said that for older, non-English speakers, it has been a helpful alternative to visiting a crowded SSA field office. Staff said they walk these clients through each step of using SSA’s online services. SSA officials also reported some implementation challenges with these new technologies and approaches. Most area directors (6 of 10) and field office managers (five of seven) we spoke with said there have been challenges with the use of self-help personal computers, such as customers who may lack technological skills and need assistance to use the computers. SSA has also experienced some difficulties in working with other organizations to host these new technologies. For example, several SSA staff and a staff member at a host site said some entities have had concerns about including desktop icons on their computers due to issues such as data security and increasing workloads for staff at host sites. While these new service delivery approaches are integral to SSA’s efforts to expand remote service delivery, the agency lacks clear performance goals or targets for them. SSA is collecting data on the use of some of these approaches. For example, it collects data on the number of different types of transactions—such as benefit claims and registrations for mySocialSecurity accounts—completed through the self-help personal computers, the number of transactions completed by clicking on desktop icons, and some data on use of video services. However, SSA has not established performance goals for all of these new approaches. In prior work we have identified setting performance goals and collecting performance information related to these goals as key elements of effective customer service. 
Without setting meaningful performance goals (for example, for speed, quality, or customer satisfaction) and measuring progress toward these goals, SSA may miss opportunities to improve its service delivery and potentially encourage more customers to use remote services rather than visit field offices. This may be especially relevant in light of the implementation issues raised by some SSA staff. SSA officials told us they have not established set criteria for assessing the desktop icon sites—such as what level of usage indicates success—because each site is different and must be assessed individually. Officials also said there is little cost to the agency for installing these sites apart from staff time. Similarly, officials said they have no set criteria for the success of video services in third-party locations because each site has different needs and must be evaluated individually. They said they rely on anecdotal information from local video service coordinators to determine whether each site makes sense. However, strategies do exist for developing performance goals that account for local variation. In a prior report, we recognized this challenge in developing national performance goals and identified strategies to address it, such as providing guidance to local sites but letting them develop their own individualized performance goals. Reconfiguring its physical footprint is critical for SSA, as it strives to meet government-wide goals for reducing the federal footprint and as SSA faces long-term budgetary challenges. The agency has recently made progress in streamlining its space needs, but faces several challenges with its efforts to reconfigure its footprint. Until recently, a significant impediment to reducing or downsizing field offices has been the continuing demand for in-person services, but this trend could change as SSA shifts more and more services online. 
Without a long-term facilities plan for reconfiguring its field office structure as it expands options for customers to access services remotely, and in light of the wide variation in remote service use across offices, SSA could miss opportunities to further reduce its footprint. The agency's 2012 space standards have contributed to space reductions, yet the standards may in some cases impede effective customer service because they do not provide sufficient flexibility in how SSA uses space to meet local staffing or technology needs. Finally, SSA's capacity to conduct long-term facilities planning will likely be hampered as long as it lacks a facilities data system that it can use to accurately track the composition of offices in its buildings over time. Similarly, despite SSA's success in expanding remote services, it could be missing opportunities to make additional progress or improve its customer services. If the agency can encourage greater use of remote services, it will potentially make further reconfiguration of its physical footprint more feasible. For example, SSA does not have data on the incidence and cause of staff follow-ups required with online applicants that could inform improvements to these processes. In addition, unless it establishes clear performance goals and collects related data for alternative service approaches such as desktop icons and video service at third-party sites, SSA risks forgoing opportunities to improve service delivery for customers. We recommend that the Acting Commissioner of the Social Security Administration direct the agency to take the following actions: 1. Develop a long-term facility plan that explicitly links to SSA's strategic goals for service delivery, and includes a strategy for consolidating or downsizing field offices in light of increasing use of and geographic variation in remote service delivery. 2. 
Reassess and, if needed, revise its field office space standards to ensure they provide sufficient flexibility to accommodate both unexpected growth in the demand for services and new service delivery technologies. 3. Ensure the REALT application has the capacity to accurately track the composition of SSA's office inventory over time. 4. Develop a cost-effective approach to identifying the most common issues with online benefit claims that require staff follow-up with applicants, and use this information to inform improvements to the online claims process. 5. For its alternative customer service approaches, including desktop icons and video services in third-party sites, develop performance goals and collect performance data related to these goals. We provided a draft of this report to SSA, OMB, and GSA for review and comment, and also provided a relevant excerpt to IRS. See appendix III for SSA's written comments. In its written comments, SSA agreed with our recommendations and noted steps it plans to take to enable further reduction in its footprint, such as expanding the use of video and co-locating field and hearing offices. SSA and IRS also provided technical comments on our draft report, which we incorporated as appropriate. OMB and GSA did not provide comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Acting Commissioner of the Social Security Administration, and other interested parties. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Barbara Bovbjerg at 202-512-7215 or bovbjergb@gao.gov or David Wise at 202-512-2834 or wised@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. The objectives of this report were to (1) describe the trends in the Social Security Administration's (SSA) physical footprint and how it delivers services, (2) assess the steps SSA is taking to reconfigure its physical footprint, and (3) assess the steps SSA is taking to address any challenges to expanding remote service delivery. To address these objectives, we reviewed SSA documents including agency-wide strategic planning documents, facility planning documents, procedures for identifying local facility needs, and SSA studies of customers' use of online services. We determined that the methodologies of SSA internal studies were sufficient to allow us to report certain findings from these studies. We interviewed headquarters officials at SSA who are responsible for facility and service delivery planning, as well as officials from the General Services Administration (GSA), which works with SSA on facility planning; the Office of Management and Budget, which has established a government-wide space reduction initiative; the Internal Revenue Service, which according to agency officials has a pilot co-location initiative with SSA; and the Department of Veterans Affairs and U.S. Postal Service, which are also taking steps to reduce their footprints. We also interviewed officials and reviewed documents from external organizations including the Social Security Advisory Board, the National Academy of Social Insurance, the National Academy of Public Administration, and the American Federation of Government Employees. We applied criteria previously identified by GAO for facility planning and customer service standards, as well as standards for internal control in the federal government. 
In addition, we analyzed SSA administrative data and conducted field work through site visits and phone interviews (see below for more information on these methodologies). Finally, we reviewed pertinent federal laws and regulations. To describe trends in the composition of SSA's facilities and develop an inventory of the majority use of buildings SSA occupies for fiscal year 2016, we obtained data from SSA on its facilities for fiscal years 2006 to 2016. SSA's facility data are compiled from two data sources. First, the office name, location, usable and rentable square feet, annual rent, and lease start and expiration dates are from GSA's Rent on the Web database, which is a public database. Second, the office code and office type are from an SSA internal database. The data for each fiscal year are based on the facilities SSA had as of the September 15 billing date of that year. We assessed the reliability of these data by conducting electronic data tests and interviewing knowledgeable officials about how data are collected and maintained and their appropriate uses. We found the data we reported to be sufficiently reliable for purposes of our reporting objectives. Nonetheless, our analysis was constrained due to limitations with SSA's facility data. We could not develop an exact count of all of SSA's offices, such as area offices, field offices, and hearing offices, in fiscal year 2016 because of the structure of the data set. Specifically, the records in the data set represent individual leases. Each lease is categorized according to a "space type" denoting a type of SSA office, such as area office, field office, and hearing office. Leases may be associated with a single SSA office or with multiple offices. In cases when a lease is associated with multiple offices, the lease's "space type" is categorized according to the office representing the largest amount of square footage among the offices associated with the lease. 
For example, if a lease is associated with an area office and a field office, and the area office occupies more space, then the lease's "space type" is set to area office as the majority use of the space—and we would not know that the lease is also associated with a field office. Thus, rather than presenting an inventory of SSA offices by "space type," we presented an inventory of buildings occupied by SSA according to the majority use of the SSA-occupied space in the building. We categorized each building according to the majority-use "space type" of the SSA lease associated with the building. When a building is associated with multiple SSA leases, we identified the lease's space type representing the largest amount of square footage, and categorized the building according to that lease's "space type." In almost all cases, when there are multiple leases associated with a building, we identified one lease that represented the majority of the SSA-occupied square footage in the building. In less than 1 percent of the buildings, there was no office representing the majority use, so we categorized the building according to the office with the largest amount of space. To describe trends in how SSA delivers services to the public, we used data from several SSA sources. The time frames for the data vary among the sources, ranging up to 11 years of historical data (fiscal years 2006 to 2016). Unless otherwise noted, we obtained data at the national, aggregate level. To examine trends in how the public accesses specific SSA services—such as retirement applications, disability applications, and benefit verification letters—we used SSA's eService Statistics Quarterly Tracking Reports. These reports are drawn from a variety of SSA data sets and tools, including Management Information Central, Google Analytics, and the Executive and Management Information System. 
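The building categorization logic described above (grouping leases by building, then assigning each building the "space type" of the lease with the most square footage) can be sketched in a few lines. This is a minimal illustration with hypothetical field names and values, not the actual analysis code:

```python
from collections import defaultdict

def categorize_buildings(leases):
    """Assign each building a majority-use space type.

    `leases` is a list of dicts with hypothetical keys:
    'building_id', 'space_type', and 'square_feet'. Per the data
    structure described above, each lease already carries the
    space type of its largest associated office.
    """
    # Group the lease records by the building they belong to.
    by_building = defaultdict(list)
    for lease in leases:
        by_building[lease["building_id"]].append(lease)

    # For each building, take the space type of the lease
    # with the largest square footage.
    categories = {}
    for building, bldg_leases in by_building.items():
        largest = max(bldg_leases, key=lambda l: l["square_feet"])
        categories[building] = largest["space_type"]
    return categories

# Illustrative data: building B1 has two leases; the hearing
# office lease is larger, so B1 is categorized as a hearing office.
leases = [
    {"building_id": "B1", "space_type": "field office", "square_feet": 8000},
    {"building_id": "B1", "space_type": "hearing office", "square_feet": 12000},
    {"building_id": "B2", "space_type": "area office", "square_feet": 5000},
]
print(categorize_buildings(leases))
```

As the report notes, this majority-use rule means a smaller office sharing a lease or building (such as the field office in B1 above) does not appear in the resulting inventory.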
They provide data on the number and percentage of customer transactions for various services that were provided through the Internet each year. We also used the reports to calculate the number and percentage of transactions that were provided through all service channels other than the Internet. Other service channels include in-person visits to field offices, phone calls to field offices, and calls to the national 800 number; the reports do not distinguish between these specific non-Internet service channels. We obtained and analyzed reports for fiscal years 2007 to 2016. To examine trends in the number of in-person visits to field offices, we used data from the SSA Unified Measurement System Customer Service Record. We obtained annual data on in-person visits for fiscal years 2006 to 2016. To examine trends in the number of phone calls received by field offices, we used data from SSA's Avaya Reporting System. Data on phone calls received by field offices were only available for fiscal years 2012 to 2016. To examine trends in the number of phone calls to SSA's national 800 number, we used data from the Cisco Unified Intelligent Contact Management (Unified ICM) System. We obtained annual data on the number of calls received for fiscal years 2006 to 2016. To examine trends in the proportion of hearings conducted by video, we used data from the Case Processing Management System. We obtained annual data on total hearings held and hearings held by video for fiscal years 2007 to 2016. To examine geographical variation in how customers access services, we used data from SSA's Local Management Information, which draws from several SSA databases. We obtained data for the 13 field offices that were covered in our site visits and interviews. 
For these offices, we obtained fiscal year 2016 data on the number of retirement applications, disability applications, and benefit verification letter requests completed online and through other service channels (which can include field offices and in some cases the national 800 number). To examine the number of times customers used certain self-service delivery technologies, we used data from two different SSA systems. We used data on the number of transactions through self-help personal computers during fiscal year 2016 from a MySQL database that SSA uses to record these transactions. We used data on the number of times customers accessed desktop icons during fiscal year 2016 from a Google Analytics tool that SSA developed to record these transactions. To examine the workload for certain SSA services that are not yet fully available online, we obtained data from different sources. We used data on the number of Supplemental Security Income applications during fiscal year 2016 from the agency’s District Office Workload Report, which draws from several other SSA databases. We used data on the number of replacement Social Security card requests during fiscal year 2016 from the SSA Unified Measurement System Counts Data Warehouse. We assessed the reliability of these data by interviewing and obtaining written responses from SSA officials and by reviewing documentation such as data dictionaries. We determined the data to be sufficiently reliable for our reporting purposes. We conducted site visits to four states and interviewed 10 area directors and 5 regional commissioners, using field offices as our unit of selection. We used a four-step process to identify field office locations. We began with a list of all SSA field offices as of June 2016. This list was used to identify offices that had been part of an office consolidation since fiscal year 2013 or had a service area review conducted between January 2014 and June 2016. 
Step 1: We narrowed the list to identify offices with a lease expiration date between January 1, 2015, and December 31, 2019. Step 2: We used the fiscal year 2015 visitor-to-staff ratio to divide the resulting list of 65 field offices into five equal groups. Step 3: We selected a non-generalizable sample of three field offices from each group to obtain a range of regions, Internet use, and urban/rural designation. Because there were more field offices with service area reviews than office consolidations, we selected one "consolidation" field office and two "service area review" field offices from each group. Step 4: From the list of 15 field offices, we selected 10 for interviews with area directors based on the purpose of the service area review (when applicable) and "unique" characteristics. For example, we selected the downtown St. Louis, Missouri, field office because the service area review was conducted to explore the possibility of consolidating the office with the St. Louis Central West End office. One reason we chose the Billings, Montana, field office is that it has one of the largest service areas (45,000 square miles), resulting in potentially very long distances to reach the field office. In addition to the Wilmington, Delaware, field office, our pilot site visit location, we chose three additional site visit locations from the 10 selected field offices based on (1) unique services provided at the selected field office or in the surrounding area such as being co-located with an ODAR permanent remote site or having a video unit linked to an external location (e.g., another city or town) and (2) geographic diversity. To be consistent with the selection process above, we chose one office that had been part of a consolidation and two that had a service area review. 
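The grouping in Step 2 (ranking the 65 offices by visitor-to-staff ratio and splitting the ranked list into five equal groups) can be sketched as follows. The office names and ratios below are made up for illustration; this is not the actual selection code:

```python
def split_into_groups(offices, n_groups=5):
    """Rank offices by visitor-to-staff ratio and split the ranked
    list into n_groups equal groups.

    `offices` is a list of (office_name, visitor_to_staff_ratio)
    tuples; with 65 offices and 5 groups, each group holds 13.
    """
    ranked = sorted(offices, key=lambda office: office[1])
    size = len(ranked) // n_groups
    return [ranked[i * size:(i + 1) * size] for i in range(n_groups)]

# Illustrative data: 10 offices split into 5 groups of 2,
# from lowest to highest visitor-to-staff ratio.
ratios = [3.1, 7.4, 2.2, 9.0, 5.5, 1.8, 6.3, 4.7, 8.2, 0.9]
offices = [(f"office_{i}", r) for i, r in enumerate(ratios)]
groups = split_into_groups(offices)
```

Sampling a fixed number of offices from each group (as in Step 3) then ensures the selection spans the full range of visitor-to-staff ratios rather than clustering at one extreme.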
At each of the three site visit locations aside from Wilmington, Delaware, we visited two field offices, a hearing office, and a third-party location that provided remote service delivery (e.g., a Korean community organization with a Social Security Express desktop icon). The first field office was selected from the original list of 10 offices discussed above. The second field office in each location was chosen based on its proximity to the first field office or a hearing office that we visited. For example, we selected the Lexington, Kentucky, field office because it is located in the same building as the hearing office we visited. As a result, in total we visited or interviewed the area director associated with 13 field offices (see table 2). During each visit we interviewed the field office manager, hearing office manager, and chief Administrative Law Judge, and conducted a group interview with a random selection of field office staff. In total we spoke with 30 field office staff. We also observed staff-customer interactions at the field offices that were selected; observed hearings; and toured each field and hearing office. The results of our site visits and interviews with field staff are not generalizable to all of SSA's field and hearing office staff. Finally, we interviewed five regional commissioners. We interviewed the regional commissioners for the four offices selected for site visits, including the regional commissioner for the Wilmington, Delaware, office, as well as the commissioner for one region where a high number of service area reviews had been conducted. Appendix II provides additional information about the three field offices chosen as site visit locations based on our site selection process. See appendix I for more information on our site selection process. The office has experienced a decline in staff and has excess space, according to local officials. Footprint: 10,678 sq. ft. Square footage per staff: 562. Annual visitors: 19,013. 
However, officials said there have been challenges with downsizing the office space. For example, officials explained that SSA has been unable to identify another agency component—such as a hearing office—to share the building occupied by the field office. Many customers in the service area prefer direct interaction with SSA staff to online services, due to factors including poor Internet access in the area, according to a local official. The Kingston field office was consolidated with the Poughkeepsie field office in March 2014, with the Poughkeepsie office absorbing Kingston staff and customers, according to SSA officials. The consolidation has created challenges for the Poughkeepsie field office, as there is higher walk-in traffic but further expansion of the office space is not possible, according to the field office manager. Additionally, according to staff at a Kingston social service agency, it can be challenging for some individuals in their area to get to the Poughkeepsie field office due to poor public transit connections. The field office is located in downtown Poughkeepsie, and is responsible for a service area that covers three counties, is somewhat rural, and includes people with a mix of income levels, according to the field office manager. Incidents at the office have been disruptive and caused several closures in recent years, according to field office staff. Some customers have been banned for threatening or violent behavior; according to the manager, approximately 25 customers have been banned from the office since 2015. The field office is in a densely populated urban area with diverse populations, both economically and linguistically. In addition to affluent neighborhoods, the office serves a large number of homeless individuals. The service area population is expected to continue growing. The office has one of the highest numbers of walk-in visitors in the Bay Area due to a large number of homeless individuals and limited English speakers. Over half of the office staff members are bilingual. 
In addition to the contacts named above, Erin Godtland (Assistant Director), Michael Armes (Assistant Director), Lorin Obler (Analyst-in-Charge), Swati Deo, Brian Wanlass, Sydney Petersen, Susan Aschoff, James Bennett, Nicole Jarvis, Lisa Pearson, James Rebbe, Jerome Sandau, Monica Savoy, and Almeta Spencer made key contributions to this report. High Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017. Social Security: Improvements to Claims Process Could Help People Make Better Informed Decisions about Retirement Benefits. GAO-16-786. Washington, D.C.: September 14, 2016. Federal Real Property: GSA Could Decrease Leasing Costs by Encouraging Competition and Reducing Unneeded Fees. GAO-16-188. Washington, D.C.: January 13, 2016. Social Security Administration: Long-Term Strategy Needed to Address Key Management Challenges. GAO-13-459. Washington, D.C.: May 29, 2013. Federal Real Property: Strategic Partnerships and Local Coordination Could Help Agencies Better Utilize Space. GAO-12-779. Washington, D.C.: July 25, 2012. Federal Real Property: National Strategy and Better Data Needed to Improve Management of Excess and Underutilized Property. GAO-12-645. Washington, D.C.: June 20, 2012. Social Security Administration: Better Planning Needed to Improve Service Delivery. GAO-10-586T. Washington, D.C.: April 15, 2010. Social Security Administration: Service Delivery Plan Needed to Address Baby Boom Retirement Challenges. GAO-09-24. Washington, D.C.: January 9, 2009.
SSA has one of the largest physical footprints of any federal agency. It has about 1,500 facilities nationwide, including field offices where customers can meet with SSA staff to apply for benefits and conduct other business. SSA is re-examining its footprint in light of expanding online and other remote service options and a 2012 government-wide initiative to make more efficient use of physical space. GAO was asked to examine SSA's changing footprint and service delivery. This report (1) describes the trends in SSA's physical footprint and service delivery, (2) assesses the steps SSA is taking to reconfigure its footprint, and (3) assesses the steps SSA is taking to address any challenges to expanding remote service delivery. GAO reviewed SSA documents and data on facilities and service delivery for fiscal years 2006 to 2016; interviewed officials from SSA and other federal agencies; and visited SSA facilities in four states, chosen for diversity in geographic location, visitor to staff ratio, and proportion of local residents with Internet access, among other factors. The Social Security Administration (SSA) has reduced its physical footprint and expanded delivery of services remotely, including online. SSA reduced the total square footage of its facilities by about 1.4 million square feet (or about 5 percent) from fiscal years 2012 to 2016, according to GAO's analysis, by applying new standards for determining the size of offices and consolidating facilities (see figure). SSA has also expanded the services it offers remotely, and online use has increased for certain services such as disability and retirement applications. Despite this increase, in-person contacts at field offices have not changed substantially, with about the same number in fiscal year 2016 as in fiscal year 2007 (approximately 43 million). This may be due to growing demand for services as well as certain services not yet being fully available online. 
SSA's steps to reconfigure its footprint do not fully incorporate changes in service delivery, such as the expansion of remote service delivery. As mentioned above, SSA has been expanding the services it delivers online. While SSA has a strategic goal of re-thinking its footprint as it expands remote service delivery, it lacks a facility plan that links to this goal, as called for by facility planning criteria. Without a plan that considers the increasing use of online services and wide variation in online service use across field offices, SSA may miss opportunities to further reduce its footprint. SSA is taking steps to make remote services easier to use, for example by adding new features to its website and offering alternate approaches for accessing services, but does not consistently evaluate them, which could limit its ability to shift more services online and further reconfigure its footprint. For example, SSA has added features allowing online customers to interact directly with SSA staff. However, SSA does not track staff follow-ups to deal with any errors in online benefit applications in order to improve them, as called for by federal internal control standards. To enhance access to remote services, SSA has introduced alternate service approaches such as videoconferencing in third-party sites; however, it does not have performance goals for these approaches. GAO has previously identified performance goals as a best practice, which may help agencies improve their customer service. GAO is making five recommendations, including that SSA develop a facility plan for reconfiguring its footprint as it expands remote service delivery, track staff follow-ups of online applications, and develop performance goals for alternate service approaches. SSA agreed with GAO's recommendations.
OMB Circular A-126 and DOD guidance require that DOD's aircraft for special air missions operate primarily for official purposes to: perform agency responsibilities, as directed by the Secretary of Defense or the President; transport officials whom the President or the Secretary of Defense designates due to the need for security, secure communications, or exceptional scheduling; and transport officials for other agency business, such as giving speeches, attending conferences, and making routine site visits. DOD guidance also states that certain officials, such as the Secretary of Defense and Chairman of the Joint Chiefs of Staff, are required at all times to use military aircraft in lieu of commercial aircraft. Because of this requirement, these officials are authorized to fly on military aircraft for unofficial travel. Guests may accompany authorized senior federal government officials when space is available; however, schedulers may not select a larger aircraft to accommodate guests. Moreover, guests are required to pay the U.S. Treasury the cost of a coach-class ticket for a comparable itinerary. The 89th Airlift Wing, which is headquartered at Joint Base Andrews in Maryland, provides worldwide transportation for cabinet members, members of Congress, and other high-ranking dignitaries of the United States and foreign governments. Within the 89th Airlift Wing, the Presidential Airlift Group manages air transportation exclusively for the President. The Office of Special Air Missions, which is within the Office of the Air Force's Vice Chief of Staff, coordinates and schedules the 89th Airlift Wing's special air missions. As of December 2013, the 89th Airlift Wing supported special air missions by maintaining a fleet of 15 aircraft, which was composed of 5 C-20Bs (equivalent to a Gulfstream III), 6 C-37A/Bs (equivalent to a Gulfstream V), and 4 C-32s (equivalent to a Boeing 757). Figure 1 shows examples of these aircraft. 
Appendix II provides details on the capabilities of aircraft in the 89th Airlift Wing. The 89th Airlift Wing’s flight operations are funded from annual appropriations to the Air Force’s operation and maintenance account. The Air Mobility Command manages the portion of the account that funds the wing. For fiscal year 2012, the wing reported an annual operating budget of $70.7 million. OMB, DOD, and White House guidance outlines the process that executive branch agencies, the White House, and Congress follow to request, approve, prioritize, and schedule travel by senior government officials on military aircraft (see fig. 2). Senior federal government officials seeking to travel on 89th Airlift Wing aircraft must submit their requests for review and agency-level approval within their respective agencies. The agency is expected to review the itinerary and passenger list, ascertain whether the trip is for official reasons, and determine whether the agency should reimburse DOD for the cost of the travel. When the travel is directed by the President, the request is reviewed and approved by the Office of the Secretary of Defense’s Executive Secretary. Members of Congress submit their requests to the Office of the Assistant Secretary of Defense for Legislative Affairs for review and approval. Requests from executive branch agencies, including those prepared by DOD, are submitted to the Office of the Secretary of Defense’s Executive Secretary for review and approval. Once approved, requests are forwarded to the Air Force’s Office of Special Air Missions for scheduling and a unique five-digit identifying number is assigned for each mission. If the 89th Airlift Wing does not have sufficient aircraft to accommodate all of the requests for travel, the Office of Special Air Missions works with the agencies to accommodate their requests. Accommodating requests may entail changing the time or dates, or using aircraft from sources outside of the 89th Airlift Wing. 
If Air Force schedulers cannot accommodate competing requests, schedulers plan the missions according to the traveling officials’ priority rankings, which are shown in table 1. Federal agencies generally are required to reimburse DOD for the cost of special air missions flown by the 89th Airlift Wing. After the completion of a special air mission that is reimbursable to DOD, the Office of Special Air Missions sends a memo to Air Mobility Command’s Financial Management Directorate that includes an invoice for the mission’s costs. The Financial Management Directorate reviews the invoice information and, if approved, forwards it to the Defense Finance and Accounting Service (DFAS), which inputs the invoice information into the Air Force’s General Accounting and Finance System. To collect payment for the mission, DFAS notifies the Department of the Treasury to transfer funds for the mission’s costs from the traveling agency’s budget account into the Air Force’s budget account. Once the transfer of funds is complete, DFAS records the reimbursement in the Air Force’s accounting system. This process is shown in figure 3. In accordance with OMB Circular A-126, DOD calculates the reimbursement amount for special air missions by multiplying a predetermined hourly rate for the particular type of aircraft by the number of hours flown. The Air Force sets the hourly rate—based on historical actual costs and projected mission costs—to cover fuel, maintenance, and the salaries of military personnel who plan and execute the missions. Table 2 shows the reimbursement rates for federal agencies for fiscal years 2008 through 2012. These reimbursement rates do not include the cost of in-flight services such as catering and secure communications, which are provided by DOD-approved private contractors and are billed separately. 
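The reimbursement calculation described here (a predetermined hourly rate for the aircraft type multiplied by the hours flown) is simple arithmetic and can be sketched as follows. This is a minimal illustration, not DOD's actual billing code: only the fiscal year 2012 C-37A/B rate of $3,273 per flying hour comes from this report; the other rates are placeholders.

```python
# Sketch of the OMB Circular A-126 reimbursement calculation:
# hourly rate for the aircraft type multiplied by hours flown.
HOURLY_RATES = {
    "C-37A/B": 3273,  # fiscal year 2012 rate cited in this report
    "C-20B": 3000,    # illustrative placeholder, not an actual rate
    "C-32": 9000,     # illustrative placeholder, not an actual rate
}

def reimbursement(aircraft_type: str, hours_flown: float) -> float:
    """Amount a traveling agency owes DOD for one special air mission."""
    return HOURLY_RATES[aircraft_type] * hours_flown

# A 4.5-hour mission on a C-37A/B at the fiscal year 2012 rate:
print(reimbursement("C-37A/B", 4.5))  # 14728.5
```

Note that in-flight services such as catering and secure communications are billed separately and would not be included in this figure.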
From fiscal year 2008 through fiscal year 2012, the 89th Airlift Wing flew a total of 2,513 special air missions—522 special air missions during fiscal year 2012, a 13-percent increase over the 463 missions that it flew during fiscal year 2008. The cost of special air missions ranged from about $17 million in fiscal year 2008 to about $26 million in fiscal year 2012, and the fees paid for secure communication services ranged from about $4 million in fiscal year 2008 to $7 million in fiscal year 2012. From fiscal year 2008 through fiscal year 2012, the 89th Airlift Wing flew 2,513 special air missions to transport senior federal government officials at a cost of about $96 million. As table 3 shows, over the 5-year period that we reviewed, the number of special air missions—which include reimbursable and nonreimbursable missions—flown by the 89th Airlift Wing generally increased. Overall, the number of special air missions increased by 13 percent from fiscal year 2008 (463 missions) through fiscal year 2012 (522 missions). Officials from the 89th Airlift Wing told us that this increase in special air missions was caused by many factors, including greater demand from DOD and other federal agencies for 89th Airlift Wing aircraft and the limited availability of other military and government-operated aircraft to transport senior federal government officials. The annual cost of special air missions fluctuated from a low of $14 million in fiscal year 2009 to a high of $26 million in fiscal year 2012. Officials attributed this cost fluctuation to the increase in the number of missions flown, as well as cost changes in fuel, aircraft maintenance, and military personnel salaries. The federal agencies with the largest number of special air missions were DOD (1,928 missions) and the Department of State (220 missions), as shown in figure 4. Members of Congress had the next highest number of missions with 133 missions. 
Collectively, these two agencies and Congress accounted for about 91 percent of all special air missions flown by the 89th Airlift Wing. Examples of why agencies used 89th Airlift Wing aircraft—instead of their own agencies’ aircraft or commercial aircraft— include: An official from the Department of State was directed by the President to travel on a nonreimbursable basis in furtherance of foreign policy, for which commercial aircraft would not be able to provide the necessary secure communications and security. The Secretary of Veterans Affairs traveled on a reimbursable basis due to exceptional scheduling demands. The Attorney General traveled on a reimbursable basis because the Department of Justice’s Gulfstream aircraft was unavailable while being used on another mission. See appendix III for a complete breakout of the special air missions flown by the 89th Airlift Wing, by traveling agency and by fiscal year. Of the four types of military aircraft that the 89th Airlift Wing used for special air missions during fiscal years 2008 through 2012, the C-20B (Gulfstream III) was used most often to transport senior federal government officials, accounting for over 40 percent of all missions. The C-20B can carry up to 12 passengers and is used primarily for travel within the continental United States. The C-37A (Gulfstream V) and C-37B (Gulfstream V) also can carry up to 12 passengers, but can fly more than twice the distance and hours as the C-20B. The C-32A (Boeing 757) can carry up to 46 passengers, and can fly about the same distance and hours as the C-37A and C-37B. Table 4 shows the number of special air missions flown on each of the 89th Airlift Wing’s aircraft, and by fiscal year. Appendix II provides more information on the capabilities of these aircraft. 
During fiscal years 2008 through 2012, federal agencies paid DOD about $29 million for providing secure communications services (e.g., secure telephone calls, e-mail, and Internet access) on 89th Airlift Wing special air missions (see table 5). Annual fees for secure communications services ranged from about $4 million in fiscal year 2008 to $7 million in fiscal year 2012. Agencies pay DOD for secure communications services that are used during flights. Each agency is charged a fee of $500 to $1,000 per special air mission for secure communications services and pre-flight equipment testing, regardless of whether agencies use this capability. Additional charges are applied on a per-unit basis, so agencies are charged per minute of telephone time and per kilobyte of data downloaded. Agencies that fly frequently on military aircraft can enter into an annual support agreement with the 89th Airlift Wing that includes an estimate of the charges they will pay for secure communications services in the upcoming fiscal year. DOD and other federal agencies followed DOD guidance on the high-priority movement of senior federal government officials on military aircraft. This guidance addresses the request, approval, prioritization, scheduling—and in DOD’s case—reimbursement from non-DOD agencies for special air missions. However, DOD does not identify and track whether it was reimbursed for each special air mission. DOD guidance does not specify that DOD officials should track reimbursements by special air mission. However, internal control standards emphasize the importance of activities, such as documenting transactions, to help ensure that management directives are carried out—in this case, that each special air mission was reimbursed when required. We found that DOD was unable to document reimbursement for 16 (about 9 percent) of 180 special air missions; the 16 missions cost about $353,000 during fiscal years 2009 through 2012. 
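The secure communications fee structure just described (a flat per-mission charge plus per-unit usage charges) can be sketched as follows. The $500 to $1,000 base fee range is from this report, but the per-minute and per-kilobyte rates below are illustrative assumptions; the report does not give the actual per-unit rates.

```python
def secure_comms_bill(base_fee: float, phone_minutes: float,
                      data_kilobytes: float,
                      per_minute: float = 5.00,      # assumed rate
                      per_kilobyte: float = 0.10) -> float:  # assumed rate
    """Per-mission secure communications charge: a flat base fee
    (the report cites $500 to $1,000, charged whether or not the
    capability is used) plus per-unit charges for telephone minutes
    and kilobytes of data downloaded."""
    return base_fee + phone_minutes * per_minute + data_kilobytes * per_kilobyte

# A mission with a $750 base fee, 30 minutes of calls, and 500 KB of data:
print(secure_comms_bill(750, 30, 500))  # 950.0
```

Because the base fee applies regardless of use, a mission with no calls or data still incurs the flat charge.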
DOD established processes governing the use of military aircraft by senior federal government officials from DOD, other executive branch agencies, the White House, and Congress, including a process for collecting reimbursements when required. These processes outline procedures for requesting, approving, and prioritizing travel to minimize costs and to effectively use limited resources. For example, the Office of Special Air Missions established guidance outlining its process for contacting and coordinating with traveling agencies prior to scheduling a mission. Agencies follow the processes outlined in the guidance in order to be approved to use military aircraft. Further, the processes implement the requirement that federal agencies reimburse DOD for the cost of special air missions flown by the 89th Airlift Wing. Traveling agencies reimburse at a rate that the Air Force sets annually for each aircraft type to cover the cost of fuel and maintenance, contractor fees, and salaries of military personnel who plan and execute the missions. For example, for fiscal year 2012, the Air Force set a reimbursement rate of $3,273 per flying hour for each of six C-37A/B aircraft in the 89th Airlift Wing. Since our 1999 report, in which we found that DOD did not collect payments for all billed reimbursable missions, DOD established partnership agreements with several agencies that travel frequently on military aircraft. These agreements allow for electronic transfer of funds via the Department of the Treasury’s Intra-governmental Payment and Collection System to cover the cost of a completed special air mission. DFAS officials told us that under these partnership agreements, when a special air mission is approved and scheduled, the traveling agency agrees to pay an estimated amount of money to cover the cost of that mission. After the 89th Airlift Wing notifies DFAS that a mission has been completed, DFAS transfers funds from the traveling agency to DOD to cover mission costs. 
If a traveling agency does not have a partnership agreement with DOD to enable transfer of funds through the Intra-governmental Payment and Collection System, DFAS will bill the agency for reimbursement costs. We determined that traveling agencies should have reimbursed DOD approximately $5 million for 180 (about 9 percent) of the 2,050 special air missions flown during fiscal years 2009 through 2012. However, DOD does not identify and track the billing and reimbursement for each special air mission, and officials could not document whether agencies paid all required reimbursements. Of the 180 reimbursable special air missions flown by the 89th Airlift Wing during fiscal years 2009 through 2012, 77 missions involved agencies that, given the sensitivity of their work, are not identified in this report at DOD’s request. The agencies with the next largest numbers of reimbursable special air missions were the Department of State (49 missions) and the Department of Homeland Security (34 missions). Collectively, these three groups of agencies accounted for about 89 percent of reimbursable special air missions flown on the 89th Airlift Wing’s aircraft. Table 6 shows the number and cost of reimbursable special air missions by each traveling agency during fiscal years 2009 through 2012. From fiscal years 2008 through 2012, the Air Force’s 89th Airlift Wing flew senior federal government officials on more than 3,400 special air missions. DOD has developed a process to manage requests, approvals, and scheduling that is consistent with OMB and DOD guidance. However, without a process to identify and track the costs and applicable agency reimbursements on a mission-by-mission basis, DOD is unable to document whether it was paid for all required reimbursements. Consequently, DOD may not be receiving required reimbursements for the cost of special air missions. 
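The tracking gap described here is essentially a record-matching problem: billing and payments need to be joined on each mission's unique five-digit identifier rather than on an agency budget account that may cover several missions at once. A minimal sketch of such a per-mission ledger follows; all mission numbers and dollar amounts are hypothetical.

```python
# Hypothetical ledgers keyed by each mission's unique identifier.
billed = {"12345": 14728.50, "12346": 9819.00, "12347": 6546.00}
reimbursed = {"12345": 14728.50, "12347": 6546.00}

def unreimbursed_missions(billed: dict, reimbursed: dict) -> dict:
    """Return the missions that were billed but not fully reimbursed."""
    return {mission: amount for mission, amount in billed.items()
            if reimbursed.get(mission, 0.0) < amount}

print(unreimbursed_missions(billed, reimbursed))  # {'12346': 9819.0}
```

Keying both records to the mission identifier makes it straightforward to flag missions without documented reimbursement, even when an agency covers multiple missions with a single lump-sum payment.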
To ensure that federal agencies fully reimburse DOD for special air missions when required to do so, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) and Chief Financial Officer to revise guidance and require that DOD organizations develop and implement a process that identifies and tracks each special air mission, such as tracking by mission number or another unique identifier, from billing through reimbursement for all applicable costs. We provided a draft of this report to DOD for official review and comment. In its written response, reproduced in appendix IV, DOD concurred with our recommendation and provided additional comments. DOD also provided technical clarifications, which we incorporated where appropriate. In its written comments, DOD stated that our recommendation would address the documentation problems that we identified in the reimbursement process. Specifically, DOD stated that it anticipates issuing further regulatory guidance to address our recommendation, and the Office of the Secretary of Defense’s Executive Secretary has asked federal agencies requesting the use of military aircraft to include a line of accounting code in each request memo. According to DOD, this accounting code, along with the Air Force’s unique five-digit number for each special air mission, will ensure repayments of appropriate mission expenses. We believe that DOD is taking a step in the right direction to improve its internal controls by including a line of accounting code in the request memos, which would identify the agency budget account to charge and would make the billing process quicker. However, agencies often reimburse for multiple missions with one payment under the same budget account, which would still not identify whether each special air mission was reimbursed. 
We continue to believe that DOD should use the unique identifiers that it assigns to each special air mission to track each mission throughout the billing and reimbursement process to better ensure that DOD is fully reimbursed, as appropriate, for each mission. Doing so would allow DOD to fix a gap in internal controls that does not allow DOD to readily match billing with subsequent reimbursements for each special air mission and would fully address our recommendation. During our review, we and DOD officials had to expend considerable time and effort to identify, obtain, and analyze documentation to determine whether the department was appropriately reimbursed for each special air mission. In its written comments, DOD also stated that we did not include a large portion of the missions flown by the 89th Airlift Wing that supported travel by the President, Vice President, and their families in this review. We recognize that these missions are a large part of the 89th Airlift Wing’s total missions; however, the scope of this review focused solely on travel by high-level executives within DOD and other federal agencies. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense (Comptroller); the Secretary of the Air Force; the Office of Management and Budget; General Services Administration; the White House Military Office; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. 
To determine the extent to which senior federal government officials have used military aircraft and the costs associated with this travel, we obtained and analyzed information from the 89th Airlift Wing regarding special air missions during fiscal years 2008 through 2012 for executive branch agencies and Congress. Specifically, we tabulated the number of special air missions flown by agency and by fiscal year. We excluded from our review special air missions on aircraft outside of those in the Air Force’s 89th Airlift Wing that DOD infrequently uses to transport senior officials. The aircraft used outside of the 89th Airlift Wing include assets of the Air Force’s 310th Air Squadron located at MacDill Air Force Base, Florida; the 201st Air Squadron at Joint Base Andrews, Maryland; the 932nd Airlift Wing at Scott Air Force Base, Illinois; as well as aircraft of the Navy, Army, U.S. Pacific Command, U.S. European Command, and U.S. Africa Command. To determine the costs associated with the 89th Airlift Wing’s special air missions, we calculated the cost of special air missions flown during fiscal years 2008 through 2012 using the flight duration of the mission and the cost per flying hour associated with the type of aircraft used. The cost per flying hour is calculated by the Air Force at the end of each fiscal year by dividing the actual annual costs for fuel, maintenance, and the salaries of military personnel who plan and execute the missions by the number of hours flown for each aircraft. We analyzed the data for any anomalies, such as inaccurate or missing information. Further, we met with officials from the Air Force’s Office of Special Air Missions and the White House Military Office to discuss how the mission data was accumulated and validated for accuracy and completeness. 
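The two calculations in this methodology (deriving a cost per flying hour, then applying it to a mission's flight duration) can be sketched as below. The annual cost and hours figures are hypothetical, chosen only to illustrate the arithmetic.

```python
def cost_per_flying_hour(annual_costs: float, hours_flown: float) -> float:
    """Air Force method described above: actual annual costs for fuel,
    maintenance, and personnel salaries divided by the hours flown for
    the aircraft type."""
    return annual_costs / hours_flown

def mission_cost(flight_hours: float, hourly_rate: float) -> float:
    """GAO's mission-cost estimate: flight duration times the hourly rate."""
    return flight_hours * hourly_rate

# Hypothetical figures: $6,546,000 in annual costs over 2,000 hours flown.
rate = cost_per_flying_hour(6_546_000, 2_000)
print(rate)                     # 3273.0
print(mission_cost(3.0, rate))  # 9819.0
```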
We determined that the data on the number of special air missions flown, the names of the traveling agencies that flew on these missions, and the costs associated with these missions were sufficiently reliable for the purposes of our review. To determine the extent to which agencies have followed guidance governing the use of military aircraft by senior federal government officials, we analyzed relevant directives, instructions, manuals, and other guidance issued by DOD, OMB, and the White House, reviewed relevant laws, and met with cognizant officials from these organizations, and from the General Services Administration and the Department of the Treasury. We compared the organizations’ guidance and the Standards for Internal Control in the Federal Government to the request, approval, scheduling, and reimbursement processes adopted by the 89th Airlift Wing, Air Force’s Office of Special Air Missions, Air Force’s Air Mobility Command, DFAS, and the White House Military Office to determine whether the processes were in accordance with the guidance and internal control standards. This review does not include travel by the President, Vice President, First Lady, and Second Lady. To identify those special air missions for which DOD should be reimbursed and the amounts to be reimbursed, we analyzed Air Force and DFAS billing and payment data, which were available for the past four years from fiscal years 2009 through 2012. To determine if DOD received payments for the identified reimbursable missions, we reviewed billing and reimbursement documentation and met with officials from DFAS and Air Mobility Command to discuss their process for collecting and tracking reimbursements. We determined that the reimbursement data were sufficiently reliable for the purposes of our review. We conducted this performance audit from March 2013 to February 2014 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. [Appendix II table: passenger capacity of 12 (10 with ravens) per aircraft, with required-use officials including the Vice President, the Secretary of State, the Secretary of Defense, the Deputy Secretary of Defense, and the Chairman and Vice Chairman of the Joint Chiefs of Staff.] Appendix III: 89th Airlift Wing’s Number of Special Missions, by Traveling Agency and by Fiscal Year (in Alphabetical Order). In addition to the contact named above, Marc Schwartz, Assistant Director; John Beauchamp; Richard Burkard; Cynthia Grant; Hillary Hampton; Gina Hoffman; Amie Steele Lesser; Richard Powelson; and Christine San made key contributions to this report.
Senior federal government officials—including high-ranking DOD officials, cabinet members, and members of Congress—are required or authorized to fly on military aircraft. This high-priority movement of senior government officials, known as special air missions, is accomplished with a fleet of 15 aircraft assigned to the Air Force's 89th Airlift Wing, located at Joint Base Andrews, Maryland. Both the Office of Management and Budget and DOD issued guidance on the management and use of these aircraft. GAO was requested to examine government officials' use of military aircraft and the regulations and policies that govern such travel. GAO examined the extent to which (1) senior federal government officials have used military aircraft and the costs associated with this travel, and (2) agencies have followed guidance governing the use of military aircraft by senior federal government officials. GAO reviewed relevant legislation and guidance, DOD's processes, and special air mission data from fiscal years 2008 through 2012. This review does not include air travel by the President, Vice President, First Lady, and Second Lady. The Air Force's 89th Airlift Wing flew 2,513 special air missions for senior federal government officials during fiscal years 2008 through 2012, with the number of missions increasing by 13 percent from 463 missions in fiscal year 2008 to 522 missions in fiscal year 2012. The cost of special air missions ranged from about $17 million in fiscal year 2008 to about $26 million in fiscal year 2012, and the fees paid for secure communication services ranged from about $4 million in fiscal year 2008 to $7 million in fiscal year 2012. The federal agencies with the greatest number of special air missions were the Department of Defense (DOD) and the Department of State. Members of Congress had the next highest number of missions. Collectively, these agencies and Congress constituted about 91 percent of all special air missions flown by the 89th Airlift Wing. 
DOD and other federal agencies followed DOD guidance on the high-priority movement of senior federal government officials on military aircraft, but DOD does not identify and track reimbursements for each mission. This guidance addresses the request, approval, prioritization, scheduling, and reimbursement from non-DOD agencies for special air missions. Federal internal control standards emphasize the importance of internal control activities, such as documenting transactions and events, to help ensure that management's directives are carried out. However, DOD does not identify and track whether it was reimbursed for each special air mission. GAO determined that traveling agencies should have reimbursed DOD approximately $5 million for 180 special air missions flown during fiscal years 2009 through 2012. However, GAO found that DOD was unable to document reimbursement for 16 of the 180 special air missions. Specifically, 2 of the 16 missions were not billed to the proper agency in fiscal year 2010. DOD officials stated that the remaining 14 missions were properly billed, but officials could not document reimbursement. DOD officials told GAO that they do not routinely identify and track reimbursements for each mission and that DOD's guidance does not specifically require organizations to identify and track each special air mission throughout the billing and reimbursement process, such as tracking by mission number or another unique identifier. This is partly because agencies at times make one payment to DOD for multiple missions. However, without a process to identify and track each special air mission for reimbursement, DOD cannot ensure that it has been fully reimbursed as required by DOD guidance. 
To ensure that federal agencies fully reimburse DOD for special air missions when required, GAO recommends that DOD develop a process to identify and track each mission, such as by mission number or another unique identifier, from billing through reimbursement for all applicable costs. In written comments on a draft of the report, DOD concurred with the recommendation.
The Magnuson-Stevens Act provides for the conservation and management of fishery resources in the United States. The act established eight regional fishery management councils that are responsible for preparing plans for managing fisheries in federal waters and submitting them to the Secretary of Commerce for approval. NMFS, within the Department of Commerce’s National Oceanic and Atmospheric Administration, is responsible for implementing these plans. The eight councils are New England, Mid-Atlantic, South Atlantic, Gulf of Mexico, Caribbean, Pacific, North Pacific, and Western Pacific. The Magnuson-Stevens Act, as amended by the Sustainable Fisheries Act, also establishes national standards for fishery conservation and management. The fishery councils use these standards to develop appropriate plans for conserving and managing fisheries under their jurisdiction. For example: National Standard 1 requires that conservation and management measures prevent overfishing while achieving, on a continuing basis, the optimum yield from each fishery; National Standard 4 requires that conservation and management measures not discriminate between residents of different states; National Standard 5 requires that conservation and management measures, where practicable, consider efficiency in the use of fishery resources; and National Standard 8 requires that fishery conservation and management measures take into account the importance of fishery resources to fishing communities in order to provide for the sustained participation of these communities in the fishery and, to the extent practicable, minimize adverse economic impacts on these communities. In addition to the national standards, the Magnuson-Stevens Act also requires that new IFQ programs consider providing opportunities for new individuals to enter IFQ fisheries. 
The Magnuson-Stevens Act defines a fishing community as one that is substantially dependent on, or engaged in, harvesting or processing fishery resources to meet social and economic needs. The definition includes fishing vessel owners, operators, and crew, and U.S. fish processors based in such a community. NMFS guidance further defines fishing community to mean a social or economic group whose members reside in a specific location. At the time of our review, NMFS had implemented three IFQ programs: (1) the Mid-Atlantic surfclam/ocean quahog program in 1990, (2) the South Atlantic wreckfish program in 1992, and (3) the Alaskan halibut and sablefish (black cod) program in 1995. New IFQ programs were being considered in other commercial fisheries, such as the Bering Sea crab; the Gulf of Alaska groundfish (e.g., pollock, cod, and sole); and the Gulf of Mexico red snapper. Under IFQ programs, fishery managers set a maximum, or total allowable catch (TAC), in a particular fishery—typically for a year—based on stock assessments and other indicators of biological productivity, and they allocate quota—generally expressed as a percentage of the TAC—to eligible vessels, fishermen, or other recipients, based on initial qualifying criteria, such as catch history. In the United States, fishery councils can raise or lower the TAC annually to reflect changes in the fishery’s health. Fishery managers distribute these changes among the quota holders proportional to their share. For example, a fisherman who received a 5 percent quota share in a fishery with a TAC of 100 metric tons can catch 5 tons of fish. Should the TAC increase from 100 to 200 metric tons in the following year, the quota holder with a 5 percent share would be able to catch 10 tons, or 5 tons more than the previous year. 
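The quota arithmetic in the example above (a fixed percentage share applied to an annually adjusted TAC) can be sketched as:

```python
def allowable_catch(quota_share_percent: float, tac_metric_tons: float) -> float:
    """Metric tons a quota holder may catch: the holder's share of the
    total allowable catch (TAC), which managers may raise or lower each
    year; the share percentage itself stays fixed."""
    return quota_share_percent / 100.0 * tac_metric_tons

# The report's example: a 5 percent share under a 100-metric-ton TAC...
print(allowable_catch(5, 100))  # 5.0
# ...and the same share after the TAC rises to 200 metric tons.
print(allowable_catch(5, 200))  # 10.0
```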
Furthermore, IFQs are generally transferable, meaning that quota holders can buy, sell, lease, or otherwise transfer some or all of their shares, depending on how much or how little they want to participate in the fishery. The nature of the fishing right varies by country. In New Zealand, for example, an IFQ is an exclusive property right that can be held in perpetuity, whereas in the United States, an IFQ represents the privilege to fish a public resource. While this privilege has an indefinite duration, the government may legally revoke it at any time. IFQ programs arose in response to conditions that resulted in a race for fish and overfishing and that reduced economic efficiency, safety, and product quality. For example, before the IFQ program, the Alaskan halibut fishery had limits on the amount of time allowed for commercial fishing in an attempt to keep the annual halibut catch within the TAC, but it did not have limits on the number of boats that could fish. In response, fishermen increased the number of vessels in their fleets and used larger vessels with more gear to catch as much fish as they could in the time allowed. As a result, the halibut season was reduced to a few days. After the IFQ program was implemented, the fishing season was increased to 8 months. Fishermen could choose when to fish and they could use more economical fishing methods, as long as they kept within their quota limits. Individual IFQ programs may differ considerably, depending on the circumstances of the fishery and the objectives of the program. For example, an IFQ program for a fishery where there are concerns about overfishing and the consolidation of power among corporate interests may have different objectives than a program for a fishery where there are concerns about developing the fishery and attracting new fishermen. 
Depending on the fishery, fishery managers may be willing to trade some potential gains in economic efficiency in exchange for the opportunity to protect fishing communities or facilitate new entry. IFQ programs are largely intended to improve economic efficiency and conserve the resource. According to the theory underlying IFQ programs, unrestricted quota trading promotes economic efficiency, because those willing to pay the highest price for quota would be those expected to use quota the most profitably, by catching fish at a lower cost or transforming the fish into a more valuable product. Over time, unrestricted trading should lead less efficient fishermen to either improve their efficiency or sell their quota. In contrast, restrictions on quota transfers could be expected to reduce the economic benefits that would otherwise be obtained where quota is freely transferable. Another fundamental tenet of this theory is that quota holders will act in ways to promote the stewardship of the resource. Specifically, giving fishermen a long-term interest in the resource is likely to provide incentives to fish in ways that protect the value of their interest. Several methods are available under IFQ programs for protecting the economic viability of fishing communities and facilitating new entry. For protecting communities, the easiest and most direct method is allowing communities to hold quota. Fishery managers may also help protect communities by adopting program rules aimed at protecting certain groups of fishery participants. For facilitating new entry into IFQ fisheries, the methods principally fall into three categories: (1) adopting quota transfer rules that promote new entry, (2) setting aside quota for new entrants, and (3) providing economic assistance to potential new entrants. Concerns have developed in the United States and in other countries about the potential for IFQ programs to harm the economic viability of fishing communities. 
Many fishery experts and participants are concerned that individual quota holders will sell their quota outside of the fishing community or sell their quota to large companies. If this were to occur, fishing jobs could leave the community and larger companies could consolidate their quota holdings and dominate the fishery. Fishing communities that lose fishing jobs may have few alternative employment options, particularly if they depend primarily on fishing and no other industry replaces fishing.

Allowing communities to hold quota is the easiest and most direct way under an IFQ program to help protect fishing communities. According to fishery experts and participants, fishery managers can give each community control over how to use the quota in ways that protect the community’s economic viability, such as selling or leasing quota to fishermen who reside in the community. Community quota could be held by municipalities, regional organizations, or other groups representing the community—unlike traditional individual fishing quota, which is generally held by individual boat owners, fishermen, or fishing firms. Of the three U.S. IFQ programs, only one allows communities to buy and hold quota—the Alaskan halibut and sablefish program. Communities allowed to hold quota can obtain it through allocation when the program begins or at any time thereafter. For example:

The North Pacific Fishery Management Council (North Pacific Council) is considering allocating quota to community not-for-profit entities as it develops a proposal for managing the Gulf of Alaska groundfish fishery.

New Zealand fishery managers allocated quota to a Chatham Islands community trust several years after the IFQ program was implemented. The trust leases out annual fishing privileges to Chatham Islands-based fishermen to help keep fishing and fishing-related employment in the community.
Similarly, fishery managers can incorporate rules into existing IFQ programs or into the design of new programs to allow communities to make quota purchases. For example, in 2002, the North Pacific Council amended the Alaskan halibut and sablefish IFQ program to allow communities along the Gulf of Alaska to purchase quota. The council is considering including a similar provision in the proposed plan to manage the Gulf of Alaska groundfish fishery.

In addition to allowing communities to hold quota, fishery managers can establish rules governing who is eligible to hold and trade quota as well as other rules to manage quota as a means of protecting certain groups of fishery participants. Specific rules may vary by program and change over time, depending on which members or groups a council wants to protect. In terms of eligibility to hold quota, for example, the North Pacific Council initially restricted allocations of Alaskan halibut and sablefish quota to individual vessel owners in part to protect the fisheries’ owner-operator fleet. The council later expanded eligibility to allow crew members to hold quota without owning a vessel.

We also identified several different types of quota transfer restrictions used in foreign IFQ programs that were aimed at protecting communities. For example:

Prohibiting quota sales. While none of the IFQ programs in the United States prohibits the transfer of quota through sales, fishery managers in other countries have done so. For example, Norway’s IFQ program prohibited all quota sales to protect fishing communities in certain locations. Alternatively, prohibitions could be used temporarily to help prevent fishermen from hastily selling their quota. For example, according to New Zealand fishermen we spoke with, many small boat fishermen did not initially understand the long-term value of their quota and therefore sold their quota shortly after the initial allocation.
To remedy this situation, they suggested that fishery managers could prohibit sales for the first year after a program’s initial allocation to give fishermen time to make informed decisions about whether to sell their quota.

Placing geographic restrictions on quota transfers. Iceland and New Zealand fishery managers have also set limits on where quota can be sold or leased to protect certain groups, such as local fishermen and the communities themselves. The Icelandic IFQ program, in which individuals own vessels with associated quota rather than the quota itself, adopted a “community right of first refusal” rule to provide communities the opportunity to buy vessels with their quota before the vessels are sold to anyone outside of the community. IFQ programs can also regulate quota leasing to keep fishing in a certain area by establishing rules that limit leasing or fishing to residents of the community. In terms of leases, New Zealand’s Chatham Islands community trust has, in effect, used residence in the Chatham Islands as a requirement to lease its quota.

Limiting quota leasing. Iceland requires that all quota holders fish at least 50 percent of their quota every other year and prohibits quota holders from leasing more than 50 percent of their quota each year. Fishery managers introduced such restrictions, in part, to minimize the number of “absentee” quota holders—those who hold quota as a financial asset but do not fish.

Finally, according to fishery managers and experts we spoke with, fishery managers can help protect fishing communities by (1) setting limits on quota accumulation, (2) establishing separate quota for different sectors of the fishery, (3) requiring quota holders to be on their vessels when fish are caught and brought into port, and (4) restricting the ports to which quota fish can be landed.

Setting limits on quota accumulation.
Fishery managers can place limits on the total amount of quota an individual can accumulate or hold to protect certain fishery participants. In the United States, for example, the North Pacific Council set limits on individual halibut quota holdings that range from 0.5 percent to 1.5 percent, depending on the fishing area, as a means of protecting the fishery’s owner-operator fleet.

Establishing separate quota for different sectors of the fishery. To protect small boat fishermen and local fishing jobs, Iceland developed a separate quota for small vessels and large vessels and prohibited owners of small vessels from selling their quota to owners of large vessels. In the U.S. halibut and sablefish IFQ program, the North Pacific Council established separate quota categories based on vessel type and length and placed certain restrictions on transfers among these categories to ensure that quota would be available to owners of smaller vessels.

Requiring quota holders to be on their vessels. Some programs require the owner of the quota to be on board when fish are caught and brought into port. For example, the North Pacific Council requires fishermen who entered the Alaskan halibut and sablefish IFQ program by purchasing certain categories of quota, rather than receiving it as part of the initial allocation, to abide by this rule. The rule was designed in part to limit speculative quota trading by individuals who are primarily interested in quota as a financial asset and not otherwise invested in the fishery.

Restricting landings. Fishery managers could restrict the ports to which quota holders or those who lease quota can deliver their catch. For example, New Zealand’s Chatham Islands trust leases rock lobster quota to local fishermen who must then land their catch in the Chatham Islands.

IFQ programs have also raised concerns about opportunities for new entry.
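Category-based transfer rules like those above amount to a simple eligibility check at transfer time. The following is a minimal sketch under assumed rules: the two vessel classes and the single prohibited direction are modeled loosely on the Icelandic small-to-large restriction described in the text, not on any program's actual regulations.

```python
# Minimal sketch of a category-based quota transfer check.
# The classes and the blocked direction are hypothetical, loosely
# modeled on rules that bar selling small-vessel quota to
# large-vessel owners.

PROHIBITED_TRANSFERS = {
    ("small_vessel", "large_vessel"),  # protects the small-boat fleet
}

def transfer_allowed(seller_class: str, buyer_class: str) -> bool:
    """Return True unless the sale direction is prohibited."""
    return (seller_class, buyer_class) not in PROHIBITED_TRANSFERS

print(transfer_allowed("small_vessel", "large_vessel"))  # False
print(transfer_allowed("large_vessel", "small_vessel"))  # True
```

A real rule set would be larger, but the structure is the same: each proposed transfer is tested against a table of permitted seller-buyer combinations before it is recorded.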
As IFQ programs move toward achieving one of their primary goals of reducing overcapitalization, the number of participants decreases and consolidation occurs, generally reducing quota availability and increasing price. As a result, it is harder for new fishermen to enter the fishery, especially fishermen of limited means, such as owners of smaller boats or young fishermen who are just beginning their fishing careers. According to New Zealand officials, for example, quota prices increased dramatically: the average price of abalone quota rose by more than 50 percent in the first 6 months of trading—from about NZ$11,000 to NZ$17,000 per metric ton—and, by 2003, the average price had reached about NZ$300,000 per metric ton, or about 27 times the price at the start of abalone quota trading in 1988.

To reduce the barriers to new entry, fishery managers have established quota transfer rules, created set-asides, or provided economic assistance, such as loans or grants. In terms of transfer rules, all domestic and most foreign IFQ programs allow quota to be sold or leased. Allowing such transfers provides the opportunity for new entry to those who can find and afford to buy or lease quota. Since the lease price is generally below the sales price, leasing quota may help make entry more affordable to fishermen of limited means, such as small boat fishermen.

Fishery managers can also make quota available and more affordable to new entrants by “blocking” small amounts of quota and limiting the number of “blocks” that any one individual or entity can hold. For example, the North Pacific Council set up two types of halibut quota at the initial allocation—unblocked and blocked. Unblocked quota carries no restrictions. Blocked quota, on the other hand, is an amount of quota that yielded less than 20,000 pounds of halibut in 1994 and can only be bought or transferred in its entirety.
An individual or entity can hold unblocked quota and one quota block; an individual who holds no unblocked quota can hold two quota blocks. A state of Alaska study found that estimated prices for blocked quota were less per pound than for unblocked quota over the first 4 years of the Alaskan halibut and sablefish IFQ program and that estimated prices for smaller blocks were less per pound than for larger blocks.

Setting aside a portion of the total quota specifically for new entrants can also make quota available. Quota could be set aside at the time of the initial allocation for future distribution to entities that did not initially qualify for quota. For example, at the start of the Alaskan halibut and sablefish program, the North Pacific Council set aside a portion of the TAC for allocation to communities in western Alaska for community development purposes. According to fishery managers, similar set-asides could be used for new entrants by establishing the set-aside at the start of the IFQ program, or by buying or reclaiming, rolling over, or setting aside quota during the program.

Buying or reclaiming quota from existing quota holders. Fishery managers could buy back quota from existing quota holders. For example, the New Zealand government bought back quota to give to the indigenous Maori tribes in partial settlement of their claims against the government over fishing rights. Fishery managers could also obtain quota forfeited by fishermen who have not complied with program rules; in the New Zealand IFQ system, for example, quota holders risk forfeiting their quota holdings if they catch more fish than they have quota for.

Issuing quota for a fixed period of time and then rolling it over for distribution to new entrants. Depending on the program, the frequency of the rollover could range from every few years to annually and the amount of the rollover could range from some to all of the quota.
For example, a rollover system has been proposed for Australia’s New South Wales fishery under which fishery managers would issue quota for a finite period of time (e.g., 30 years) under one set of program rules and, periodically (e.g., every 10 years), quota holders would have the opportunity to choose whether to continue to participate in the old system or move their quota into a new system with different rules for another 30 years.

Setting aside TAC increases for distribution to new entrants. Foreign and domestic IFQ programs generally define an individual fishing quota as a percentage of the overall TAC and distribute any changes in the TAC among existing quota holders proportional to their share. Alternatively, fishery managers could distribute TAC increases to new entrants, leaving existing quota holders fishing the same amount of fish as they did in the previous year.

Once fishery managers have set aside quota, they must devise a method for allowing new entrants to obtain it. According to fishery experts, the options include:

Selling quota at auction. Fishery managers could auction off quota to the highest bidder and keep the proceeds. Alternatively, the managers could serve as an intermediary by auctioning off quota on behalf of existing quota holders, and the seller would incur all losses or gains. If the auction price becomes prohibitive for new entrants, fishery managers could set aside quota that could be sold at a lower, predetermined price. Economists generally support the idea of auctioning quota because an efficient market provides quota to its most profitable users. However, in the United States, the Magnuson-Stevens Act limits the amount of fees that may be charged under an IFQ program, which may effectively preclude the use of auctions.

Distributing quota by lottery. New entrants could be randomly selected from a pool of potential entrants, giving persons of limited means an equal chance to obtain quota.
Lotteries might be especially advantageous when the demand for quota from new entrants is greater than the supply of quota set aside.

Distributing quota to individuals who meet certain criteria. Fishery managers could allocate quota to new entrants using a point system based on criteria such as fishing experience or completion of an apprenticeship program.

Finally, to help make quota affordable, fishery managers and experts told us that government entities could provide loans or subsidies to potential entrants who might not otherwise be able to afford the quota. Affordability is particularly an issue as an IFQ program becomes more successful and the value of the quota increases.

Loans. The Magnuson-Stevens Act allows NMFS to offer loans. Under this provision, for example, NMFS has established a low-interest loan program for new entrants and fishermen who fish from small boats in the halibut and sablefish fisheries off Alaska. The fishermen can use these loans to purchase or refinance quota. Since the program’s inception in fiscal year 1998, Alaska has approved 207 loans, totaling nearly $25 million. The Magnuson-Stevens Act also provides for the creation of a central registry where owners and lenders can register title to, and security interests (such as liens) in, IFQs. According to the National Research Council, a registry would increase lender confidence and provide opportunities for individuals to obtain financing to enter IFQ fisheries. Although NMFS has not yet established this registry, its Alaska Region maintains a voluntary registry where creditors, such as private banks, the state of Alaska, and private lenders can record liens against quota shares. The Alaska Region reported that most lending institutions take advantage of this service. The registry contained 2,581 reported interests in quota share at the end of 2002.

Grants or other subsidies. Grants or other subsidies could decrease the costs associated with buying or leasing quota.
Since grants do not have to be repaid, they could give fishermen of limited means the opportunity to enter the fishery and then build their capital in order to increase their quota holdings. In addition to grants, fishery managers could establish a “lease-to-own” quota program—new entrants would pay for the quota while using it. Also, quota could be made available for purchase or lease at below market prices. Iceland, for example, is considering adopting a discount program to make quota more affordable. This discounting scheme would allow crews of small vessels to purchase quota from the government at 80 percent of its market value.

In considering methods to protect communities and facilitate new entry into IFQ fisheries, fishery managers face issues about efficiency, fairness, and design and implementation. Community protection and new entry methods are designed to achieve social objectives, but achieving these objectives may undermine economic efficiency, one of the primary benefits of an IFQ program, and raise questions of equity. Moreover, community protection and new entry methods present a number of design and implementation challenges. Given the particular circumstances of each fishery and the overall goals of each IFQ program, it is unlikely that any single method can protect every type of fishing community or facilitate new entry into every IFQ fishery. It is also unclear how beneficial these protective methods can be.

Fishery managers face an inherent tension between the economic goal of maximizing efficiency and the social goal of protecting communities or facilitating new entry. According to fishery experts we spoke with, this tension occurs because a community or new entrant often may not be the most efficient user of quota. For example, according to Icelandic fishery experts, some communities did not manage their quota effectively and sold it, reducing the communities’ economic base.
In addition, setting aside quota for new entrants may not be the most efficient use of quota because experienced fishermen or fishing firms are generally able to fish the quota more economically than a new entrant. Adopting rules that constrain the free trade of quota, such as those designed to protect communities or facilitate new entry, would likely limit the efficiency gains of the IFQ program. Therefore, fishery managers have to decide how much economic efficiency they are willing to sacrifice to protect communities or facilitate new entry.

Methods to protect communities or facilitate new entry may also raise concerns about equity. In the United States, some community quota arrangements or rules aimed at protecting particular groups may not be approved because they conflict with the Magnuson-Stevens Act. For example, National Standard 4 of the Magnuson-Stevens Act prohibits differential treatment of states. A rule that proposes using residence in one state as a criterion for receiving quota may violate the requirements of National Standard 4. Furthermore, methods that propose allocating quota to communities or adopting rules aimed at making quota more available or affordable to a certain group of fishermen can appear unfair to those who did not benefit and could result in legal challenges. Moreover, allowing communities to purchase quota may be considered unfair or inequitable, because relatively wealthy communities would more readily have the funds needed to purchase quota while relatively poor communities would not.

Fishery managers face multiple challenges in designing and implementing community protection and new entry methods, according to fishery managers and experts we spoke with. The resolution of these issues depends on the fishery’s circumstances and the program’s objectives. It is unlikely that any single method can protect every kind of fishing community or facilitate new entry into every IFQ fishery.
In developing an approach to protect fishing communities, fishery managers have to define community, determine who represents it, and define economic viability, and communities must determine how to use the quota. Defining community can be challenging because communities can be defined in many ways. As discussed earlier, the Magnuson-Stevens Act defines a fishing community as one that substantially depends on, or is engaged in, harvesting or processing fishery resources to meet social and economic needs. NMFS guidance further defines fishing community geographically—that is, a social or economic group whose members reside in a specific location. Fishery managers and experts told us that communities with geographically distinct boundaries are easier to define, such as island communities or remote communities in Alaska. However, some communities are difficult to define when, for example, some of the fishermen live away from the areas they fish, as is the case for many halibut fishermen who reside in other states and fish in the waters off the coast of Alaska. Moreover, communities can also be defined in nongeographic ways, such as fishermen who use the same type of fishing gear (e.g., hook-and-line or nets) for a particular species or people and businesses involved in a fishery regardless of location. These communities can include fishermen and fish processors, as well as support services such as boat repair businesses, cold storage facilities, and fuel providers. Once fishery managers define the community, they must then determine who represents the community and thus who will decide how the quota is used. More than one organization (e.g., government entity, not-for-profit organization, private business, or cooperative group) may claim to represent the interests of the community as a whole. 
For example, rural coastal communities in Alaska, which are geographically distinct, could have several overlapping jurisdictions, including a local native corporation, a local municipality, and a local borough. Determining who represents the community is more difficult in communities without geographically distinct boundaries. Fishery managers also need to define what constitutes economic viability, which is likely to differ by community because the fishery has different economic significance in each community. Some communities primarily rely on fishing and fishing-related businesses, while others may have a more diverse economic base. (See fig. 1.) Consequently, it may be unclear what type of protection a community needs to ensure its economic viability. Fishery experts we spoke with agreed that few communities in the United States primarily depend on fishing as their economic base. Moreover, the balance of industries making up a community’s economy may change over time when, for example, the area becomes more modernized or a new industry enters. For example, the economy of the Shetland Islands changed dramatically with the development of the oil industry off the Shetland Islands in the 1970s. This development resulted in jobs and settlement funds that the community used to enhance its economic base through community development projects. Finally, communities have to decide whether to keep their quota, sell it, or lease it to others. If they keep their quota, they also have to decide how to allocate it. Similarly, if they sell or lease their quota, they have to decide how to allocate the proceeds. Unless communities can decide how to allocate quota or the proceeds, the community quota may go unused and thus prevent the community from receiving its benefit. 
For example, the quota New Zealand’s Maori people received from the government in 1992 has not been fully allocated to the Maori tribes, largely because the commission responsible for distributing the quota and the tribes could not agree on the allocation formula.

Along with these definitional challenges, fishery managers and communities have to address other design and implementation issues, such as whether to establish prohibitions on quota sales or geographic restrictions on quota transfers.

Prohibitions on quota sales. Prohibiting quota sales may not allow fishing communities or businesses to change over time as the fishing industry changes. According to fishery experts we spoke with, rules that prevent change essentially freeze fishing communities at one point in time and may create “museum pieces.” For example, prohibitions on quota sales prevent the fishery from restructuring, thus forcing less efficient quota holders and fishing businesses to remain in the fishery. Consequently, prohibitions on quota sales may actually undermine the economic viability of the fishing communities they were designed to protect. In addition, prohibitions on quota sales might run counter to an IFQ program’s overall objective of reducing excess investment in the fishery because such prohibitions act to prevent fishermen from selling some of their boats or leaving the fishery.

Geographic restrictions on quota transfers. Protecting communities by imposing geographic restrictions on quota transfers also raises issues that must be considered and addressed. According to fishery experts we spoke with, rules that give communities the right to purchase quota before it is sold outside the community might be legally avoided.
For example, Icelandic officials told us that in their IFQ program, where individuals own vessels with associated quota rather than the quota itself, companies holding quota easily avoided the “community right of first refusal” rule by selling their companies as a whole to an outside company, rather than just selling their vessels and associated quota. As a result, communities could not use this rule to prevent the sale. Furthermore, communities that could benefit from such a rule may not have the money to purchase the quota, while those communities that can afford to purchase the quota may not need the rule’s protection.

Other program rules aimed at protecting the community also raise implementation issues that fishery managers must consider:

Accumulation limits. The challenge in setting accumulation limits—the amount of quota that any one individual or entity can hold—is to set limits that are high enough to promote economic efficiency and low enough to prevent any one individual or entity from holding an excessive share. According to New Zealand fishery managers and experts, for example, accumulation limits were set at between 10 and 35 percent, depending on the species, in order to allow individuals to acquire enough quota to be efficient and competitive while also stemming overcapacity and overfishing in the inshore fisheries. Furthermore, as quota becomes more valuable, managers may face pressure from existing quota holders to raise or eliminate the limits on accumulation. In Iceland, for example, fishery managers recently increased accumulation limits from 8 percent to 12.5 percent of the total quota because of such pressure. In cases where both communities and individuals hold quota, fishery managers may want to set different limits for communities and individuals.
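An accumulation limit reduces to an arithmetic check at transfer time: would the acquisition push the buyer's holdings, as a share of the total quota, over the cap? The sketch below is illustrative; the 12.5 percent cap echoes the Icelandic figure cited above, and the holdings are hypothetical.

```python
# Illustrative accumulation-limit check. The 12.5% cap echoes the
# Icelandic limit cited in the text; the holdings are hypothetical.

ACCUMULATION_LIMIT = 0.125  # maximum share of total quota per holder

def acquisition_allowed(current_share: float, acquired_share: float,
                        limit: float = ACCUMULATION_LIMIT) -> bool:
    """Return True if the purchase keeps the holder at or under the cap."""
    return current_share + acquired_share <= limit

# A holder at 8% of the total quota may acquire 4 more points (12%)...
print(acquisition_allowed(0.08, 0.04))  # True
# ...but not 5 points (13% would exceed the 12.5% cap).
print(acquisition_allowed(0.08, 0.05))  # False
```

A real check would also need to aggregate holdings across affiliated business entities, since collective holdings can exceed a cap that each individual entity nominally satisfies.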
Even after managers set accumulation limits, monitoring and enforcing these limits could be more difficult when fishermen create subsidiaries and complicated business relationships that enable them to catch more than the quota limit for an individual quota holder. To mitigate this problem, the Alaskan halibut and sablefish program, for example, requires all quota transfer applicants to identify whether they are individuals or business entities, and requires all business entities to annually report their ownership interests. NMFS uses this information to ensure that no halibut and sablefish quota holdings, whether individually or collectively, exceed the accumulation limits.

Owner-on-board requirements. According to fishery experts we spoke with, requiring quota holders to be on board their vessels could be impractical, especially for small businesses where the same person would have to be on board at all times. Furthermore, such a rule would require so many exceptions, such as for emergencies and illness, that it could become meaningless.

Requirements to bring catch into ports in a particular geographic area. These requirements may not be healthy for a community’s economy in the long term. For example, such a requirement may subsidize inefficient local fish processors that cannot compete on the open market. With reduced competition, these processors may offer less money for the catch, thus reducing the fishermen’s income and ultimately harming the community. According to Shetland Islands fishery managers we spoke with, had fishermen been required to land their catch in the Shetland Islands, they would have been forced to sell their catch at a price far below the market value and the processor would have had no incentive to restructure into the competitive business it is today.

Leasing provisions. According to some fishery managers and experts, leasing reduces stewardship incentives, which may affect the community’s long-term economic viability.
Quota leasing separates the person holding the quota from the person fishing the quota. In some cases, quota leasing may diminish stewardship incentives by creating a class of absentee quota holders who rely on independent fishermen. While owner-on-board rules, such as those in Alaska, may minimize the risk of creating this class of absentee quota holders, fishermen who lease quota have only a temporary privilege to catch fish. Thus, they have less interest in the long-term health of the fishery, especially as the end of their lease term approaches. Consequently, they may have incentives to catch more fish than their quota allows and sell this over-quota fish on the black market, or to fish using nonsustainable methods. For example, according to New Zealand fishery experts, quota holders in the high-value abalone fishery found that unskilled fishermen who leased quota were jeopardizing the fishery by harvesting abalone in ways that harmed the abalone beds.

Given the issues raised by quota transfer and other program rules, as well as the potential loss of economic efficiency resulting from these rules, some fishery managers and experts view freely transferable quota as the best way to maintain economically viable communities and therefore place few or no restrictions on quota sales or leases. For example, New Zealand allows free trade in quota on the theory that free trade is needed to maximize returns from the fishery and enhance stewardship of the resource. Similarly, the surfclam/ocean quahog IFQ program has relatively few restrictions on quota transfers.

As with community protection methods, new entry methods also present a variety of design and implementation challenges to fishery managers. Allowing quota to be transferred through sales or leases provides the opportunity for new entry, but quota prices may increase over time, making quota less affordable.
In the New Zealand IFQ program, for example, the average price per metric ton of rock lobster quota in one management area skyrocketed from NZ$23,265 to NZ$222,500 over an 8-year period. While leasing helps make quota available at prices lower than the sales price, the lease price may still be unaffordable or unprofitable to fish and thus not practical for new entrants. For example, according to New Zealand fishing industry representatives, the lease price for rock lobster in 2003 was about NZ$22.50 per kilo, but fishermen needed to sell the fish for at least NZ$30 per kilo to cover their costs. To minimize the risk associated with leasing, the Shetland Islands community quota program levied fees based on the sales revenue from the quota fished, rather than setting a fixed lease price that fishermen would have to pay regardless of the amount of quota fish caught.

Set-asides to make quota available for new entrants also raise challenges, according to fishery experts. In setting aside quota for new entrants, fishery managers have to decide how much quota to reserve and who would be eligible to receive it, such as owners of small boats or young fishermen. If a set-aside occurs when a program is first established, managers do not have to take quota away from existing quota holders. However, there are many challenges associated with setting aside quota after a program is implemented.

Buying back quota. Buying back quota may not be possible because the government may not find quota holders willing to sell their quota. For example, New Zealand funded a buyback program to obtain quota as part of its settlement with the Maori tribes. However, the government was not able to obtain the amount of quota it was seeking and, as a result, had to give the tribes money in place of some of the quota.

Issuing quota for a fixed period of time. Issuing quota with expiration dates could make it less likely that fishermen would accept the IFQ system or make investments in efficiency.
Fishermen could also find it difficult to invest in boats and gear because banks may be less willing to lend money and fishermen may be less willing to borrow. Furthermore, as with leasing, stewardship incentives could decline as the quota expiration date draws near. Setting aside TAC increases. Replenishing quota by using TAC increases might not always be feasible because quota would not be available to reserve as a set-aside when the TAC remains the same or declines. Setting aside TAC increases would also dilute the interests of existing quota holders, who would hold a smaller percentage of the TAC. Fishery managers also face challenges in deciding which new entrants would be eligible to receive quota from the set-aside. If fishery managers decide to auction quota to the highest bidder, they cannot be assured that quota would be affordable to new entrants. Fishery managers could auction the quota in small amounts, which would make the quota more affordable and thereby open up opportunities to new entrants. However, the value of the quota would decrease to reflect the inherent inefficiency of this distribution mechanism. In addition, while lotteries could provide potential entrants an equal chance to obtain quota and resolve some of the equity issues raised by auctions, they would also create more uncertainty for existing quota holders. Current quota holders would no longer have control over quota purchases and would have to depend on the luck of the draw. This uncertainty is a disincentive to invest in boats or gear. Economic assistance methods are designed to provide new entrants with the capital needed to purchase quota and are the most direct method of helping new entrants. However, they raise the following concerns, according to fishery experts we spoke with: The financial assistance may not be sufficient for a potential new entrant to enter the fishery or buy enough quota to earn a living. 
Providing economic assistance could contribute to an increased demand for quota and further price increases, thereby defeating the primary purpose of trying to make quota more affordable. Government entities may not be willing or able to fund economic assistance programs. Fishery managers have not conducted comprehensive evaluations of how IFQ programs protect communities or facilitate new entry, because few IFQ programs were designed with community protection or new entry as objectives. This lack of information, combined with the concerns about economic efficiency and fairness, makes it more difficult to decide which community protection and new entry methods to use. In order to determine whether the chosen methods are working or how they should be improved, fishery managers would have to clearly define community protection or new entry as an objective, identify data that isolate the impact of community protection and new entry methods, collect these data before implementing the program—baseline data—and compare these data with data collected over the course of the program. This effort would then allow managers to determine whether their community protection or new entry methods are accomplishing their objectives and whether they need adjustments to promote effectiveness or respond to any unintended consequences. Under the Magnuson-Stevens Act, fishery managers are required to analyze the social and economic conditions of the fishery in developing fishery management plans. These data could be used as a baseline for the social and economic conditions in a fishing community. In addition to baseline data, fishery managers need to collect data once the IFQ program is established. For example, some fishery experts told us that many fishing communities in Iceland collapsed when quota was sold and left the community. 
However, other fishery experts and Icelandic officials said that these communities would have collapsed regardless of the IFQ program, in part due to the lack of educational and employment opportunities and the movement of people to Reykjavik, the capital, as the country modernized during this time period. This difference in opinion exists partly because Iceland did not collect the data needed to determine whether the IFQ program, or other factors, led to the communities' demise. Recognizing the need for additional information, Alaskan fishery managers will collect data each year on the amount of halibut and sablefish quota held in each community to help assess the effectiveness of the recent amendment allowing communities to purchase quota. Similar issues arise in trying to collect data that distinguish new entrants from existing quota holders. Without the data to clearly understand the changes occurring in a fishery or community, fishery managers cannot effectively modify their community protection or new entry methods. During the moratorium on new IFQ programs in the United States, two fishery cooperatives, among others, emerged as an alternative fishery management approach: the Whiting Conservation Cooperative and the Pollock Conservation Cooperative. (See app. III for a description of each cooperative.) These cooperatives are voluntary contractual agreements among fishermen to apportion shares of the catch among themselves. In comparing the key features of IFQ programs and these U.S. fishery cooperatives, we identified the advantages and disadvantages of each approach in key areas. Given these differences, an IFQ program combined with some characteristics of a cooperative, such as provisions of New Zealand's cooperative-like stakeholder organizations, may be beneficial. 
While both IFQ programs and fishery cooperatives can vary widely, the general characteristics of IFQ programs and fishery cooperatives differ in the areas of regulatory and management framework, number of participants, quota allocation and transfer, and monitoring and enforcement. (See table 1.) With respect to their regulatory and management framework and number of participants, IFQ programs generally have greater stability, take longer to establish, and manage larger numbers of participants than cooperatives. IFQ programs have greater stability than fishery cooperatives because they are established and terminated by federal regulations, while cooperatives are established and terminated by voluntary contractual agreements. IFQ programs generally take longer to establish than fishery cooperatives because of the fishery management council process. Fishery councils must review the IFQ proposal, develop alternatives and options, and analyze their potential social and economic effects before submitting the proposal to the Secretary of Commerce for approval. While the secretary is reviewing the proposal, NMFS must publish draft regulations for public comment before the secretary makes a final decision and the regulations are implemented. This process can be quite lengthy; for example, it took 3 years for the North Pacific Council to review, analyze, and adopt the proposed Alaskan halibut and sablefish IFQ program and another 3 years to implement the program. In comparison, because fishery cooperatives are voluntary, agreements can be reached within a shorter period of time. For example, the contract to form the whiting cooperative was negotiated in less than a day. Finally, IFQ programs can manage larger numbers of diverse participants. At the end of 2002, for example, the Alaskan halibut and sablefish IFQ program had about 3,500 participants, ranging from crewmembers on small boats to owners of large freezer vessels. 
In contrast, according to fishery experts, fishery cooperatives work better with fewer and relatively homogeneous participants because it is difficult for members to reach agreement where there are many participants with diverse interests. For example, the whiting cooperative has four participants and the pollock cooperative has eight participants. In both cooperatives, the participants are large harvesting and processing companies that own catcher-processor vessels. With respect to allocating and transferring fishing privileges, IFQ programs provide greater transparency than fishery cooperatives. Under an IFQ program, NMFS uses widely published criteria established by fishery councils to allocate quota to individual entities, such as individual fishermen or fishing firms. Under a fishery cooperative, NMFS allocates quota to the cooperative, which, through negotiated contract, distributes the quota among its members. For example, the four companies that operated catcher-processor vessels in the Pacific whiting fishery negotiated a private contract to divide up the sector’s quota using catch history, vessel capacity, and number of vessels. When quota can be transferred, IFQ programs are less exclusive than cooperatives, because they provide entry opportunities for fishermen who can find and afford to buy or lease quota. In comparison, cooperatives are exclusive contractual arrangements where quota is transferred among the members, and potential entrants may have difficulty entering the cooperative. Finally, regarding monitoring and enforcement, IFQ programs are viewed as being more difficult for NMFS to administer than fishery cooperatives, because NMFS must monitor individual participants for compliance with program rules, such as quota accumulation and catch limits. 
In contrast, cooperatives are viewed as being simpler for NMFS to monitor and enforce, because NMFS monitors one entity—the cooperative—and the cooperative is responsible for monitoring the actions of its members. For some fisheries, establishing a cooperative of quota holders within the overall framework of an IFQ program to help manage fishing may maximize the benefits of IFQ programs and fishery cooperatives while minimizing their downsides. Some of the benefits of a combined IFQ/cooperative approach are illustrated in the examples below, where groups of New Zealand quota holders formed cooperative-like organizations to help manage their fisheries, such as abalone, hoki, orange roughy, scallops, and rock lobster. With respect to regulatory and management framework and number of participants, a cooperative of IFQ holders offers the following advantages: A combined approach provides the stability of an IFQ program. Because the IFQ program is set by regulations, it will remain in place even if the cooperative dissolves. Also, should the cooperative fail to perform, its management authority and responsibilities would revert to the government. For example, according to New Zealand fishery managers we spoke with, the Challenger Scallop Enhancement Company (Scallop Company) has managed the scallop fisheries effectively, but should it fail to perform, its responsibilities would return to the government. A combined approach can provide a way for large numbers of participants to organize into smaller groups to help manage their fisheries collectively. For example, New Zealand’s rock lobster IFQ quota holders formed nine regional cooperative groups under the umbrella of the New Zealand Rock Lobster Industry Council. The council and the regional groups provide advice on management of rock lobster fisheries. 
A combined approach can provide the opportunity for fishery participants to pool information, assess stocks, achieve economies of scale in production and try other forms of cooperation. For example, a cooperative of quota holders could decide to pool their quota and fish in more economical ways, such as having only certain members fish and then distributing the proceeds among all members. Similarly, a cooperative of quota holders could agree to stop fishing in certain areas or leave some of the quota unfished to protect the resource. In New Zealand, for example, abalone quota holders agreed not to fish some of their quota, because they believed that the TAC had been set too high. In terms of allocating and transferring fishing privileges, a combined approach offers the following advantages: Under a combined approach, the fishery council, rather than the cooperative, could make the difficult and often contentious decisions regarding who can hold quota and how much quota an individual receives. A combined approach would also provide transparency, because the IFQ program’s quota allocation and transfer rules could be used to allocate quota to members of the cooperative. Fishery managers could reduce the exclusivity of a cooperative by requiring that the cooperative give each new quota holder the opportunity to join. For example, membership in New Zealand’s stakeholder organizations is open to any entity that holds quota in the particular fishery. Moreover, quota allocations are not lost if a cooperative of quota holders dissolves, because each member retains the quota allocated under the IFQ program. In terms of monitoring and enforcement, under a combined approach, the government could give some management responsibilities to the cooperative, such as monitoring the actions of individual members for compliance with certain program rules. 
New Zealand officials told us that their government reduced its monitoring costs for its scallop fisheries because the Scallop Company now performs this function. Because of the size and common interests of cooperatives, members often create peer pressure to conform to program rules. Self-regulation might also decrease overall enforcement costs. Finally, a combined approach would provide the enforcement mechanisms of an IFQ program should self-regulation fail or should the cooperative fail to perform its other management responsibilities. New Zealand, for example, devolved most IFQ management responsibilities to the Scallop Company, but the government has not lost its management authority. No method will protect communities or facilitate new entry if the fishery collapses. While an IFQ program is a fishery management tool put in place to protect the resource and reduce overcapacity, pursuing these laudable goals may have unintended consequences: the loss of communities historically engaged in or reliant on fishing and reduced participation opportunities for entry-level fishermen or fishermen who did not qualify for quota under the initial allocation. New IFQ programs or modifications to existing programs may be designed to address these problems by incorporating community protection and new entry goals. However, because community protection and new entry goals run counter to economic efficiency goals, fishery councils face a delicate balancing act in trying to achieve them all. It is therefore critically important for fishery councils to tailor IFQ programs to achieve efficiency and conservation as well as social objectives. However, without collecting and analyzing data on the effectiveness of the approaches used, fishery councils will not know whether the program is meeting its intended goals and whether mid-course adjustments need to be made. 
To protect fishing communities and facilitate new entry into new or existing IFQ fisheries, we recommend that the Director of the National Marine Fisheries Service ensure that regional fishery management councils that are designing community protection and new entry methods take the following three actions: Develop clearly defined and measurable community protection and new entry objectives. Build performance measures into the design of the IFQ program. Monitor progress in meeting the community protection and new entry objectives. We provided a draft of this report to the Department of Commerce for review and comment. We received a written response from the Under Secretary of Commerce for Oceans and Atmosphere that included comments from the National Oceanic and Atmospheric Administration (NOAA). NOAA stated that our report was a fair and thorough assessment of community protection and new entry issues in IFQ programs. NOAA generally agreed with the report’s accuracy and conclusions and agreed with the substance of the report’s recommendations. NOAA’s comments and our detailed responses are presented in appendix IV of this report. NOAA indicated that it currently does not have the authority to direct the councils to adopt the report’s recommendations, because it cannot direct councils to take actions that are not mandated by the Magnuson-Stevens Act. We have revised our recommendations accordingly. However, NOAA agreed with our recommendation to develop clearly defined and measurable community protection and new entry objectives. NOAA noted that clearly defined and measurable objectives are often hard to identify, objectives may vary by IFQ program, and measurable objectives require data that are not always available or regularly collected. Nonetheless, it recognized that management objectives are important and should be used as much as possible as yardsticks in developing IFQ programs. 
NOAA agreed with our recommendation to build performance measures into the design of the IFQ program, noting the importance of selecting feasible and appropriate performance measures. Finally, NOAA agreed with our recommendation to monitor progress in meeting the community protection and new entry objectives. NOAA wrote that provisions for the monitoring and review of new IFQ program operations are addressed in the administration’s Magnuson-Stevens Act reauthorization proposal. NOAA also provided technical comments that we incorporated in the report as appropriate. We are sending copies of this report to the Secretary of Commerce and the Director of the National Marine Fisheries Service. We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841 or Keith Oleson at (415) 904-2218. Key contributors to this report are listed in appendix V. This is the second in a series of reports on individual fishing quota (IFQ) programs. For this report, we reviewed foreign and domestic quota programs and fishery cooperatives to determine (1) the methods available for protecting the economic viability of fishing communities and facilitating new entry into IFQ fisheries, (2) the key issues raised by community protection and new entry methods, and (3) the comparative advantages and disadvantages of the IFQ system and the fishery cooperative approach. For all three objectives, we visited Iceland, New Zealand, Scotland’s Shetland Islands, and Alaska and Maine in the United States, where we interviewed fishery management officials, quota program participants, researchers, and industry and community representatives and visited fishing communities. We also visited the fishing communities of Kodiak and Old Harbor, Alaska; and Jonesport, Portland, Stonington, and Vinalhaven, Maine. 
In these communities, we interviewed fishery participants, local government officials, and community representatives, and visited fishing and fishing-related businesses. We selected these countries and U.S. fishing communities in accordance with suggestions from program managers and industry experts to obtain coverage of a range of quota-based programs and fishing communities. We also reviewed the literature on IFQ and other quota-based programs and fishery cooperatives. To determine the methods available for protecting the economic viability of fishing communities and facilitating new entry into IFQ fisheries and the potential limitations of each method, we identified foreign and domestic programs with community protection or new entry provisions. We interviewed and obtained the views of foreign and domestic fishery management officials, program participants, researchers, and industry and community representatives on methods that are being used or could be used to protect communities and facilitate new entry, as well as the potential benefits and limitations of each method. We also searched for, but could not find, any studies and assessments of the extent to which each program has met its community protection or new entry objectives. To determine the comparative advantages and disadvantages of the IFQ system and the fishery cooperative approach, we identified and reviewed fishery management plans, laws, and regulations related to existing IFQ and fishery cooperative programs. We also reviewed and analyzed studies and assessments of these programs and interviewed foreign and domestic fishery management officials, researchers, and industry representatives on the comparative benefits and downsides of each approach. We conducted our review from February through October 2003 in accordance with generally accepted government auditing standards. This appendix describes IFQ programs in Iceland, New Zealand, and Scotland’s Shetland Islands, as well as the U.S. 
Mid-Atlantic surfclam/ocean quahog IFQ program and the U.S. Alaskan halibut and sablefish IFQ program. The term individual fishing quota as used in this report includes individual transferable quota (ITQ) and individual vessel quota (IVQ). Iceland’s economy depends heavily on the fishing industry, which provides 70 percent of export earnings and employs 12 percent of the work force. Iceland excluded foreign fishermen from its waters in the 1970s, when it introduced its exclusive economic zone. Nevertheless, cod, Iceland’s main commercial fish stock, had collapsed and other essential stocks were reported to be near collapse by the 1980s. In 1984, Iceland introduced individual fishing quotas for its major fisheries. Fishermen indirectly hold quota in Iceland because Iceland’s individual fishing quotas are linked to fishing vessels rather than persons. In 1990, Iceland allowed quota to be sold and leased, transforming IFQs into individual transferable fishing quota. According to fishery experts and managers, the fish in Iceland are property of the Icelandic people rather than individual quota holders. As such, quota allocations are indefinite in duration and could be revoked by the Icelandic Parliament at any time. While not explicitly designed with such objectives, Iceland’s IFQ program used the following provisions to protect communities and encourage new entry: Community right of first refusal. This rule provides communities with the right to veto the transfer of fishing vessels and associated quota to someone outside of the community. To stop the sale, the community must purchase the vessel at the market rate. Emergency community quota allocations. Iceland allocates small blocks of quota to communities hurt by the transfer of quota from their area. Separate quota markets for large and small vessels. To help protect small vessels, Iceland divided its IFQ system into two quota markets—one for large vessels and another for small vessels. 
Quota allocated to small vessels cannot be transferred to large vessels, and quota allocated to large vessels cannot be transferred to small vessels. Also, small-vessel fishermen can choose to fish a pre-set number of fishing days (days-at-sea) instead of participating in the IFQ system. Seafood is New Zealand's fourth-largest export, after dairy, meat, and forestry. In 2000, seafood exports were worth about NZ$1.43 billion and accounted for 90 percent of industry revenue. New Zealand introduced individual fishing quotas in 1986 for some of the most economically significant species to prevent overfishing in the inshore fisheries while developing the unexplored deepwater fisheries. Under the resulting quota management system, New Zealand manages about 50 species, such as hoki, orange roughy, and scallops. New Zealand's IFQ fish accounted for about 95 percent of the fishing industry's value in 2003. New Zealand's system allows fishermen to buy or sell quota, as well as lease quota on an annual basis. Fishery managers initially established quota accumulation limits for the inshore and deepwater fisheries. Furthermore, in 1990 the allocation of quota changed from a fixed weight to a percentage of the total allowable commercial catch. According to New Zealand fishery managers, community protection was not an objective of the quota management system, and New Zealand has few fishing-dependent communities. However, the New Zealand government allocated quota to the indigenous Maori tribes as part of the settlement agreements resolving claims of ownership of the fisheries under the Treaty of Waitangi. The Treaty of Waitangi Fisheries Commission is leasing quota to fishermen while it develops a formula to distribute quota to the Maori. Key barriers to reaching agreement on this distribution formula include identifying membership in tribes and agreeing on how much quota each tribe should receive. 
In recent years, groups of quota holders have joined together in cooperative-like organizations to help manage some of the fish stocks under the quota management system. This co-management by government and industry has led to the formation of key stakeholder groups in fisheries such as hoki, orange roughy, rock lobster, and scallops. Fishing is integral to the economy and culture of Scotland's Shetland Islands. In 1999, the fishing industry accounted for approximately one-fifth of the Shetland Islands' economy and provided over 2,500 jobs. As part of the United Kingdom, Scotland is party to the Common Fisheries Policy of the European Union. The United Kingdom receives catch quotas for each species from the European Union and then allocates portions of these quotas to groups of fishermen known as producer organizations, such as the Shetland Fish Producers Organization. The United Kingdom manages quotas under a fixed quota allocation system, a form of individual fishing quota that, in practice, allows quota trades. In the 1990s, because of concerns about high quota prices and foreigners holding local quota, the Shetland Islands' fishing industry developed the Shetland Community Fish Quota scheme to protect its fishermen. The Shetland Fish Producers Organization created and manages two pools of quota for Shetland Islands fishermen, one for member fishermen and one for new entrants. Using oil settlement monies, the local government purchased quota for the community fish quota pool. This quota pool is available to those who have no quota as well as those who need additional quota to participate in the fishery. In 2002, 13 vessels used the pool, more than half receiving their entire quota from the pool. The producers organization charges a fee based on gross earnings rather than a fixed-term lease. Thus, new entrants are charged only for fish landed and are not penalized for leasing quota they cannot fish. 
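A revenue-based pool fee of this kind can be sketched in a few lines of Python. This is a hypothetical illustration only: the function name and the rate schedule below are invented (the program's actual schedule, which varies with the ratio of quota held to quota borrowed, appears in table 2). Only the basic design, charging a percentage of gross earnings so that unfished quota costs nothing, comes from the program description.

```python
def pool_fee(gross_earnings, quota_held, quota_borrowed):
    """Charge a fee on landed-fish revenue rather than a fixed lease price.

    The fee rate scales with reliance on borrowed quota. The rate
    schedule here is hypothetical, invented for illustration.
    """
    if quota_borrowed <= 0:
        return 0.0  # nothing borrowed from the pool, nothing owed
    ratio = quota_held / (quota_held + quota_borrowed)
    # Hypothetical schedule: fishermen with no quota of their own pay a
    # higher share of revenue than members merely topping up holdings.
    rate_percent = 5 if ratio == 0 else 3 if ratio < 0.5 else 2
    return gross_earnings * rate_percent / 100

# A new entrant with no quota of their own pays only on fish landed:
print(pool_fee(10_000, quota_held=0, quota_borrowed=50))  # 500.0
print(pool_fee(0, quota_held=0, quota_borrowed=50))       # 0.0: unfished quota costs nothing
```

The design choice matters for new entrants: under a fixed-term lease, a fisherman who lands nothing still owes the lease price; under a revenue-based fee, the charge falls to zero.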
The fee is based on the ratio of quota held to quota borrowed. Table 2 shows how this fee is charged. The surfclam/ocean quahog fishery is a small, industrialized fishery primarily located in the waters from Maine to Virginia, with commercial concentrations found off the Mid-Atlantic states. The ocean quahog fishery arose as a substitute for surfclams when the surfclam fishery declined in the mid-1970s. While ocean quahogs are found farther offshore than surfclams, largely the same vessels are used in each fishery. The surfclam fishery developed after World War II and was being overfished by the mid-1970s. Disease and industry overfishing led the Mid-Atlantic Fishery Management Council to develop a plan to manage the fishery. The surfclam/ocean quahog fishery consists of small, independent fishermen and vertically integrated companies. Individual fishing quotas were established for the surfclam/ocean quahog fishery in 1990; it was the first IFQ program in the United States. The program was not designed with, and does not have, specific objectives aimed at protecting fishing communities or facilitating new entry; rather, it was designed to help stabilize the fishery and reduce excessive investment in fishing capacity. The program included no specific and measurable limits on how much quota an individual could accumulate. However, allowing quota to be sold and leased provides the opportunity for entry into the fishery. The Pacific halibut and sablefish fisheries are located off the coast of Alaska. The fishing fleets are primarily owner-operated vessels of various lengths that use hook-and-line or pot (fish trap) gear. Some vessels catch both halibut and sablefish, and, given the location of both species, each is often caught incidentally in fishing for the other. Overcapacity of fishing effort led to fishing seasons that lasted less than 3 days and a race to catch fish. 
The Alaskan halibut and sablefish IFQ program was implemented in 1995, shortly before Congress placed a moratorium on new IFQ programs. The program was designed, in part, to help improve safety for fishermen, enhance efficiency, and reduce excessive investment in fishing capacity. The IFQ program includes the following community protection or new entry provisions: Community quota. When the program was implemented, the council set aside quota for a community development program to develop fishing and fishing-related activities in villages in western Alaska. In 2002, the council amended the IFQ program to allow certain Gulf of Alaska coastal communities to buy Alaskan halibut and sablefish quota. Accumulation limits. The North Pacific Council adopted accumulation limits ranging from 0.5 percent to 1.5 percent, depending on the fishing area, to help protect the fisheries' owner-operator fleet, which operates out of smaller communities. Vessel categories. Each eligible person's quota was permanently assigned to one of four vessel categories based on vessel type and length. Quota blocks. The council permanently placed small amounts of quota in blocks, in part, to help make quota available and affordable for entry-level fishermen. Large amounts of quota remained unblocked. Blocks can only be bought or transferred in their entirety. An individual may hold up to two quota blocks; an individual who holds any amount of unblocked quota may hold only one block. Crew consideration. Eligibility to obtain most quota by transfer is limited to those who have 150 days of experience participating in any U.S. fishery. A fishery cooperative is a group of fishermen who agree to work together for their mutual benefit. Two fishery cooperatives emerged as an alternative to IFQ programs in U.S. federal waters: (1) the Whiting Conservation Cooperative, established in 1997, and (2) the Bering Sea Pollock Conservation Cooperative, established in 1998. 
These cooperatives are voluntary contractual agreements among fishermen to apportion shares of the catch among themselves. Fishery cooperatives operate under the Fishermen’s Collective Marketing Act of 1934 (15 U.S.C. § 521), which provides an antitrust exemption to fishermen, allowing them to jointly harvest, market, and price their product. The Pacific whiting fishery, located off the coasts of Washington, Oregon, and California, is under the jurisdiction of the Pacific Fishery Management Council. Whiting is harvested using mid-water trawl nets (cone-shaped nets towed behind a vessel) and primarily processed into surimi. The council has divided the Pacific whiting total allowable catch (TAC) among three sectors—vessels that deliver to onshore processors, vessels that deliver to processing vessels, and vessels that catch and also process. In the 1990s, the fishery was overcapitalized and fishing companies were engaged in a race for fish. In 1997, four companies operating the 10 catcher-processor vessels in the fishery voluntarily formed the Whiting Conservation Cooperative, which is organized as a nonprofit corporation under the laws of the state of Washington. The overall purposes of the cooperative are to (1) promote the intelligent and orderly harvest of whiting, (2) reduce waste and improve resource utilization, and (3) reduce incidental catch of species other than whiting. The specific goals are to (1) eliminate the race for fish and increase efficiency, (2) improve the efficiency of the harvest by using an independent monitoring service and sharing catch and incidental catch information, and (3) conduct and fund research for resource conservation. The cooperative is not involved in matters relating to pricing or marketing of whiting products. The cooperative’s contract allocates the Pacific whiting TAC for the catcher-processor sector among the cooperative’s members, who agree to limit their individual harvests to a specific percentage of the TAC. 
Once individual allocations are made, the contract allows for quota transfers among member companies. To monitor the catch, the contract requires the members to maintain full-time federal observers on their vessels. Member companies bear the cost of observer coverage. The contract also requires members to report catches to a private centralized monitoring service. To ensure compliance, the contract contains substantial financial penalties for members exceeding their share of the quota.

The pollock fishery off the coast of Alaska is the largest U.S. fishery by volume. The fishery is under the jurisdiction of the North Pacific Fishery Management Council, which sets the TAC each year. About 5 percent of the TAC is held in reserve to allow for the incidental taking of pollock by other fisheries, 10 percent is allocated to Alaska's community development quota program, and the remainder, called the directed fishing allowance, is allocated to the pollock fishery. Like whiting, pollock is harvested using mid-water trawl nets. Pollock swim in large, tightly packed schools and do not co-mingle with other fish species. Pollock are primarily processed into surimi and fillets. In the 1990s, the Bering Sea pollock fishery was severely overcapitalized, producing a race for fish. As a result, the fishing season was reduced from 12 months in 1990 to 3 months in 1998. The fishery is composed of three sectors—inshore, offshore catcher-processor, and offshore mothership (large processing vessel). The American Fisheries Act statutorily allocated the pollock fishery TAC among these three sectors and specified the eligible participants in each sector. The nine companies that operated the 20 qualified catcher-processor vessels formed the Pollock Conservation Cooperative in December 1998. The purpose of the cooperative was to end the race for fish.
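The pollock TAC apportionment described above reduces to simple arithmetic. The TAC figure below is hypothetical; the percentages are those given in the text.

```python
# Arithmetic sketch of the Bering Sea pollock TAC split described above.
tac_mt = 1_000_000
incidental_reserve = 0.05 * tac_mt   # held for incidental taking by other fisheries
cdq_allocation = 0.10 * tac_mt       # Alaska community development quota program
directed_fishing_allowance = tac_mt - incidental_reserve - cdq_allocation
print(directed_fishing_allowance)    # -> 850000.0
```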
Under the cooperative's agreement, members limit their individual catches to a specific percentage of the total allowable catch allocated to their sector. Once the catch is allocated, members can freely transfer their quota to other members. The American Fisheries Act requires each catcher-processor vessel to have two federal observers on board at all times. Member companies bear the cost of observer coverage on their vessels. A private sector firm also tracks daily catch and incidental catch data to ensure that each member stays within its agreed-upon harvest limits. To ensure compliance, the contract contains substantial financial penalties for members exceeding their share of the quota. The cooperative is not involved in matters relating to pricing or marketing of pollock products. In addition to operating under the terms of the cooperative's contract, members of the cooperative must conduct fishing activities in compliance with certain NMFS and council requirements. Specifically, NMFS is responsible for closing the fishery when the sectoral allocation is reached. NMFS and the council set the season, impose restrictions against fishing in certain areas and at certain times, and set incidental catch limits for other species.

The following are GAO's comments on NOAA's written comments, provided in the Under Secretary of Commerce for Oceans and Atmosphere's letter dated February 6, 2004.

1. The report provided examples of National Standards relating to issues discussed in the report (overfishing, equity, efficiency, community protection, and new entry). We did not include National Standards relating to cost minimization, by-catch, and safety-at-sea, because we did not discuss these issues in the report.

2. We revised the text to make it clear that we were providing examples of commercial fisheries where new IFQ programs were being considered.

3. We revised the text to reflect that the halibut season was increased to 8 months.

4. We deleted the footnote relating to the uniqueness of Alaska, which is regulated by the North Pacific Council, as compared with states covered by the other fishery councils, which regulate fisheries in multiple states.

In addition to those named above, Doreen S. Feldman, John S. Kalmar, Jr., Susan J. Malone, Mark R. Metcalfe, Carol Herrnstadt Shulman, and Tama R. Weinberg made key contributions to this report.
To assist in deliberations on individual fishing quota (IFQ) programs, GAO determined (1) the methods available for protecting the economic viability of fishing communities and facilitating new entry into IFQ fisheries, (2) the key issues faced by fishery managers in protecting communities and facilitating new entry, and (3) the comparative advantages and disadvantages of the IFQ system and the fishery cooperative approach. Several methods are available for protecting the economic viability of fishing communities and facilitating new entry into IFQ fisheries. The easiest and most direct way to help protect communities under an IFQ program is to allow the communities themselves to hold quota. Fishery managers can also help communities by adopting rules aimed at protecting certain groups of fishery participants. Methods for facilitating new entry principally fall into three categories: (1) adopting transfer rules on selling or leasing quota that help make quota more available and affordable to new entrants; (2) setting aside quota for new entrants; and (3) providing economic assistance, such as loans and subsidies, to new entrants. In considering methods to protect communities and facilitate new entry into IFQ fisheries, fishery managers face issues of efficiency and fairness, as well as design and implementation. Community protection and new entry methods are designed to achieve social objectives, but realizing these objectives may undermine economic efficiency and raise questions of equity. For example, allowing communities to hold quota may result in a loss of economic efficiency because communities may not have the knowledge and skills to manage the quota effectively. Similarly, rules to protect communities or facilitate new entry may appear to favor one group of fishermen over another. Furthermore, community protection and new entry methods raise a number of design and implementation challenges. 
For example, according to fishery experts, defining a community can be challenging because communities can be defined in geographic and nongeographic ways. Similarly, loans or grants may help provide new entrants with the capital needed to purchase quota, but they may also contribute to further quota price increases. Given the various issues that fishery managers face in developing community protection and new entry methods, it is unlikely that any single method can protect every type of fishing community or facilitate new entry into every IFQ fishery. Deciding which method(s) to use is made more challenging because fishery managers have not conducted comprehensive evaluations of how IFQ programs protect communities or facilitate new entry. In comparing the key features of IFQ programs and U.S. fishery cooperatives, we found that each approach has advantages and disadvantages in terms of regulatory and management framework, number of participants, quota allocation and transfer, and monitoring and enforcement. Specifically, in terms of regulatory and management framework, IFQ programs have greater stability than cooperatives because they are established by federal regulations, while cooperatives are voluntary contractual arrangements. In terms of quota allocation and transfer, IFQ programs are open in that they allow the transfer of quota to new entrants, whereas cooperatives are exclusive by contractual arrangement among members. In terms of monitoring and enforcement, IFQ programs are viewed as being more difficult to administer, because NMFS must monitor individual participants, while cooperatives are viewed as being simpler for NMFS to administer, because NMFS monitors only one entity--the cooperative. For some fisheries, a combined approach may be beneficial. For example, a cooperative of IFQ quota holders can combine an IFQ program's stability with a cooperative's collaboration to help manage the fishery.
The Federal Information Security Management Act (FISMA) specifies requirements for protecting federal systems and data. Enacted into law on December 17, 2002, as title III of the E-Government Act of 2002, FISMA requires every federal agency, including agencies with national security systems, to develop, document, and implement an agencywide information security program to secure the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Specifically, this program is to include the following:

- periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems;
- risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system;
- periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices that include testing of management, operational, and technical controls for every system identified in the agency's required inventory of major information systems;
- subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems;
- security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency;
- a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency;
- procedures for detecting, reporting, and responding to security incidents; and
- plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.
FISMA also assigns specific information security responsibilities to the Office of Management and Budget (OMB), NIST, agency heads, and agency chief information officers (CIO). Generally, OMB is responsible for developing policies and guidance and overseeing agency compliance with FISMA, NIST is responsible for developing technical standards, and agency heads and CIOs are responsible for ensuring that each agency implements the information security program and other requirements of FISMA. These responsibilities do not, however, apply equally to all agency information systems. FISMA differs in its treatment of national security and non-national security systems. While FISMA requires each federal agency to manage its information security risks through its agencywide information security program, the law recognizes a long-standing division between requirements for national security and non-national security systems that limits civilian management and oversight of information systems supporting military and intelligence activities. FISMA recognizes the division between national security systems and non-national security systems in two ways. First, to ensure compliance with applicable authorities, the law requires agencies using national security systems to implement information security policies and practices as required by standards and guidelines for national security systems in addition to the requirements of FISMA. Second, the responsibilities assigned by FISMA to OMB and NIST are curtailed. With regard to national security systems, OMB's responsibilities are reduced to oversight of, and reporting to Congress on, agency compliance with FISMA. OMB's annual review and approval or disapproval of agency information security programs, for example, does not include national security systems. Similarly, according to FISMA, NIST-developed standards, which are mandatory for non-national security systems, do not apply to national security systems.
FISMA limits NIST to developing, in conjunction with DOD and the National Security Agency (NSA), guidelines for agencies on identifying an information system as a national security system, and to ensuring that NIST standards and guidelines are complementary with standards and guidelines developed for national security systems. FISMA also requires NIST to consult with other agencies to ensure use of appropriate information security policies, procedures, and techniques in order to improve information security and avoid unnecessary and costly duplication of effort. In light of this division between national security and non-national security systems, NIST is responsible for developing standards and guidance for non-national security information systems. For example, NIST issues mandatory Federal Information Processing Standards (FIPS) and special publications that provide guidance for information systems security for non-national security systems in federal agencies. For national security systems, National Security Directive 42 established CNSS, an organization chaired by the Department of Defense, to, among other things, issue policy directives and instructions that provide mandatory information security requirements for national security systems. In addition, the defense and intelligence communities develop implementing instructions and may add additional requirements where needed. FISMA provides a further exception to compliance with NIST standards. It permits an agency to use more stringent information security standards if it certifies that its standards are at least as stringent as the NIST standards and are otherwise consistent with policies and guidelines issued under FISMA.
It is on the basis of this authority that the Department of Defense establishes information security standards for all of its systems (national security and non-national security systems) that are more stringent than the standards required for protecting non-national security systems under FISMA. For example, the DOD directive establishing the Department of Defense Information Assurance Certification and Accreditation Process (DIACAP) for authorizing the operation of DOD information systems requires annual certification that the DIACAP process is current and more stringent than NIST standards under FISMA. To help implement the provisions of FISMA for non-national security systems, NIST has developed a risk management framework for agencies to follow in developing information security programs. The framework is specified in NIST Special Publication (SP) 800-37, revision 1, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach, which provides agencies with guidance for applying the risk management framework to federal information systems. The framework in SP 800-37 consists of security categorization, security control selection and implementation, security control assessment, information system authorization, and security control monitoring. It also provides a process that integrates information security and risk management activities into the system development life cycle. Figure 1 provides an illustration of the framework and notes relevant security guidance for each part of the framework. Other key NIST publications related to the risk management framework include the following: Federal Information Processing Standard (FIPS) 199, Standards for Security Categorization of Federal Information and Information Systems.
Provides agencies with criteria to identify and categorize their information systems based on providing appropriate levels of information security according to a range of risk levels.

NIST SP 800-60, revision 1, Guide for Mapping Types of Information and Information Systems to Security Categories. Provides guidance for implementing FIPS 199.

FIPS 200, Minimum Security Requirements for Federal Information and Information Systems. Provides minimum information security requirements for protecting the confidentiality, integrity, and availability of federal information systems.

NIST SP 800-53, revision 3, Recommended Security Controls for Federal Information Systems and Organizations. Provides guidelines for selecting and specifying security controls for information systems.

NIST SP 800-70, revision 1, National Checklist Program for IT Products--Guidelines for Checklist Users and Developers. Provides guidance for using the National Checklist Repository to select a security configuration checklist, which may include items such as security controls used in FISMA system assessments.

NIST SP 800-53A, revision 1, Guide for Assessing the Security Controls in Federal Information Systems. Provides agencies with guidance for building security assessment plans and procedures for assessing the effectiveness of security controls employed in information systems.

In applying the provisions of FIPS 200, federal civilian agencies with non-national security systems are to first categorize their information and systems as required by FIPS 199, and then should select an appropriate set of security controls from NIST SP 800-53 to satisfy their minimum security requirements. This helps to ensure that appropriate security requirements and security controls are applied to all non-national security systems. Next, controls are implemented, drawing on security configuration checklists selected using NIST SP 800-70, and information systems are authorized under the SP 800-37 process.
Finally, agencies assess, test, and monitor the effectiveness of the information security controls using the guidance in NIST SP 800-53A. Many other FIPS and NIST special publications provide guidance for the implementation of FISMA requirements for non-national security systems. For national security systems, organizations responsible for developing policies, directives, and guidance include CNSS, DOD, and the intelligence community. The processes and criteria established by this guidance are often similar to those required by NIST guidance for non-national security systems. For example, security guidance for certification and accreditation requires risk assessments, verification of security requirements in a security plan or other document, testing of security controls, and formal authorization by an authorizing official. Roles of these agencies and key security guidance that they have issued are described below. CNSS provides a forum for the discussion of policy issues, sets national policy, and provides direction, operational procedures, and guidance for the security of national security systems. The Department of Defense chairs the committee under the authorities established by National Security Directive 42, issued in July 1990. This directive designates the Secretary of Defense and the Director of the National Security Agency as the Executive Agent and National Manager for national security systems, respectively. The committee has voting representatives from 21 departments and agencies. In addition, nonvoting observers such as NIST participate in meetings, provide comments and suggestions, and participate in subcommittee and working group activities. The committee organizes its activities by developing an annual program of work and plan of action and milestones. NSA provides logistical and administrative support for the committee, including a Secretariat manager who organizes the day-to-day activities of the committee. 
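Stepping back to the civilian-agency flow described earlier (FIPS 199 categorization followed by control selection under FIPS 200 and SP 800-53), the sketch below illustrates the high-water-mark rule that FIPS 200 applies to non-national security systems: a system's overall impact level is the highest of its confidentiality, integrity, and availability impact levels, and that level determines which SP 800-53 control baseline serves as the starting point. The function names and the baseline string are illustrative, not taken from the standards.

```python
# Simplified sketch of FIPS 199 categorization and the FIPS 200
# "high water mark" rule for non-national security systems.
IMPACT_ORDER = {"low": 0, "moderate": 1, "high": 2}

def high_water_mark(confidentiality, integrity, availability):
    """Overall system impact level is the highest of the three
    FIPS 199 impact levels (the FIPS 200 high-water-mark rule)."""
    return max((confidentiality, integrity, availability),
               key=IMPACT_ORDER.__getitem__)

def baseline_for(impact_level):
    # Each impact level maps to an SP 800-53 control baseline,
    # which the agency then tailors to the system.
    return f"SP 800-53 {impact_level}-impact baseline"

level = high_water_mark("low", "moderate", "low")
print(level, "->", baseline_for(level))  # moderate -> SP 800-53 moderate-impact baseline
```

Note that this high-water-mark approach is specific to the civilian side; as discussed below, CNSS guidance for national security systems supplements the SP 800-53 controls with its own categorization scheme.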
Since its inception, the committee has issued numerous policies, directives, and instructions that are binding upon all federal departments and agencies for national security systems. Key publications include the Information Assurance Risk Management Policy for National Security Systems, National Policy on Certification and Accreditation of National Security Telecommunications and Information Systems, National Information Assurance Certification and Accreditation Process, and a National Information Assurance Glossary. To defend DOD information systems and computer networks from unauthorized or malicious activity, the department established an Information Assurance Framework in its 8500 series of guidance. This framework allows DOD to ensure the security of its information systems by providing standards and support to its component information assurance programs. DOD uses this framework for all of its IT systems. DOD directive 8500.01 and implementing instruction 8500.2, which documents information security controls, are the primary policy documents that describe this framework. In addition, the Department of Defense Information Assurance Certification and Accreditation Process, published in November 2007, is documented in DOD 8510.01 and the online DIACAP knowledge service. Also, the establishment of an information security program is described in DOD regulation 5200.01-R, dated January 1997. The intelligence community is a federation of executive branch agencies and organizations that work separately and together to conduct intelligence activities necessary for the conduct of foreign relations and the protection of the national security of the United States. Member organizations include intelligence agencies, military intelligence, and civilian intelligence and analysis offices within federal executive departments. The community is led by the Director of National Intelligence, who oversees and directs the implementation of the National Intelligence Program. 
Historically, the intelligence community has had separate instructions related to information system security. For example, Director of Central Intelligence Directive (DCID) 6/3, Protecting Sensitive Compartmented Information within Information Systems, and its implementation manual provided policy and procedures for the security and protection of systems that create, process, store, and transmit intelligence information, and defined and mandated the use of a risk management process and a certification and accreditation process. Prior to efforts to harmonize information security guidance, federal organizations had developed separate, and sometimes disparate, guidance for information security. For example, the National Security Agency used the National Information Systems Certification and Accreditation Process, the intelligence community used DCID 6/3, and DOD used the Department of Defense Information Technology Security Certification and Accreditation Process, which later became the DIACAP. According to the Federal CIO Council’s strategic plan and federal officials in DOD and the intelligence community, these processes had some elements in common; however, the variances in guidance were sufficient to cause several unintended and undesirable consequences for the federal community. For example, both DOD and NIST had catalogs of information security controls that covered similar areas but had different formats and structures. As a result, according to the CIO Council, organizations responsible for providing oversight of federal information systems such as members of the CIO Council and CNSS could not easily assess the security of federal information systems. In addition, reciprocity—the mutual agreement among participating enterprises to accept each other’s security assessments—was hampered because of the apparent differences in interpreting risk levels. 
Because agencies were not confident in their understanding of other agencies’ certification and accreditation results, they sometimes felt it necessary to recertify and reaccredit information systems, expending resources, including time and money, which may not have been necessary. A task force consisting of representatives from civilian, defense, and intelligence agencies has made progress in establishing a unified information security framework for national security and non-national security systems. Specifically, NIST has published three initial documents developed by a task force working group to harmonize information security standards for national security and non-national security systems, and is scheduled to publish two more by early 2011. While much has been accomplished, differences remain between the guidance for the two types of systems, and significant work remains to implement the harmonized guidance on national security systems, such as developing supporting agency-specific guidance and establishing specific time frames and performance measures for implementation. Further, while the task force has implemented elements of key practices for interagency coordination that GAO has identified, much of this implementation is not documented. The lack of fully implemented practices, such as those that assign responsibilities and measure progress, could limit the task force’s continued progress as personnel change and resources are allocated among other agency activities. According to NIST and CNSS officials, a Joint Task Force Transformation Initiative Interagency Working Group was formed in April 2009 with representatives from NIST, DOD, and ODNI to produce a unified information security framework for the federal government. 
Instead of having parallel publications for national security systems and non-national security systems for risk management and systems security, the intent, according to members of the joint task force, is to have common publications to the maximum extent possible. According to officials involved in the task force, harmonized security guidance is expected to result in less duplication of effort, lower maintenance costs, and more effective implementation of controls across multiple interconnected systems. In addition, the harmonized guidance should make it simpler and more cost-effective for vendors and contractors to supply security products and services to the federal government. The task force arose out of prior efforts to harmonize security guidance among national security systems. In 2006, the ODNI and DOD CIOs began an initiative to harmonize the two organizations’ certification and accreditation guidance and processes for IT systems. For example, in July 2006, DOD and the intelligence community established a Unified Cross Domain Management Office to address duplication and uncoordinated security activities and improve the security posture of the agencies’ highest-risk security devices. In January 2007, the DOD and ODNI CIOs published seven certification and accreditation transformation goals that included development of common security controls. According to DOD, by July 2008, DOD and the intelligence community were working on six documents that mirrored similar NIST risk management and information security publications. In August 2008, the CIOs signed an agreement adopting common guidelines to streamline and build reciprocity into the certification and accreditation process. As this effort progressed, the agencies involved determined that it would benefit from closer engagement with NIST and the development of common security guidance. 
NIST had been informally involved in the harmonization effort for several years, but, according to CNSS, DOD, and ODNI, during the CNSS annual conference in the spring of 2009, the CNSS community decided to more actively engage NIST and agree to use NIST documents as the basis for information security controls and risk management. The committee also agreed to complete policies and instructions to support use of the NIST publications. Following the conference, a memo from the Acting CIO for the intelligence community stated that the intelligence community intended to follow CNSS guidance that pointed to related NIST publications. NIST currently leads the working group and the task force publication development process. Working group members are selected for each publication from participating agencies and support contractors to provide subject matter expertise and administrative support. In addition, the task force is guided by a senior leadership team from NIST, CNSS, DOD, and ODNI that reviews and approves the harmonized publications. As illustrated in figure 2, key areas targeted for the common guidance include risk management, security categorization, security controls, security assessment procedures, and the security authorization process contained in the NIST risk management framework. NIST develops standards and guidance for non-national security systems, including most systems in civilian agencies. CNSS provides policy, directives, and instructions binding upon all U.S. government departments and agencies for national security systems, including systems in the intelligence community and DOD (e.g., classified systems). Since NIST does not have authority over national security systems, CNSS issuances authorize the use of the harmonized NIST guidance developed by the joint task force. As necessary, CNSS also develops additional information security requirements to accommodate the unique nature of national security systems. 
Finally, individual agencies may create their own specific implementing guidance. The joint task force has published three of five planned publications containing harmonized information security guidance and is actively developing the final two publications. These include a new publication as well as revisions to existing NIST guidance, as summarized in table 1. In addition, the task force is considering collaboration on two additional publications. As of June 2010, the three publications developed by the joint task force and released by NIST are the following:

NIST SP 800-53, revision 3, Recommended Security Controls for Federal Information Systems and Organizations, was published in August 2009. It contains the catalog of security controls and technical guidelines that federal agencies will use to protect federal information and information systems, and is an integral part of the unified information security framework for the entire federal government. The security controls within revision 3 were updated by the joint task force members (NIST, CNSS, DOD, and ODNI) using specific information from databases of known cyber attacks and threat information. According to the task force leader and the CNSS manager, new controls and enhancements were added as a result of the harmonization effort. For example, control AC-4, related to Information Flow Enforcement, had several enhancements added because of input from the national security systems community.

NIST SP 800-37, revision 1, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach, was released in February 2010. This publication replaces the traditional certification and accreditation process with the six-step risk management framework, including a process of assessment and authorization.
According to the publication, the revised process emphasizes building information security capabilities into federal information systems through the application of security controls while implementing an ongoing monitoring process. It also provides information to senior leaders to facilitate better decisions regarding the acceptance of risk arising from the operation and use of information systems. According to the task force leader and the CNSS manager, the publication contains few direct changes as a result of the harmonization effort. Rather, task force representatives determined that the existing NIST risk management framework contained the same concepts and content as existing national security-related guidance, such as the DIACAP. NIST SP 800-53A, revision 1, Guide for Assessing the Security Controls in Federal Information Systems and Organizations, was published in June 2010. The updated security assessment guideline is intended to incorporate leading practices in information security from DOD, the intelligence community, and civil agencies and includes security control assessment procedures for both national security and non-national security systems. The guidelines for developing security assessment plans are intended to support a wide variety of assessment activities in all phases of the system development life cycle, including development, implementation, and operation. According to the task force leader and the CNSS manager, while there were few direct changes to the content of SP 800-53A as a result of the harmonization effort, task force members are collaborating on revising the assessment cases, which provide additional instruction on techniques for testing specific controls. According to the leader, this effort is to be completed by the end of 2010. 
Because CNSS, not NIST, has the authority to issue binding guidance for national security systems, CNSS has issued supplemental guidance for implementing NIST SP 800-53: CNSS Instruction 1253 (CNSSI-1253), Security Categorization and Control Selection for National Security Systems, which was published in October 2009. This instruction states that the Director of National Intelligence and the Secretary of Defense have directed that the processes described in NIST SP 800-53, revision 3 (as amended by the instruction), and the NIST security and programmatic controls contained in 800-53 apply to national security systems. Using the controls in 800-53, this instruction provides categorization and corresponding baseline sets of controls for national security systems. CNSS also recently published a revised common glossary of information security terms in support of the goal of adopting a common lexicon for the national security and non-national security communities. This revised glossary harmonizes terminology used by DOD, the intelligence community, and civil agencies (which use a NIST-developed glossary) to enable all three to use the same terminology (and move toward shared documentation and processes). According to the CNSS Secretariat manager, in December 2010 CNSS plans to revise an existing policy, CNSSP 6, to generally direct the use of NIST publications, including SP 800-37 and SP 800-53A, as common guidance and will include related CNSS instructions (if any) on how to implement the NIST guidance for national security systems. This will coincide closely with the publication of NIST SP 800-39 and SP 800-30, revision 1. The CNSS manager stated that once common guidance developed jointly with NIST is finalized, CNSS needs to determine whether it will need supplemental instructions because of the uniqueness of national security systems (e.g., their special operating environments or the classified information they contain). 
However, CNSS officials said that the committee intends to keep this unique guidance to a minimum and use the common security guidance to the maximum extent possible. The joint task force's development schedule lists two additional joint task force publications:

NIST SP 800-39, Enterprise-Wide Risk Management: Organization, Mission, and Information Systems View, planned for publication in January 2011, is to provide an approach for managing that portion of risk resulting from the incorporation of information systems into the mission and business processes of an organization.

NIST SP 800-30, revision 1, Guide for Conducting Risk Assessments, planned for publication in February 2011, is a revision of an existing NIST publication that will be refocused to address risk assessments as part of the risk management framework.

In addition to the two planned publications, the joint task force leader and the CNSS Secretariat manager stated that two other publications are under consideration for collaboration: Guide for Information System Security Engineering, under consideration for publication in September 2011, and Guide for Software Application Security, under consideration for publication in November 2011. The estimated completion dates for these future publications are later than originally planned. For example, as of January 2010, SP 800-39 and SP 800-30, revision 1, were to have been completed in August 2010, and the information system security engineering guide was to be completed in October 2010. According to the task force leader, the delays are due to additional work and coordination activities that needed to be completed, the breadth and depth of comments in the review process, and challenges in coordination with other task force members. Task force members acknowledge that there are additional areas of IT security guidance where it may be possible to collaborate, but they have not yet documented plans for future efforts.
The CNSS manager stated that the committee intends to update its existing plan of action and milestones in fall 2010, but this has not yet been completed. Until the task force defines topics and deadlines for future efforts, opportunities for additional collaboration will likely be constrained. Despite the efforts to harmonize information security guidance, many differences remain. These include differences in system categorization, selection of security controls, and use of program management controls. System categorization. Different methodologies are used to categorize the impact level of the information contained in non-national security systems and national security systems. For non-national security systems, SP 800-53 applies the concept of a high-water mark for categorizing the impact level of the system, as defined in FIPS 199. This means that the system is categorized according to the worst-case potential impact of a loss of confidentiality, integrity, or availability of information or an information system. For example, if loss of confidentiality was deemed to be high impact, but loss of integrity and availability were deemed to be moderate impact, the system would be considered a high-impact system. As a result, SP 800-53 contains three recommended baselines (starting points) for control selection: low, moderate, and high. By contrast, while national security systems will use the controls in SP 800-53, the impact level will be determined using CNSSI-1253, not FIPS 199. CNSSI-1253 uses a more granular structure in which the potential impact levels of loss of confidentiality, integrity, and availability are individually used to select categorizations. As a result, while FIPS 199 has three impact levels (low, moderate, and high), CNSSI-1253 has 27 (all possible combinations of low, moderate, and high for confidentiality, integrity, and availability).
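The contrast between the two categorization approaches can be illustrated with a short sketch. This is hypothetical code, not drawn from FIPS 199 or CNSSI-1253: the high-water mark collapses the three impact levels to their worst case, while the granular approach retains the individual triple.

```python
# Illustrative sketch only -- not code from FIPS 199 or CNSSI-1253.
# It contrasts the high-water-mark categorization used for non-national
# security systems with the granular triple retained by CNSSI-1253.
from itertools import product

LEVELS = ["low", "moderate", "high"]

def fips199_category(confidentiality, integrity, availability):
    """High-water mark: the system takes the worst-case impact level."""
    return max((confidentiality, integrity, availability), key=LEVELS.index)

def cnssi1253_category(confidentiality, integrity, availability):
    """Granular: the triple itself is the categorization (27 combinations)."""
    return (confidentiality, integrity, availability)

# The example from the text: high confidentiality with moderate integrity
# and availability yields a high-impact system under the high-water mark.
print(fips199_category("high", "moderate", "moderate"))    # high
print(cnssi1253_category("high", "moderate", "moderate"))  # ('high', 'moderate', 'moderate')

# All possible CNSSI-1253 categorizations: 3 x 3 x 3 = 27.
print(len(list(product(LEVELS, repeat=3))))  # 27
```

The sketch makes the trade-off visible: the high-water mark maps every system to one of three baselines, while the granular triple preserves information that would otherwise have to be recovered through subsequent tailoring of controls.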
According to an official at NIST, use of the high-water mark is easier for civilian agencies to implement for non-national security systems, and provides a more conservative approach by employing stronger controls by default. According to CNSS, retaining the more granular impact levels reduces the need for subsequent tailoring of controls. Officials involved in the harmonization effort stated that while they may attempt to reconcile the approaches in the future, there are no current plans to do so. Security control selection. In our analysis of NIST and CNSS security control baselines for non-national security systems and national security systems, we determined that the new national security system baselines based on SP 800-53 incorporated almost all of the controls found in comparable non-national security baselines, as well as additional security controls and enhancements. For example, a high-impact system under the non-national security system baseline includes 328 controls and subcontrols. The equivalent baseline for a national security system includes 397 controls and subcontrols, out of which 326 were shared between the two baselines. Both CNSS and NIST officials stated that their baselines represent the starting point for determining which controls are appropriate for an individual system and that controls and enhancements may be removed or added as needed in accordance with established guidance. CNSS officials stated that national security systems provide unique capabilities (e.g., intelligence, cryptographic, or command and control), operate in diverse environments, and are subject to advanced cyber threats. As a result, national security systems may require more protection and thus more security controls than non-national security systems. 
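The baseline comparison described above is essentially a set comparison over control identifiers. A minimal sketch of that kind of analysis follows; the control IDs here are deliberately tiny, hypothetical samples, not the actual 328- and 397-item baselines GAO analyzed.

```python
# Hypothetical sketch: comparing two control baselines as sets of control
# identifiers. The real SP 800-53 baselines are far larger; these IDs and
# set contents are illustrative only.
non_nss_high = {"AC-2", "AC-4", "AU-2", "CM-2", "IA-2"}                    # non-national security baseline
nss_equivalent = {"AC-2", "AC-4", "AC-4(1)", "AU-2", "CM-2", "IA-2", "SC-8"}  # national security baseline

shared = non_nss_high & nss_equivalent   # controls common to both baselines
added = nss_equivalent - non_nss_high    # extra controls/enhancements for national security systems
dropped = non_nss_high - nss_equivalent  # controls not carried over

print(sorted(shared))   # ['AC-2', 'AC-4', 'AU-2', 'CM-2', 'IA-2']
print(sorted(added))    # ['AC-4(1)', 'SC-8']
print(sorted(dropped))  # []
```

In this toy example, as in GAO's analysis, the national security baseline incorporates almost all of the non-national security controls and adds others on top.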
Also, according to CNSS officials, while security controls for non-national security systems are often aimed at a broad IT environment, guidance for national security systems is developed with added specificity and a focus on vulnerabilities, threats, and countermeasures to protect classified information. However, NIST officials noted that some non-national security systems may require levels of protection that are equal to the levels for national security systems in order to counter cyber attacks. For example, certain high-impact non-national security systems may be supporting applications that are part of critical infrastructure. Therefore, the mission criticality of some non-national security systems may require the same control techniques used by national security systems to counter cyber attacks. Program management controls. NIST SP 800-53, revision 3, identifies 11 program management controls that agencies are required to implement organizationwide to support all security control baselines for non-national security systems. CNSSI-1253 states that these controls are optional. A CNSS official stated that the implementation of program management controls is optional to give the CNSS community flexibility to implement them in a way that best fits their information security programs' organizational and operational models. DOD said it plans to address these controls in upcoming revisions to its information security guidance. NIST and CNSS officials acknowledged that differences still exist in the harmonized guidance and stated that the harmonization process will take time and that not all differences will be resolved during the initial harmonization effort. They stated that they have chosen to focus on issues on which they can readily achieve consensus and, if appropriate, plan to resolve remaining issues in a future revision.
While much of the harmonized guidance is already in use for non-national security systems, significant work remains to implement the new guidance on national security systems. For non-national security systems, OMB requires that NIST guidance be implemented within 1 year of its publication. The civilian community has been using previous versions of SP 800-53 since February 2005; thus many of the controls have already been available for use for non-national security systems. However, while DOD and the intelligence community have begun planning to implement the harmonized information security guidance, full implementation may take years to complete. While DOD officials have stated that the concepts and content in the harmonized security guidance are similar to those in existing DOD directives and instructions, the implementation process will require substantial time and effort. Officials said that transitioning to the new security controls will require in-depth planning and additional resources, implementation will be incremental, and it will take a number of years to complete. For example, systems that are currently in development may be transitioned to the harmonized guidance, while systems that are already deployed may be transitioned only if the system undergoes a major change before its next scheduled security evaluation or review. In order for DOD to transition to the new harmonized guidance, it plans to first revise its existing 8500 series of guidance. This process includes upcoming revisions to the information security policy documented in its directive 8500.01 and instruction 8500.2, the certification and accreditation process contained in DOD 8510.01, as well as various additional instructions and guidance. The first major step is to release the revised DOD 8500.01 and 8500.2, based on the harmonized joint task force guidance. As seen in table 2, the estimated release date for these revisions is December 2010.
After this occurs, DOD plans to develop additional implementation and assessment guidance, technical instructions, and other information. The release dates for these additional items have not yet been established because their development or revision is dependent on the final publication of revisions to the 8500 series guidance. Officials said that once DOD issues guidance for implementing the joint task force's harmonized guidance, it will take several more years to incorporate the security controls into the systems' security plans. Specifically, the security plans for legacy systems will not be updated until those systems are due for recertification and reaccreditation, which could take place up to 3 years after updated DOD guidance has been released. Furthermore, because the new guidance has not yet been issued, DOD has not established milestones and performance measures for implementing it. Until the department develops, issues, and implements its revised policy, including guidance on implementation time frames, potential benefits from implementing the harmonized guidance, such as reduced duplication of effort, will not be realized. While the intelligence community has taken steps to transition to the harmonized guidance, it faces challenges in doing so, such as developing detailed transition plans with milestones and resources for implementation. The intelligence community has established broad transition guidance in the form of directives and standards that direct the use of CNSS policy and guidance, which in turn point to the harmonized NIST guidance. The community has also developed a high-level transition plan, based on planned publication dates of harmonized guidance. In addition, guidance issued in May 2010 states that each organization within the intelligence community shall establish its own internal transition plan and timeline based on organization-specific factors.
However, officials stated that the effort required to implement the new controls is significant in terms of the number of systems and their criticality and that implementation must be carried out in a careful, measured way. Furthermore, SP 800-53A, the publication used to assess the controls in SP 800-53, was not published until June 2010. According to CNSS and intelligence community officials, SP 800-53A needed to be issued before these agencies could complete their implementation instructions for SP 800-53 controls. Therefore, CNSS has not established policies with specific time frames for implementation of these controls. The manager of CNSS said that the transition will be incremental and will vary based on the complexity of the systems involved. For example, difficult-to-service embedded systems that have already been authorized, such as satellite systems, may use the current set of controls until the systems are removed from operation. An ODNI review of intelligence community implementation plans identified several potential challenges with implementing harmonized guidance. According to ODNI's overall transition plan issued in November 2009, a review of intelligence agency transition plans raised concerns, including the following:

Most agencies want policies and standards to be in place before implementing the transition.

The transition is likely to take 3 to 5 years after implementation guidance is provided.

A phased approach is desirable and needed, but performance measures and milestones have not been defined.

Resources, and the appropriate expertise, will need to be planned and available to implement the harmonized guidance.

The NSA official responsible for approving the operation of information systems confirmed these concerns. For example, she stated that a phased implementation approach is necessary because the agency would not be able to reaccredit and recertify all of its systems at once.
Additionally, she stated that it is difficult to establish milestones and performance measures because the security of a system cannot easily be quantified. However, federal guidance and our prior work have emphasized the importance of tools such as a schedule and means to track progress to the success of IT efforts. Until supporting implementation plans with milestones, performance measures, and identified resources are developed and approved to implement the harmonized guidance, the benefits realized by the intelligence community from the harmonization effort will likely be constrained. In prior work, we identified key practices that can help federal agencies to enhance and sustain collaboration efforts, such as the joint task force effort to harmonize information security guidance. The practices include the following:

Defining and articulating a common outcome. The compelling rationale for agencies to collaborate can be imposed externally through legislation or other directives or can come from the agencies' own perceptions of the benefits they can obtain from working together.

Establishing mutually reinforcing or joint strategies to achieve the outcome. Agency strategies that work in concert with those of their partners help in aligning the partner agencies' activities, core processes, and resources to accomplish the common outcome.

Identifying and addressing needs by leveraging resources. Collaborating agencies bring different levels of resources and capacities to the effort. By assessing their relative strengths and limitations, collaborating agencies can look for opportunities to address resource needs by leveraging each other's resources, thus obtaining additional benefits that would not be available if they were working separately.

Agreeing upon agency roles and responsibilities. Collaborating agencies should work together to define and agree on their respective roles and responsibilities, including how the collaborative effort will be led. In doing so, agencies can clarify who will do what, organize their joint and individual efforts, and facilitate decision making.

Establishing compatible policies, procedures, and other means to operate across agency boundaries. To facilitate collaboration, agencies need to address the compatibility of artifacts such as standards and policies that will be used in the collaborative effort.

Developing mechanisms to monitor, evaluate, and report the results of collaborative efforts. Federal agencies engaged in collaborative efforts need to create the means to monitor and evaluate their efforts to enable them to identify areas for improvement. Reporting on these activities can help key decision makers within the agencies, as well as clients and stakeholders, to obtain feedback for improving both policy and operational effectiveness.

Reinforcing agency accountability for collaborative efforts through agency plans and reports. Federal agencies can use their strategic and annual performance plans as tools to drive collaboration with other agencies and partners and establish complementary goals and strategies for achieving results. Such plans can also reinforce accountability for the collaboration by aligning agency goals and strategies with those of the collaborative efforts.

Joint task force efforts in each of these key practice areas are described in table 3. To date, the task force has been successful in its efforts while having few documented or formalized processes. Task force officials stated that they believe this structure has been very effective for harmonizing information security guidance and that the success of the effort can be measured by the results achieved to date. These include the publication of three documents, planned publication of two more, and proposed future development of two additional ones.
They also stated that the distinction between national security systems and non-national security systems has existed for many years, and this was the first successful effort to harmonize guidance. Officials said that key to the project's success has been strong management and technical leadership. Participants also stated that they felt the effort's informality, flexibility, and agility were strengths. Participants acknowledged that fuller implementation of key practices, such as documenting identification of needs and leveraging of resources to address those needs, agreed-to roles and responsibilities, and monitoring and reporting on the results of its efforts, was missing; however, the officials stated that the task force has been a significant success and that more formal management practices could have been counterproductive and ineffective. For example, the task force leader stated that establishing these practices before the task force had demonstrated results would have been difficult. He stated that now that task force members have established positive relationships and become dependent on each other for technical knowledge, establishing more formal management practices may be easier. While the task force's approach to managing the harmonization effort may not have hindered development to date, plans for future publications have slipped, in part because of the challenges of coordinating such a cross-agency effort. As the task force continues its efforts and approaches additional areas, fuller implementation of key practices, such as those that assign responsibilities and measure progress, would likely enhance its ability to sustain harmonization efforts as personnel change and resources are allocated among other agency activities. Efforts to harmonize policies and guidance for national security systems and non-national security systems have made progress in producing elements of a unified information security framework.
The guidance published and scheduled for publication by the joint task force constitutes a key part of the foundation of the unified framework. The task force has proposed two additional publications for consideration and acknowledged the possibility of future areas for collaboration, but plans for additional activities have yet to be finalized. The harmonization effort has the potential to reduce duplication of effort and allow more effective implementation of information security controls across interconnected systems. To fully realize the benefits of the harmonized guidance, additional work remains to implement it. For example, supporting guidance and dates for implementation and performance measures have not been established for DOD and the intelligence community. Although, to date, the lack of documented management practices and processes has not significantly hindered the task force, as more difficult areas for harmonization are addressed, personnel change, and other agency priorities make demands upon resources, implementation of key practices for collaboration may help the task force further its progress. To assist the joint task force in continuing its efforts to establish harmonized guidance and policies for national security systems and non-national security systems, we are making the following five recommendations. We recommend that the Secretary of Commerce direct the Director of NIST to collaborate with CNSS to complete plans to identify future areas for harmonization efforts, and consider how implementing elements of key collaborative practices, such as documenting roles and responsibilities, needs, resources, and monitoring and reporting mechanisms, may serve to sustain and enhance the harmonization effort.
We also recommend that the Secretary of Defense direct CNSS to collaborate with NIST to complete plans to identify future areas for harmonization efforts; collaborate with its member organizations, including both DOD and the intelligence community, to include milestones and performance measures in their plans to implement the harmonized CNSS policies and guidance; and collaborate with NIST to consider how implementing elements of key collaborative practices, such as documenting roles and responsibilities, needs, resources, and monitoring and reporting mechanisms, may serve to sustain and enhance the harmonization effort. In written comments on a draft of this report, the Secretary of Commerce concurred with our conclusions that the Departments of Commerce and Defense update plans for future collaboration, establish timelines for implementing revised guidance, and fully implement key practices for interagency collaboration in the harmonization effort. In a separate e-mail message, the NIST audit liaison clarified that Commerce also concurred with each recommendation. The department also provided technical comments, which we incorporated in the draft as appropriate. Comments from the Department of Commerce are reprinted in appendix II. In oral comments on a draft of this report, the Senior Policy Advisor for DOD’s Information Assurance and Strategy Directorate, within the Office of the Assistant Secretary of Defense (Networks and Information Integration)/DOD CIO, stated that DOD concurred with our recommendations. In addition, the CNSS manager stated in an e-mail message that the report is complete and that CNSS concurred without comment. We also provided a draft of this report to OMB and ODNI, to which we did not make recommendations, and they both stated that they had no comments. We are sending copies of this report to interested congressional committees, the Secretary of Commerce, and the Secretary of Defense. 
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6244 or at wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objective of our review was to assess the progress of federal efforts to harmonize policies and guidance for national security systems and non-national security systems. To do this, we focused on the Joint Task Force Transformation Initiative Interagency Working Group and supporting agencies within the civil, defense, and intelligence communities. Specifically, we identified actions taken and planned by the Joint Task Force Transformation Initiative Interagency Working Group to harmonize information security guidance. To do this, we reviewed program plans, schedules, and performance measures related to the harmonization efforts. We also obtained and reviewed current information technology security policies, guidance, and other documentation for national security systems and non-national security systems and then conducted interviews with officials from the National Institute of Standards and Technology (NIST), Committee on National Security Systems (CNSS), Department of Defense (DOD), Office of the Director of National Intelligence (ODNI), National Security Agency (NSA), and Office of Management and Budget (OMB) to identify differences in existing guidance and plans to resolve these differences. We also assessed efforts against criteria including prior GAO work on key practices to sustain and enhance cross-agency collaboration. We performed this assessment by reviewing documents and interviewing agency officials from NIST, CNSS, DOD, ODNI, NSA, and OMB.
We identified evidence of key practices, such as documented roles and responsibilities, and mechanisms to monitor, evaluate, and report on progress, and verified our assessment with agency officials. We conducted this performance audit from February 2010 through September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In addition to the contact name above, individuals making contributions to this report included Vijay D’Souza (assistant director), Neil Doherty, Thomas J. Johnson, Lee McCracken, David Plocher, Harold Podell, and John A. Spence.
Historically, civilian and national security-related information technology (IT) systems have been governed by different information security policies and guidance. Specifically, the Office of Management and Budget and the Department of Commerce's National Institute of Standards and Technology (NIST) established policies and guidance for civilian non-national security systems, while other organizations, including the Committee on National Security Systems (CNSS), the Department of Defense (DOD), and the U.S. intelligence community, have developed policies and guidance for national security systems. GAO was asked to assess the progress of federal efforts to harmonize policies and guidance for these two types of systems. To do this, GAO reviewed program plans and schedules, analyzed policies and guidance, assessed program efforts against key practices for cross-agency collaboration, and interviewed officials responsible for this effort. Federal agencies have made progress in harmonizing information security policies and guidance for national security and non-national security systems. Representatives from civilian, defense, and intelligence agencies established a joint task force in 2009, led by NIST and including senior leadership and subject matter experts from participating agencies, to publish common guidance for information systems security for national security and non-national security systems. The harmonized guidance is to consist of NIST guidance applicable to non-national security systems and authorized by CNSS, with possible modifications, for application to national security systems. This harmonized security guidance is expected to result in less duplication of effort and more effective implementation of controls across multiple interconnected systems. The task force has developed three initial publications. 
These publications, among other things, provide guidance for applying a risk management framework to federal systems, identify an updated catalog of security controls and guidelines, and update the existing security assessment guidelines for federal systems. CNSS has issued an instruction to begin implementing the newly developed guidance for national security systems. Two additional joint publications are scheduled for release by early 2011, with other publications under consideration. Differences remain between guidance for national security and non-national security systems in such areas as system categorization, selection of security controls, and program management controls. NIST and CNSS officials stated that these differences may be addressed in the future but that some may remain because of the special nature of national security systems. While progress has been made in developing the harmonized guidance, additional work remains to implement it and ensure continued progress. For example, task force members have stated their intent to develop plans for future harmonization activities, but these plans have not yet been finalized. In addition, while much of the harmonized guidance incorporates controls and language previously developed for use for non-national security systems, significant work remains to implement the guidance for national security systems. DOD and the intelligence community are developing agency-specific guidance and transition plans for implementing the harmonized guidance, but, according to officials, actual implementation could take several years to complete. Officials stated that this is primarily due to both the large number and criticality of the systems that must be reauthorized under the new guidance. Further, the agencies have yet to fully establish implementation milestones and lack performance metrics for measuring progress. 
Finally, the harmonization effort has been managed without full implementation of key collaborative practices, such as documenting identified needs and leveraging resources to address those needs, agreed-to agency roles and responsibilities, and processes to monitor and report results. Task force members stress that their informal, flexible approach has resulted in significant success. Nevertheless, further implementation of key collaborative practices identified by GAO could facilitate further progress. GAO is recommending that the Secretary of Commerce and the Secretary of Defense, among other things, update plans for future collaboration, establish timelines for implementing revised guidance, and fully implement key practices for interagency collaboration in the harmonization effort. In comments on a draft of this report, Commerce and DOD concurred with GAO's recommendations.
Depot maintenance is the materiel maintenance or repair requiring overhauling, upgrading, or rebuilding of parts, assemblies, or subassemblies, and the testing and reclamation of equipment, regardless of the source of funds for the maintenance or repair or the location at which the maintenance or repair is performed. The Air Force maintains three depots that are designed to retain, at a minimum, a ready, controlled source of technical competence and resources to meet military requirements. These depots work on a wide range of weapon systems and military equipment. Table 1 describes the location, principal work, workload, and number of personnel for each depot. Depot maintenance activities are complex and require deliberate planning in order to efficiently and effectively meet future requirements. Our prior work has shown that organizations need effective strategic management planning in order to identify and achieve long-term goals. We have identified key elements that should be incorporated into strategic plans to help establish a comprehensive, results-oriented management framework:

1. Mission statement: A statement that concisely summarizes what the organization does, presenting the main purposes for all its major functions and operations.
2. Long-term goals: A specific set of policy, programmatic, and management goals for the programs and operations covered in the strategic plan. The long-term goals should correspond to the purposes set forth in the mission statement and develop with greater specificity how an organization will carry out its mission.
3. Strategies to achieve the goals: A description of how the goals contained in the strategic plan are to be achieved, including the operational processes; skills and technology; and the human, capital, information, and other resources required to meet these goals.
4. Use of metrics to gauge progress: A set of metrics that will be applied to gauge progress toward attainment of each of the plan's long-term goals.
5. Key external factors that could affect goals: Key factors external to the organization and beyond its control that could significantly affect the achievement of the long-term goals contained in the strategic plan. These external factors can include economic, demographic, social, technological, or environmental factors, as well as conditions or events that would affect the organization's ability to achieve its goals.
6. Program evaluations: Assessments, through objective measurement and systematic analysis, of the manner and extent to which programs associated with the strategic plan achieve their intended goals.
7. Stakeholder involvement in developing the plan: Consideration of the views and suggestions—solicited during the development of the strategic plan—of those entities affected by or interested in the organization's activities.

In addition to our work on strategic planning, recent legislation has focused attention on DOD's and the military departments' maintenance strategies and plans. The National Defense Authorization Act for Fiscal Year 2009 requires the Secretary of Defense to contract for a study, which, among other things, will address DOD's and the military departments' life-cycle maintenance strategies and implementation plans on a variety of topics, including outcome-based performance management objectives, workload projection, workforce, and capital investment strategies. Additionally, the act requires that the study examine "the relevant body of work performed by the Government Accountability Office." OUSD (AT&L) officials told us that they expect the final report from this study to be delivered to Congress in December 2010. While the Air Force plan focuses Air Force efforts on weapon system and equipment operational availability, it does not fully address the elements of a results-oriented management framework, nor does it clearly link information between the two planning documents.
The Air Force plan fully addresses one of the seven elements, partially addresses four elements, and does not address the remaining two elements that our prior work has shown to be critical in a comprehensive strategic plan. Table 2 summarizes the extent to which the Air Force's depot maintenance strategic plan addresses the elements of a results-oriented management framework. Additionally, the plan's documents are not clearly linked to one another and the relationship between corresponding sets of information in the documents is sometimes not transparent. As a result of these weaknesses, the Air Force's ability to use its plan as a decision-making tool to meet future challenges may be limited. The one element the plan fully addresses is the depot maintenance mission statement. The comprehensive mission statement summarizes the Air Force depots' overarching purpose and addresses their major functions and operations. In prior reports on strategic planning, we have noted that a mission statement is important because it provides focus by explaining why an organization exists and what it does. The Air Force depots' overarching purpose, as identified in the plan, is to "ensure that Air Force weapon systems and equipment are operational and available to support the Air Force's mission." This mission statement is results-oriented and corresponds with the more general department-wide mission statement in DOD's Depot Maintenance Strategy and Implementation Plans, which states that the mission of DOD depots is to meet the national security and materiel readiness challenges of the 21st century. The Air Force's plan partially addresses four of the results-oriented management framework elements: long-term goals; strategies to achieve the goals; use of metrics to gauge progress; and stakeholder involvement in developing the plan.
With regard to the long-term goals, the plan includes five: maintain a responsive organic industrial base; ensure a highly qualified, technically competent, and professional workforce; provide facilities necessary to support existing and projected depot maintenance workloads; maintain robust public- and private-sector capabilities by leveraging public-private partnering; and transform depot processes through continuous process improvement and logistics transformation. While the plan includes these goals, it does not specify interim goals, and it does not specify the time frames for monitoring and achieving the long-term goals. For example, the plan discusses the goal of leveraging public-private partnerships to maintain robust public- and private-sector relationships and ensure access to complementary dual depot maintenance capabilities; however, it does not identify interim goals or time frames for achieving this partnering goal. Similarly, the plan discusses the Air Force's strategies to achieve its five long-term goals, but does not address the resources that will be needed to achieve them. For example, the plan identifies a strategy to achieve its infrastructure goal. Specifically, the plan states that the Air Force will make capital investments in its depots in order to provide them with the state-of-the-art, environmentally compliant, efficiently configured, and properly equipped facilities to support existing and projected depot maintenance workload. However, needed resources—such as capital, equipment, and technology—are not specified to facilitate implementation of this strategy. While the plan includes some metrics, it does not discuss any metrics that directly assess the degree to which the depots are achieving the plan's goals. The plan discusses general life-cycle performance metrics to assess overall depot performance. Air Force officials told us that these metrics indirectly gauge progress toward achieving each of the plan's five long-term goals.
For example, the plan discusses a quality defect rate metric, which measures the variance between quality deficiency reports and the quality defect rate standard, but the plan does not describe how the depots would measure or use the metric to gauge progress toward achieving one or more of the plan’s long-term goals. Air Force officials explained that a performance problem indicated by any of its metrics would lead the Air Force to monitor overall performance and then identify the relevant area (e.g., workforce) contributing to the problem. These officials told us that the Air Force would then adjust performance in the relevant area to achieve the corresponding goal. However, this indirect process is not discussed in the plan. Moreover, the plan does not discuss the desired levels for each of these metrics. While the Air Force involved many relevant stakeholders in the development of its depot maintenance strategic plan, it did not involve depot officials directly in all aspects of the process. The Air Force developed its plan primarily by using inputs from the following stakeholders: the Office of the Assistant Secretary of the Air Force for Installations, Environment, and Logistics; the Office of the Assistant Secretary of the Air Force for Acquisition; the Office of the Deputy Chief of Staff of the Air Force for Logistics, Installations and Mission Support; and Air Force Materiel Command. Air Force depot officials said they were not involved in all aspects of the development of the plan, even though their depots are directly affected by the plan. For example, depot officials indicated that they had limited or no involvement in the development of the Strategy. The Air Force’s plan does not address two of the results-oriented management framework elements: key external factors and program evaluations. 
The plan does not identify any key factors external to the Air Force and beyond its control that could significantly affect the achievement of its five long-term goals. Our prior work on developing a results-oriented management framework reported that external economic, demographic, social, technical, or environmental factors may influence whether an organization achieves its goals. Moreover, we noted that a strategic plan should describe each such factor and indicate how it could affect achievement of the plan's goals. Even though the Air Force plan did not describe any such factors, Air Force officials have acknowledged elsewhere external factors that could affect depot maintenance. For example, in 2007, the Secretary and Chief of Staff of the Air Force described the harsh environments the Air Force is currently operating in—including the heat and sand in Iraq's deserts—during testimony before the House Armed Services Committee. Further, obtaining technical data rights from private-sector manufacturers is another example of external factors not identified in the plan that could affect depot maintenance. Depot officials told us that technical data are sometimes not directly available to the depots and that without them their work is more challenging. Similarly, the plan does not identify how the Air Force will evaluate its programs and use the results of these evaluations to adjust the plan's long-term goals and strategies to achieve desired levels of performance. The plan indicates that the Air Force must continuously validate and update its depot maintenance strategic plan to meet operational depot maintenance requirements; however, the plan does not describe the method for conducting this process. The contents of the Air Force depot maintenance Strategy and Master Plan are not clearly linked to one another, which may make the collective plan difficult to use as a decision-making tool.
OUSD (AT&L) instructed each service to publish its depot maintenance strategic plan in a single depot maintenance-specific document or as an integral part of one or more documents having a broader scope. Air Force officials told us that they intended the Strategy to provide the strategic vision for Air Force depot maintenance and the Master Plan to complement the Strategy by providing the details for executing the strategic vision. We found that the linkage of information in the plan's two documents was not always clear. For example, the goals listed in the Strategy are not clearly repeated in the Master Plan, and the Master Plan includes goals that are unrelated to depot maintenance; one such goal is to improve the strategic acquisition of capabilities to ensure warfighters have the weapons and equipment needed to defend the United States. In addition, the Master Plan does not clearly align its content to the five long-term goals described in the Strategy. Although a table in an appendix to the Master Plan provides some information indicating how the content of the Strategy and Master Plan are aligned, the appendix does not clarify how the two documents are linked to one another or how they are used as a collective plan. An Air Force official acknowledged the weaknesses in the linkages between the plan's two documents and said that the Air Force intends to ensure effective alignment of the plan's documents in future versions. Additionally, Air Force officials told us that they chose not to include information in the plan that was already contained in external documents. For example, they told us that other Air Force documents (such as Air Force budget documents and the servicewide strategic plan) address key external factors that could affect the achievement of the plan's goals. The Air Force plan, however, does not refer to these external documents.
Without clear linkages between the two primary planning documents and other related documents, the utility of the Air Force's plan as a decision-making tool to meet future challenges may be limited. OUSD (AT&L) did not use an effective oversight mechanism to systematically evaluate the Air Force's plan to determine whether it fully addresses all needed elements. DOD's Depot Maintenance Strategy and Implementation Plans states that the Depot Maintenance Working Integrated Process Team would monitor the development and subsequent execution of the services' depot maintenance strategic plans on a continuing basis. However, that team did not review any of the services' plans. OUSD (AT&L) officials representing the Assistant Deputy Under Secretary of Defense for Maintenance Policy and Programs told us that, in practice, the Integrated Process Team did not assume responsibility for oversight of the plan, but instead monitored selected issues that the services' plans describe, such as the implementation of some specific process improvement initiatives. The Maintenance Policy and Programs officials told us that they reviewed the Air Force plan through a process consisting of informal meetings and conversations with service representatives. These OUSD (AT&L) officials told us that, through their review, they found that the Air Force plan was a "good first start" but did not address all needed elements. However, Air Force officials told us that they were not informed that the plan did not fully address elements of a results-oriented management framework, nor were they asked to revise the plan. Additionally, Maintenance Policy and Programs officials were unable to provide us with documentation of their review of the Air Force plan.
At the time the Air Force developed its plan, it lacked an effective oversight mechanism to help ensure that its plan fully addresses the elements of a results-oriented management framework and that the plan's two documents are clearly linked to one another. Air Force headquarters officials responsible for the plan did not review the Strategy or the Master Plan to ensure that these documents fully address the elements of a results-oriented management framework. Furthermore, the Air Force headquarters officials did not provide direction to the Air Force Materiel Command (AFMC)—the office responsible for the Master Plan—on strategic planning elements that should be incorporated in the Master Plan. Also, AFMC officials told us that they received no instruction to submit the Master Plan to another Air Force office or other oversight body for review. Since developing the current plan, the Air Force established the Depot Maintenance Strategic Planning Integrated Process Team in June 2008 to improve its future depot maintenance strategic plans. According to the team's charter, this process will be used to validate and update the depot maintenance strategic plan and help align the Strategy and Master Plan with one another and with DOD's Depot Maintenance Strategy and Implementation Plans. Moreover, while the Air Force conducts monthly reviews of depot maintenance programs, which officials told us help provide oversight of the plan's implementation, these reviews do not assess progress in achieving the plan's long-term goals. While Air Force officials responsible for the plan acknowledged some of the plan's incomplete information, they told us that they believe the plan more fully addresses the results-oriented management framework elements than our analysis reflects.
According to these officials, although the plan does not address some elements explicitly, they are implied in the plan's discussion of various initiatives and processes, and experienced professionals involved in Air Force depot maintenance would be able to recognize these elements. However, because the plan does not explicitly address these elements, they may not be clear to individuals not involved in developing the plan. While the Air Force depot maintenance strategic plan describes many initiatives and programs important to the Air Force depots, it is not fully responsive to OUSD (AT&L)'s direction to the services that was designed to provide the services with a framework to meet future challenges. Specifically, the plan does not fully address logistics transformation, core logistics capability assurance, workforce revitalization, and capital investment—the four areas that OUSD (AT&L) directed each service, at a minimum, to include in its plan. Within these four general areas are 10 issues that OUSD (AT&L) also identified. The Air Force's plan partially addresses 8 and does not address the remaining 2. Table 3 summarizes our evaluation of the extent to which the Air Force plan addresses each of the 10 issues. As discussed for the elements of a results-oriented management framework, OUSD (AT&L) and the Air Force did not identify missing or partially addressed issues because neither used effective oversight to help ensure that OUSD (AT&L)'s direction for developing the plan was carried out. Among other things, DOD's Depot Maintenance Strategy and Implementation Plans states that the DOD strategy will ensure that DOD is postured to meet the national security and materiel readiness challenges of the 21st century. However, at present, information missing from the Air Force plan may limit the service's assurance that its depots are postured and resourced to meet future maintenance requirements.
The Air Force plan partially addresses each of the three logistics transformation issues that OUSD (AT&L) directed the services to discuss in their plans. In this area, OUSD (AT&L) directed the services to discuss the future roles and capabilities of the depots, transformation actions, and approaches for integrating various depot capabilities in their plans. The plan generally discusses the future roles of the depots, but it does not discuss projected future capabilities of the Air Force depots or how those capabilities will be measured. The plan states that the general role of the depots is to ensure Air Force weapon systems and equipment are operational and available to support the Air Force’s missions. However, the plan is silent on the depots’ future capabilities despite changes that DOD had planned to make to the Air Force’s force structure. For example, the February 2006 Quadrennial Defense Review Report noted that DOD had planned to reduce the number of Air Force B-52 aircraft by about 40 percent to 56. Additionally, the plan partially addresses actions the Air Force is taking to transform its depots. For example, the plan discusses continuous process improvement initiatives such as the High Velocity Maintenance program, in which the Air Force expects to schedule depot maintenance for aircraft more frequently but for shorter periods. However, the plan does not discuss how the Air Force intends to change the structure or organization of its depots to transform them to achieve the Air Force vision of the depots’ future capabilities. Moreover, the plan partially addresses the management approach for integrating various depot maintenance capabilities, including public- and private-sector sources, as well as joint, inter-service, and multinational capabilities. 
To address public- and private-sector sources, the plan states that partnering with the private sector to ensure access to complementary or dual depot maintenance capabilities is an integral element of the Air Force strategy. However, the plan does not discuss the management approach for integrating joint, inter-service, or multinational capabilities. Because the plan does not discuss the approach for integrating these capabilities, it is unclear whether the Air Force is positioned to reduce redundancies and take advantage of potential cost-saving measures. The Air Force plan partially addresses both core logistics capability assurance issues. For one of the two issues, the plan partially addresses the OUSD (AT&L) direction to discuss actions taken or contemplated to (1) identify core requirements upon program initiation, (2) ensure that depot source of repair decisions are made upon program initiation, (3) encourage the formation of public-private partnerships, and (4) identify and rectify core capability deficiencies. The plan describes tools the Air Force uses to identify core requirements, including processes, models, and guidance. For example, the plan states that the Air Force uses the biennial core computation process and other tools to generate Air Force core requirements. To address OUSD (AT&L)'s direction to discuss depot source of repair decisions, the plan states that the Air Force uses the strategic source of repair process, the source of repair assignment process, and the depot maintenance inter-service processes. The plan also discusses public-private partnerships and states that AFMC and the depots intend to develop a standard process for public-private partnerships to ensure compliance with DOD and Air Force directives on public-private partnerships.
To address OUSD (AT&L)'s direction to discuss actions to identify and rectify core deficiencies, the plan notes that if core target shortfalls exist, the depots will provide plans to mitigate the risk, but the plan does not explain how the Air Force will do so. Furthermore, the plan does not discuss concerns we have previously reported on DOD's biennial core computation process. For example, we reported in 2009 that the Air Force used a method for calculating core capability deficiencies that differed from the method used by the other services and that officials from the Office of the Secretary of Defense said that the Air Force approach was not appropriate. The second of the two core logistics capability assurance issues, estimating depot workload, is also only partially addressed in the Air Force plan. To address the depot maintenance workload estimating portion of this issue, the plan describes a process in which Air Force organizations, such as the Centralized Asset Management Office, provide input into the workload review process. The plan goes on to state that the workload review process determines future depot workload. However, the Air Force plan does not discuss the OUSD (AT&L) direction to address the projected effects of weapon system retirements or bed-down (i.e., the act or process of locating aircraft at a particular base). Yet the Air Force plans to substantially reduce some portions of its fleet. In May 2009, the Air Force announced that it would accelerate the retirement of 249 older aircraft, including 112 F-15s and 134 F-16s. While these retirements will affect the workload at the Air Force depots at Warner-Robins, Georgia, and Ogden, Utah, the Master Plan issued 2 months earlier does not include any information on the planned changes.
Moreover, the plan does not discuss new aircraft that will replace those being retired, the future workload estimates associated with any potential replacement aircraft, or the processes that will be used to determine which facilities will obtain any new work. The Air Force plan partially addresses both reengineering and replenishment strategies but does not contain information on the OUSD (AT&L)-directed workforce replenishment requirements. Regarding the reengineering strategies issue, the plan discusses actions the Air Force is taking to reengineer its existing employees’ skills to satisfy new capability requirements, but it does not discuss actions the service is taking to identify new skill requirements. To address reengineering existing employees’ skills, the plan indicates that the depots are partnering with local universities and technical schools to provide training. However, it does not directly address the Air Force actions to identify new skill requirements. Instead of providing details on new skill requirements, the plan makes a general statement that the Air Force’s workforce skill capabilities are continuously assessed to determine future training and skill requirements. Likewise, it is silent on specific actions the Air Force is taking to carry out this assessment. The plan does not discuss the method the Air Force will use to forecast workforce replenishment requirements, nor the quantitative data needed to project annual hires as well as losses due to retirements and other reasons. Although the plan discusses a manpower and capability program that determines the required personnel for future work, the plan does not follow the OUSD (AT&L) direction to discuss the methods or sources of quantitative data the Air Force uses to determine turnover and the timing of the turnover. 
To address the replenishment strategies issue, the plan describes actions the Air Force is taking to train employees, but it does not discuss how the Air Force is recruiting new employees, nor does it discuss a comprehensive management approach for establishing and implementing an employee replenishment strategy. The plan discusses, for example, a university and vocational school partnership program to train depot employees. However, it is silent on the Air Force's recruiting methods (e.g., for hard-to-fill positions) and any servicewide employee replenishment strategy. The Air Force plan's limited and missing information for the three issues in the workforce revitalization area is noteworthy in the context of our previous findings on the DOD depot maintenance workforce and of information in OUSD (AT&L)'s document directing the services to provide the plans. In 2003, we reported that DOD faced significant management challenges in succession planning to maintain a skilled workforce at its depot maintenance facilities. Among other challenges, we reported that relatively high numbers of civilian workers at maintenance depots were nearing retirement age. DOD's Depot Maintenance Strategy and Implementation Plans makes a similar point. It states that DOD's depot maintenance community, like the rest of the federal government, faces increasing numbers of retirements as the "baby boom" generation reaches retirement eligibility. It goes on to state that the retirement-eligible population within the depot maintenance workforce and forecasted annual retirements are expected to increase annually for the remainder of the decade. This dynamic—coupled with the highly skilled nature of some depot maintenance work and the length of time required to train new employees—creates hiring, training, and retention challenges.
Without a discussion that acknowledges these and other such workforce challenges, it is unclear how well the Air Force is positioned to optimally address the challenges that its depots face. The Air Force plan partially addresses the capital investment issue of quantifying current capabilities but does not address the other issue: capital investment benchmarks. Neither the benchmarks for evaluating the adequacy of investment funding nor the Air Force's basis for selecting the benchmarks is in the Air Force's plan despite OUSD (AT&L)'s direction to address this issue. Even though the plan does not address benchmarks, it notes that the Air Force intends to continue making an annual capital investment of at least 6 percent of revenue, as required by law, to sustain depot infrastructure requirements. Moreover, an OUSD (AT&L) official mentioned that the Air Force's citing of the 6 percent capital investment should be seen as addressing the benchmark issue. The plan partially addresses the issue of quantitatively articulating current capabilities, current and projected deficiencies, and the capabilities that planned investment will provide. The plan notes that the Air Force targets its investments to the highest priority needs to support the warfighter. While the plan also discusses an infrastructure investment prioritization process, it does not describe the method for prioritizing needed investments. Similarly, the plan notes that the Air Force invests in facility restorations and modernizations and discusses the Capital Purchase Program for equipment, restoration, and modernization programs for facilities, transformation initiatives, and military construction. However, the plan does not present quantitative data on the projected funding (or shortfalls) for facilities and equipment. Capital investment in DOD depots has been an issue of concern in our prior work.
For example, in 2001, we reported that capital investments in depot plant equipment had declined sharply in the mid-1990s as a result of defense downsizing and depot closures and consolidations. Because of this lack of capital investment, DOD's depots did not keep up with the latest technologies. In subsequent years, funding levels increased as the services recognized the need to modernize their depots. OUSD (AT&L) officials told us that the primary intent of OUSD (AT&L)'s direction was to provide a framework for the services to meet challenges in the future and that the issues identified in the four areas specified in the direction were designed to address those challenges. Further, DOD's Depot Maintenance Strategy and Implementation Plans states that each service will conduct depot maintenance strategic planning that focuses on achieving the DOD depot maintenance strategy and that the DOD strategy will ensure that DOD is postured to meet the national security and materiel readiness challenges of the 21st century. However, the Air Force's plan does not provide a comprehensive, results-oriented management framework to efficiently and effectively inform the Air Force's future decisions, nor does it fully respond to OUSD (AT&L)'s direction that was designed to provide a framework for the services to overcome four general areas of future challenges. Furthermore, the limited linkage of information in the Air Force's two planning documents may reduce the utility of the plan as a decision-making tool to meet future challenges. A primary reason for not fully addressing these framework elements and linkages in the plan was that OUSD (AT&L) and the Air Force did not have effective oversight mechanisms in place to promptly identify the incomplete information, communicate such findings to the plan developers, and monitor the revision of the plan to ensure that the limitations had been addressed.
These concerns about the content, linkage, and oversight resulted in a plan that missed an opportunity to identify a more complete Air Force vision for the effective and efficient operation of its depots in the future. For example, had the Air Force identified and implemented systematic program evaluation and a thorough set of metrics to directly assess goal achievement, it would have additional tools for reacting in a timely manner to findings from the ongoing congressionally mandated study on depot capabilities. Most importantly, a comprehensive plan could have resulted in the Air Force having more assurance that its depots are viably positioned and have the maintenance workforce, equipment, facilities, and funds they need to meet current and future requirements. To provide greater assurance that Air Force depots will be postured and resourced to meet future maintenance requirements, we recommend that the Secretary of Defense direct the Secretary of the Air Force to take the following three actions to revise the Air Force's depot maintenance strategic plan:

- Fully and explicitly address all elements needed for a comprehensive results-oriented management framework, including those elements that we have identified as partially addressed or not addressed in the current plan.
- Demonstrate clear linkages among the depot maintenance strategic plan's component documents, should the Air Force decide to publish its revised plan in multiple documents.
- Fully and explicitly address OUSD (AT&L)'s direction that provides a framework for the services to meet future depot maintenance challenges.
To strengthen the oversight mechanism for depot maintenance strategic planning, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Secretary of the Air Force to develop and implement procedures to review revisions of the depot maintenance strategic plan to ensure they fully address all key elements of a results-oriented management framework, explicitly address any OUSD (AT&L) direction for the plans, and periodically assess progress and corrective actions to the extent needed in meeting the plans’ goals. In oral comments on a draft of this report, DOD concurred with our four recommendations to provide greater assurance that Air Force depots will be postured and resourced to meet future maintenance requirements. The department concurred with our recommendation to direct the Secretary of the Air Force to revise the Air Force’s depot maintenance strategic plan to fully and explicitly address all elements needed for a comprehensive results-oriented management framework. DOD stated that it will direct the Air Force and the other services to more clearly address all elements needed for a results-oriented strategy in the next OUSD (AT&L) request to services to update their depot maintenance strategic plans. DOD also concurred with our recommendation to direct the Secretary of the Air Force to revise the Air Force’s depot maintenance strategic plan to demonstrate clear linkages among the plan’s component documents, should the Air Force decide to publish its revised plan in multiple documents. In its response, DOD stated that it will direct the Air Force and the other services to more clearly demonstrate the linkages of the Air Force plan to the DOD depot maintenance strategic plan in the next OUSD (AT&L) request to the services to update their depot maintenance strategic plans. 
While the department concurred with our recommendation, it did not discuss directing the Air Force to more clearly demonstrate linkages among the Air Force plan’s component documents, which was the focus of our recommendation. Therefore, DOD may need to take further action to explicitly direct the Secretary of the Air Force to more clearly demonstrate linkages among the Air Force plan’s component documents, should the Air Force decide to publish its revised plan in multiple documents. The department also concurred with our recommendation to direct the Secretary of the Air Force to revise the Air Force’s depot maintenance strategic plan to fully and explicitly address OUSD’s (AT&L) direction that provides a framework for the services to meet future depot maintenance challenges. DOD stated that it will direct the Air Force and the other services to explicitly address the OUSD (AT&L) direction for depot maintenance strategic planning in the next OUSD (AT&L) request to the services to update their depot maintenance strategic plans. Additionally, DOD concurred with our recommendation to direct the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Secretary of the Air Force to develop and implement procedures to review revisions of the depot maintenance strategic plan to ensure they fully address all key elements of a results-oriented management framework, explicitly address any OUSD (AT&L) direction for the plans, and periodically assess progress and corrective actions to the extent needed in meeting the plan’s goals. In its response, DOD stated that it will direct the Air Force and the other services to explicitly address the procedures noted in our recommendation. DOD also said that OUSD (AT&L) would further develop a process to periodically assess progress and corrective actions to ensure the Air Force and the other services are meeting OUSD (AT&L) and their own plan’s goals.
DOD also provided technical comments that we have incorporated into this report where applicable. We are sending copies of this report to the Secretary of Defense and the Secretary of the Air Force. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me at (202) 512-8246 or edwardsj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In this report, we addressed two questions: (1) To what extent does the Air Force’s depot maintenance strategic plan address key elements of a results-oriented management framework? and (2) To what extent does the Air Force’s depot maintenance strategic plan address direction from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (OUSD (AT&L)) that was designed to provide a framework for the services to meet future challenges? We limited the scope of our analysis to the current Air Force depot maintenance strategic plan, which includes both the April 2008 Air Force Depot Maintenance Strategic Plan and the March 2009 Air Force Depot Maintenance Master Plan. We used the same set of methodological procedures to answer both questions, and each type of procedure was performed simultaneously for the two questions. For our analysis, we first reviewed relevant laws; Department of Defense (DOD) and Air Force regulations governing depot maintenance; and depot maintenance-related reports issued by agencies and organizations including GAO, DOD, the Logistics Management Institute, and RAND. We then used qualitative content analyses to compare the Air Force plan against criteria from the seven elements of a results-oriented management framework and the 10 issues listed in the OUSD (AT&L) direction for depot maintenance strategic plans.
To conduct these analyses, we first developed a data collection instrument that incorporated these two types of criteria. One team member then analyzed the plan using this instrument. To verify preliminary observations from this initial analysis, a second team member concurrently conducted an independent analysis of the plan. We compared observations of the two analysts and discussed any differences. We reconciled the differences with the assistance of analysts from the team that was evaluating the Navy depot maintenance strategic plan. We subsequently met with Air Force officials to confirm our understanding of the plan and sought additional information where our preliminary analyses revealed that the plan partially addresses or does not address the criteria. We also interviewed and obtained documentary evidence from relevant OUSD (AT&L) officials regarding its oversight of the services’ plans. We additionally interviewed depot leaders and strategic planning personnel during site visits at two of the three Air Force depots to obtain first-hand information on issues the depots face. We also obtained data on workload and personnel from the Air Force and determined that these data were sufficiently reliable for our report. The organizations we interviewed are listed in table 4. We conducted this performance audit from July 2009 through May 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Key contributors to this report were Sandra B. Burrell, Assistant Director; James P. Klein; Ron La Due Lake; Joanne Landesman; Brian Mazanec; Michael Willems; and Elizabeth Wood. 
Depot Maintenance: Improved Strategic Planning Needed to Ensure That Army and Marine Corps Depots Can Meet Future Maintenance Requirements. GAO-09-865. Washington, D.C.: September 17, 2009. Depot Maintenance: Actions Needed to Identify and Establish Core Capability at Military Depots. GAO-09-83. Washington, D.C.: May 14, 2009. Depot Maintenance: DOD’s Report to Congress on Its Public-Private Partnerships at Its Centers of Industrial and Technical Excellence (CITEs) Is Not Complete and Additional Information Would Be Useful. GAO-08-902R. Washington, D.C.: July 1, 2008. Depot Maintenance: Issues and Options for Reporting on Military Depots. GAO-08-761R. Washington, D.C.: May 15, 2008. Depot Maintenance: Actions Needed to Provide More Consistent Funding Allocation Data to Congress. GAO-07-126. Washington, D.C.: November 30, 2006. DOD Civilian Personnel: Improved Strategic Planning Needed to Help Ensure Viability of DOD’s Civilian Industrial Workforce. GAO-03-472. Washington, D.C.: April 30, 2003.
The Air Force's maintenance depots provide critical support to ongoing operations around the world. Previously, the Department of Defense's (DOD) increased reliance on the private sector for depot maintenance support, coupled with downsizing, led to a general deterioration in the capabilities, reliability, and cost-effectiveness of the military services' depots. In March 2007, the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OUSD (AT&L)) directed each service to submit a depot maintenance strategic plan and provided direction for the content of those plans. The Air Force issued two documents in response to this direction--a Strategy and a Master Plan. GAO used qualitative content analyses to determine the extent to which the Air Force's collective plan addresses (1) key elements of a results-oriented management framework and (2) OUSD's (AT&L) direction for the plan's content. While the Air Force plan focuses efforts on weapon system and equipment operational availability, it does not fully address the elements of a results-oriented management framework, nor does it clearly link information between the plan's two component documents. GAO's prior work has shown that seven elements of a results-oriented management framework are critical for comprehensive strategic planning. The plan fully addresses one of these elements by including a mission statement that summarizes the Air Force depots' major functions and operations, but it partially addresses or does not address the remaining six elements. For example, while the plan describes goals for the depots' mission-related functions, it does not provide time frames to achieve them. 
Additionally, the plan does not discuss any factors beyond the Air Force's control that could affect its ability to achieve the plan's goals, nor does it identify how the Air Force will evaluate its programs and use the results of such evaluations to adjust the plan's long-term goals and strategies to achieve desired levels of performance. Moreover, the content of the plan's two component documents is not clearly linked from one document to the other. For example, the goals listed in the Strategy are not clearly repeated in the Master Plan, and the Master Plan includes goals that are unrelated to depot maintenance. Nor does the Master Plan clearly align its content to the five long-term goals described in the Strategy. The plan does not fully address the elements of a results-oriented management framework, and the plan's two documents are not clearly linked to one another, in part because of weaknesses in oversight. Specifically, although OUSD (AT&L) established an oversight body, which included senior representatives from OUSD (AT&L) and the services, to review the services' plans, this body did not review the plan. Also, the Air Force did not establish an oversight mechanism to review its plan. The plan's weaknesses may limit the Air Force's ability to use its plan as a tool to meet future challenges. In addition, the Air Force plan is not fully responsive to OUSD's (AT&L) direction to the services that was designed to provide the services with a framework to meet future challenges. OUSD (AT&L) directed the services to address 10 specific issues in four general areas: logistics transformation, core logistics capability assurance, workforce revitalization, and capital investment. The plan partially addresses eight of these issues and does not address the remaining two.
For example, while the plan notes that the Air Force is partnering with local universities and technical schools to provide training to reengineer existing employees' skills, the plan does not address Air Force actions to identify new and emerging skill requirements, as directed. Furthermore, the plan does not discuss any benchmarks to evaluate the adequacy of investment funding, as directed. As discussed for the elements of a results-oriented management framework, the plan does not fully respond to OUSD (AT&L)'s direction for the plan's content in part because of weaknesses in oversight in both OUSD (AT&L) and the Air Force. The plan's shortcomings may limit the Air Force's assurance that its depots are postured and resourced to meet future maintenance challenges.
DOE’s Office of Fossil Energy oversees research on key coal technologies, but DOE does not systematically assess the maturity of those technologies. Using TRLs we developed for these technologies, we found consensus among stakeholders that CCS is less mature than efficiency technologies. Although federal standards for internal control require agency managers to compare actual program performance to planned or expected results and analyze significant differences, we found that DOE’s Office of Fossil Energy does not systematically assess the maturity of key coal technologies as they progress toward commercialization. While DOE officials reported that individual programs are aware of the maturity of technologies and DOE publishes reports that assess the technical and economic feasibility of advanced coal technologies, we found that the Office of Fossil Energy does not use a standard set of benchmarks or terms to describe or report on the maturity of technologies. In addition, DOE’s goals for advancing these technologies sometimes use terms that are not well defined. The lack of such benchmarks or an assessment of the maturity of key coal technologies and whether they are achieving planned or desired results limits: DOE’s ability to provide a clear picture of the maturity of these technologies to policymakers, utilities officials, and others; congressional and other oversight of the hundreds of millions of dollars DOE is spending on these technologies; and policymakers’ ability to assess the maturity of CCS and the resources that might be needed to achieve commercial deployment. Other agencies similarly charged with developing technologies, such as NASA and the Department of Defense (DOD), use TRLs to characterize the maturity of technologies. Table 1 shows a description of TRLs used by NASA. DOE has acknowledged that TRLs can play a key role in assessing the maturity of technologies during the contracting process. 
The agency recently issued a Technology Readiness Assessment Guide, which lays out three key steps for conducting technology readiness assessments during the contracting process: (1) identify the critical technology elements that are essential to the successful operation of the facility; (2) assess the maturity of these critical technologies using TRLs; and (3) develop a technology maturity plan that identifies the activities required to bring each technology to the desired TRL. Although use of the Guide is not mandatory, DOE’s Office of Environmental Management uses the Guide as part of managing its procurement activities––a result of a GAO recommendation––and its Office of Nuclear Energy has begun using TRLs to measure and communicate risks associated with using critical technologies in a novel way. Furthermore, the National Nuclear Security Administration has recently used TRLs as well. In the absence of an assessment from DOE, we asked stakeholders to gauge the maturity of coal technologies using a scale we developed based on TRLs. Table 2 shows the TRLs we developed for coal technologies by adapting the NASA TRLs. Some advanced coal plants have already been demonstrated commercially. For example, a number of ultrasupercritical plants ranging from 600 to more than 1,000 MW have been built or are under construction in Europe and Asia, and there are five IGCC plants in operation around the world, including two in the United States. Commercial deployment of CCS within 10 to 15 years is possible according to DOE and other stakeholders, but is contingent on overcoming a variety of economic, technical, and legal challenges. Many technologies to improve plant efficiency have been used and are available for commercial use now, but still face challenges.
CO2 injection wells can be permitted as Class I wells (injections of hazardous wastes, industrial nonhazardous wastes, or municipal wastewater) or Class V wells (injections not included in other classes, including wells used in experimental technologies, such as pilot CO2 injections for geologic sequestration). Under EPA’s proposed rule, well operators remain responsible for preventing endangerment of underground sources of drinking water until the operator meets all the closure and post-closure requirements and EPA approves site closure of the well. According to EPA, once site closure is approved, well operators will only be liable under the SDWA if they violate or fail to comply with EPA orders in situations where an imminent and substantial endangerment to health is posed by a contaminant that is in or likely to enter an underground source of drinking water. EPA plans to finalize the geologic sequestration rule in fall 2010. Neither the proposed rule nor the final rule will address liability for unintended releases of stored CO2. In addition, some higher efficiency plant designs also face technical challenges in that they require more advanced materials than are currently available. For example, “advanced” ultrasupercritical plants require development of metal alloys to withstand steam temperatures that could be 300 to 500 degrees Fahrenheit higher than today’s ultrasupercritical plants, according to DOE. From a legal perspective, most stakeholders reported that making efficiency upgrades to the existing fleet of coal power plants was limited by the prospect of triggering the Clean Air Act’s New Source Review (NSR) requirements––additional requirements that may apply when a plant makes a major modification, a physical or operational change that would result in a significant net increase in emissions. Stakeholders said this prospect could lead operators to continue running older plants with higher emissions in lieu of more efficient coal plants. CCS technologies offer more potential to reduce CO2 emissions than efficiency improvements alone but could raise electricity costs, increase demand for water, and could affect the ability of individual plants to operate reliably.
Technologies to improve plant efficiency offer potential near-term reductions, but also raise some concerns. According to key reports and stakeholders, the successful deployment of CCS technologies is critical to helping the United States meet potential limits in greenhouse gas emissions. In addition, CCS could allow coal to remain part of the nation’s diverse fuel mix. IEA estimated that CCS technologies could meet 20 percent of the reductions needed to reduce global CO2 emissions. This report also noted that the cost of meeting this goal would increase if CCS was not deployed. Massachusetts Institute of Technology (MIT) researchers called CCS the “critical enabling technology” to reduce CO2 emissions while allowing continued use of coal in the future. In 2009, NAS reported that if CCS technologies are not demonstrated commercially in the next decade, the electricity sector could move more towards using natural gas to meet emissions targets. Our past work has also found that switching from coal to natural gas can lead to higher fuel costs and increased exposure to the greater price volatility of natural gas. On the other hand, most stakeholders told us that CCS would increase electricity prices, and key reports raise similar concerns. MIT estimated that plants with post-combustion capture have a 61 percent higher cost of electricity, and IGCC plants with pre-combustion capture have a 27 percent higher cost, compared to plants without these technologies. Similarly, DOE estimated that plants with post-combustion capture have an 83 percent higher cost of electricity, while IGCC plants with pre-combustion capture have a 36 percent higher cost. DOE has also raised concerns about CCS and water consumption. Specifically, DOE estimated that post-combustion capture technology could almost double water consumption at a coal plant, while pre-combustion capture would increase water use by 73 percent.
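To make the reported cost estimates concrete, the following sketch applies the MIT and DOE percentage increases quoted above to an assumed baseline cost of electricity. The $100-per-MWh baseline is an arbitrary round number for illustration, not a figure from the report or the underlying studies.

```python
# Cost-of-electricity increases for CO2 capture, as reported in the text:
# MIT and DOE estimates for post-combustion capture (conventional plants)
# and pre-combustion capture (IGCC plants).
CAPTURE_COST_INCREASE = {
    ("post-combustion", "MIT"): 0.61,
    ("post-combustion", "DOE"): 0.83,
    ("pre-combustion", "MIT"): 0.27,
    ("pre-combustion", "DOE"): 0.36,
}

def coe_with_capture(baseline_per_mwh: float, capture: str, source: str) -> float:
    """Cost of electricity after adding capture, per the cited estimate."""
    return baseline_per_mwh * (1 + CAPTURE_COST_INCREASE[(capture, source)])

baseline = 100.0  # assumed $/MWh, purely illustrative
print(round(coe_with_capture(baseline, "post-combustion", "MIT"), 2))  # 161.0
print(round(coe_with_capture(baseline, "pre-combustion", "DOE"), 2))   # 136.0
```

The spread between the MIT and DOE figures for the same capture type is one reason the highlights section summarizes the impact only as a broad 30 to 80 percent range.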
Some utility officials also said CCS could lead to a decline in the ability of individual plants to operate reliably because a power plant might need to shut down if any of the three components (capture, transport, and storage) of CCS became unavailable. In addition, more electricity sources would need to make up for the higher parasitic load associated with CCS. The National Coal Council has also reported temporary declines in reliability during past deployments of new coal technologies. We provided a draft of our report to the Secretary of Energy and the Administrator of EPA for review and comment. In addition, we provided selected slides on reliability of electricity supply to NERC for comment. We received written comments from DOE’s Assistant Secretary of the Office of Fossil Energy, which are reproduced in appendix III. The Assistant Secretary concurred with our recommendation, stating that DOE could improve its process for providing a clearer picture of technology maturity and that it planned to conduct a formal TRL assessment of coal technologies in the near future. The Assistant Secretary also provided technical comments, which we have incorporated as appropriate. In addition, EPA and NERC provided technical comments, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, Secretary of Energy, Administrator of EPA, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-3841 or gaffiganm@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. CO2 is the most prevalent greenhouse gas (GHG), and coal power plants account for about one-third of all U.S. CO2 emissions. We reviewed DOE’s coal technology program, interviewed senior DOE staff on these issues, and visited coal power plants and research facilities in three selected states: Alabama, Maryland, and West Virginia. We conducted this performance audit from July 2009 through May 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We selected this nonprobability sample of states because they contained projects involving advanced coal technologies. To conduct this work, we reviewed key reports including those from the Department of Energy’s (DOE) national laboratories, the National Academy of Sciences, International Energy Agency (IEA), Intergovernmental Panel on Climate Change, Global CCS Institute, the National Coal Council, and academic reports. To identify stakeholders’ views on these technologies, we conducted initial scoping interviews with power plant operators, technology vendors, and federal officials from the Environmental Protection Agency (EPA) and DOE. Following this initial round of interviews, we selected a group of 19 stakeholders with expertise in carbon capture and storage (CCS) or technologies to improve coal plant efficiency and asked them a set of standard questions.
This group of stakeholders included representatives from major utilities that are planning or implementing projects that use these technologies, technology vendors that are developing these technologies, federal officials that are providing research, development, and demonstration funding for these technologies, and researchers from academia or industry who are actively researching these technologies. During these interviews, we asked stakeholders to describe the maturity of technologies in terms of a scale we developed based on Technology Readiness Levels (TRLs). TRLs are a tool developed by the National Aeronautics and Space Administration and used by various federal agencies to rate the extent to which technologies have been demonstrated to work as intended, using a scale of 1 to 9. In developing TRLs for coal technologies, we consulted with the Electric Power Research Institute (EPRI), which had recently used a similar approach to examine the maturity of coal technologies. Specifically, EPRI developed specific benchmarks to describe TRLs in the context of a commercial-scale coal power plant. For example, EPRI defined TRL 8 as demonstration at more than 25 percent of the size of a commercial-scale plant. We applied these benchmarks to a commercial-scale power plant, which we defined as 500 megawatts (MW) and emitting about 3 million tons of carbon dioxide (CO2) annually. We based this definition on some of the key reports we reviewed, which used 500 MW as a standard power plant and stated that such a plant would emit about 3 million tons of CO2 annually. However, CO2 emissions from a power plant can vary based on a variety of factors, including the amount of time that a power plant is operated. We also reviewed available data on the use of key coal technologies compiled by IEA and the Global CCS Institute. The following are GAO’s comments on the Department of Energy’s letter dated June 4, 2010.
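The commercial-scale benchmarks used in the methodology involve only simple arithmetic, sketched below. The EPRI-style TRL 8 threshold (more than 25 percent of commercial scale) comes from the text; the capacity factor and emission factor are hypothetical round numbers, not figures from the report, chosen only to show why a 500 MW coal plant emits on the order of 3 million tons of CO2 per year.

```python
# GAO's assumed commercial-scale coal plant, per the methodology above.
COMMERCIAL_SCALE_MW = 500

def trl8_threshold_mw(commercial_mw: float = COMMERCIAL_SCALE_MW) -> float:
    """TRL 8 per EPRI: demonstrated at more than 25 percent of commercial scale."""
    return 0.25 * commercial_mw

def annual_co2_tons(capacity_mw: float,
                    capacity_factor: float = 0.75,  # assumed fraction of hours at full load
                    tons_per_mwh: float = 0.9) -> float:  # assumed coal emission factor
    """Rough annual CO2 emissions; both default factors are assumptions, not report data."""
    mwh_per_year = capacity_mw * 8760 * capacity_factor  # 8760 hours in a year
    return mwh_per_year * tons_per_mwh

print(trl8_threshold_mw())                   # 125.0 (MW)
print(round(annual_co2_tons(500) / 1e6, 1))  # 3.0 (million tons per year)
```

As the text notes, actual emissions vary with how much a plant is operated, which is exactly the capacity-factor term in this sketch.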
In addition to the contact names above, key contributors to this report included Jon Ludwigson (Assistant Director), Chloe Brown, Scott Heacock, Alison O’Neill, Kiki Theodoropoulos, and Jarrod West. Important assistance was also provided by Chuck Bausell, Nirmal Chaudhary, Cindy Gilbert, Madhav Panwar, and Jeanette Soares.
Coal power plants generate about half of the United States' electricity and are expected to remain a key energy source. Coal power plants also account for about one-third of the nation's emissions of carbon dioxide (CO2), the primary greenhouse gas that experts believe contributes to climate change. Current regulatory efforts and proposed legislation that seek to reduce CO2 emissions could affect coal power plants. Two key technologies show potential for reducing CO2 emissions: (1) carbon capture and storage (CCS), which involves capturing and storing CO2 in geologic formations, and (2) plant efficiency improvements that allow plants to use less coal. The Department of Energy (DOE) plays a key role in accelerating the commercial availability of these technologies and devoted more than $600 million to them in fiscal year 2009. Congress asked GAO to examine (1) the maturity of these technologies; (2) their potential for commercial use and any challenges to their use; and (3) possible implications of deploying these technologies. To conduct this work, GAO reviewed reports and interviewed stakeholders with expertise in coal technologies. DOE does not systematically assess the maturity of key coal technologies, but GAO found consensus among stakeholders that CCS is less mature than efficiency technologies. Specifically, DOE does not use a standard set of benchmarks or terms to describe the maturity of technologies, limiting its ability to provide key information to Congress, utilities, and other stakeholders. This lack of information limits congressional oversight of DOE's expenditures on these efforts, and it hampers policymakers' efforts to gauge the maturity of these technologies as they consider climate change policies. In the absence of this information from DOE, GAO interviewed stakeholders with expertise in CCS or efficiency technologies to identify their views on the maturity of these technologies.
Stakeholders told GAO that while components of CCS have been used commercially in other industries, their application remains at a small scale in coal power plants, with only one fully integrated CCS project operating at a coal plant. Efficiency technologies, on the other hand, are in wider commercial use. Commercial deployment of CCS is possible within 10 to 15 years while many efficiency technologies have been used and are available for use now. Use of both technologies is, however, contingent on overcoming a variety of economic, technical, and legal challenges. In particular, with respect to CCS, stakeholders highlighted the large costs to install and operate current CCS technologies, the fact that large scale demonstration of CCS is needed in coal plants, and the lack of a national carbon policy to reduce CO2 emissions or a legal framework to govern liability for the permanent storage of large amounts of CO2. With respect to efficiency improvements, stakeholders highlighted the high cost to build or upgrade such coal plants, the fact that some upgrades require highly technical materials, and plant operators' concerns that changes to the existing fleet of coal power plants could trigger additional regulatory requirements. CCS technologies offer more potential to reduce CO2 emissions than efficiency improvements alone, and both could raise electricity costs and have other effects. According to reports and stakeholders, the successful deployment of CCS technologies is critical to meeting the ambitious emissions reductions that are currently being considered in the United States while retaining coal as a fuel source. Most stakeholders told GAO that CCS would increase electricity costs, and some reports estimate that current CCS technologies would increase electricity costs by about 30 to 80 percent at plants using these technologies. DOE has also reported that CCS could increase water consumption at power plants. 
Efficiency improvements offer more potential for near term reductions in CO2 emissions, but they cannot reduce CO2 emissions from a coal plant to the same extent as CCS. GAO recommends that DOE develop a standard set of benchmarks to gauge and report to Congress on the maturity of key technologies. In commenting on a draft of this report, DOE concurred with our recommendation.
Within FDA, CDER oversees the switch of drugs from prescription to OTC. Generally, prescription drugs are drugs that are safe for use only under the supervision of a health care practitioner. Approved prescription drugs that no longer require such supervision may be marketed OTC. In applying this standard, FDA will authorize a prescription-to-OTC switch only after it is determined that the drug in question has met the following FDA criteria: (1) it has an acceptable safety profile based on prescription use and experience; (2) it has a low potential to be abused; (3) it has an appropriate safety and therapeutic index; (4) it has a positive benefit–risk assessment; and (5) it is needed for a condition or illness that is self-recognizable, self-limiting, and requires minimal intervention by a health care practitioner for treatment. FDA tries to determine if the OTC availability of a prescription drug will prevent or delay someone from seeking needed medical attention. One class of OTC drugs switched from prescription status, the nicotine products (such as Nicorette gum), has restricted access based on age—they are available OTC only to persons 18 years of age or older. Generally, drugs considered for a prescription-to-OTC switch involving the same indication, strength, dose, duration of use, dosage form, patient population, and route of administration as the prescription drug require fewer new studies regarding safety and efficacy because such studies have already been submitted as part of the original new drug application (NDA). FDA also requires sponsors to address concerns related to consumers’ ability to self-diagnose and self-treat the condition. Thus, sponsors generally submit additional studies, such as an actual use study, which examines consumers’ ability to self-diagnose, and a label comprehension study, which examines how consumers interpret the drug’s proposed label.
In addition to these actual use and label comprehension studies, FDA requires sponsors to submit updated safety information on adverse events reported for the prescription form of the drug. Figure 1 shows the flow of an OTC switch application of a first-in-a-class drug through the decision process within CDER. To begin the process for a prescription-to-OTC switch, the sponsor submits an efficacy supplement, known as a supplemental new drug application (sNDA), to an approved NDA. This sNDA is sent to the FDA Office of Drug Evaluation that oversaw the original NDA and usually is the office with relevant expertise. This Office of Drug Evaluation is generally responsible for reviews of the primary effectiveness data and safety results. After an application has been determined to be complete, a reviewer from this office assesses the design, general effectiveness, and safety of the product. If the application is determined to be incomplete, this office will issue a "refusal to file" letter to the sponsor, detailing the omissions or inadequacies that led to this decision. When an Office of Drug Evaluation with relevant expertise receives a fileable sNDA for an OTC drug switch, it notifies the Office of Drug Evaluation V and its Division of Over-the-Counter Drug Products, which has relevant expertise in OTC drug products. Generally, the Office of Drug Evaluation V oversees the review of (1) the suitability of the product for OTC use and (2) safety experiences during the marketing of the prescription product. A reviewer from this office assesses studies related to OTC marketing, including the actual use and label comprehension studies. CDER's Office of Drug Safety conducts additional reviews of the label comprehension studies, reviews postmarketing safety data of the prescription drug, and provides reports to reviewing staff in other offices upon request. FDA can convene advisory committee meetings for prescription-to-OTC switch applications.
Advisory committees include outside experts, such as medical professionals and researchers, who provide FDA with independent advice and recommendations. Members review data submitted by the sponsor or presented by FDA review staff, address questions, and vote, either supporting or opposing a switch from prescription to OTC status. Advisory committees conduct open meetings and offer members of the public the opportunity to express their views. FDA considers the advisory committees' recommendations in its deliberations. However, the agency decides whether to adopt these recommendations on a case-by-case basis and is not required to follow the committees' recommendations. FDA review staff from the appropriate offices of drug evaluation review the data presented, interpret the findings, and make recommendations to the respective office directors on whether the proposed OTC switch should be approved. Once these reviews are completed, they are sent to the directors of both the office of drug evaluation with relevant expertise and the Office of Drug Evaluation V. If both directors agree with each other's review recommendation, the directors of the relevant offices of drug evaluation prepare an action package and an appropriate action letter for review, concurrence, and their final signatures. If the office directors do not concur on the decision, the application is reviewed by the Office of New Drugs. The Director of CDER is not directly involved in the approval of all drugs, but may overrule the decisions of subordinate officials. The authority to approve an OTC switch application ultimately rests with the Secretary of Health and Human Services. This approval authority is delegated to the Commissioner of FDA, then to other high-level management officials, and eventually to other FDA officials within lower levels of the agency. This delegated authority allows decisions to be made at lower levels within the agency but assumes that management agrees with these decisions.
The FDA Commissioner and other officials within the Office of the Commissioner usually do not have a role in OTC switch decisions, but have the authority to overrule the decisions of other FDA officials. There are several types of contraceptive drugs and devices, including barrier methods, intrauterine devices, spermicides, and hormonal methods. Several types of hormonal methods of contraception are available, including birth control pills, injectable hormones, hormonal implants, and emergency contraceptive pills (ECPs). FDA has approved two ECPs, Preven and Plan B, for use by prescription, and Plan B is the first drug in its class to go through the review process by FDA to determine whether it should be allowed to be sold OTC. ECPs are high-dose birth control pills and have been available by prescription since 1998, when FDA approved Preven, a dedicated combined ECP containing the hormones estrogen and progestin. Prior to 1998, many physicians instructed patients to take higher doses of oral contraceptive pills for emergency contraception, an "off-label" use. Plan B is a dedicated ECP containing only levonorgestrel, a type of progestin. The Plan B regimen is a two-pill dose of levonorgestrel (0.75 mg each) that is most effective when the first pill is taken as soon as possible, but no later than 72 hours, after contraceptive failure or unprotected intercourse. The second pill is taken 12 hours after the first pill. Research suggests that a levonorgestrel-only hormone regimen, such as Plan B, can reduce the risk of pregnancy by 89 percent if taken within the 72-hour window. The time constraint for maximum effectiveness associated with Plan B has led many in the medical community and some reproductive health advocates to support switching Plan B to OTC, making it more readily available when needed.
In addition, levonorgestrel-only regimens, such as Plan B, have fewer side effects than the combined ECP, reducing the incidence of two common side effects, nausea and vomiting, by 50 percent and 70 percent, respectively. Research has shown that levonorgestrel-only hormonal emergency contraception, such as Plan B, interferes with prefertilization events. It reduces the number of sperm cells in the uterine cavity, immobilizes sperm, and impedes further passage of sperm cells into the uterine cavity. In addition, levonorgestrel has the capacity to delay or prevent ovulation from occurring. ECPs have not been shown to cause a postfertilization event—a change in the uterus that could interfere with implantation of a fertilized egg. Some researchers argue that an interference with the implantation of a fertilized egg is unlikely to happen because progestins, whether natural or synthetic, help to sustain pregnancy. In addition, there is no evidence that one burst of levonorgestrel without estrogen can prevent implantation. However, researchers have concluded that the possibility of a postfertilization event cannot be ruled out, noting that it would be unethical and logistically difficult to conduct the necessary research. ECPs, including Plan B, do not interfere with an established pregnancy. On May 6, 2004, the Acting Director of CDER rejected the recommendations of a joint advisory committee and FDA review officials and signed the not-approvable letter for the Plan B OTC switch application. 
Four aspects of FDA's review process were unusual: officials who would normally have been responsible for signing an action letter disagreed with the decision and did not sign the not-approvable letter for Plan B; high-level management was more involved than for other OTC switch applications; conflicting accounts exist of whether the decision to not approve the application was made before the reviews were completed; and the rationale for the not-approvable decision was novel and did not follow FDA's traditional practices. On May 6, 2004, the Acting Director of CDER rejected the recommendations of a joint advisory committee and FDA review officials by signing the not-approvable letter for the Plan B OTC switch application. This action concluded a review process that began on April 16, 2003, when the drug's sponsor, Women's Capital Corporation (WCC), submitted a standard sNDA requesting that Plan B be made available without a prescription. In the OTC switch application, the proposed OTC dose and administration schedule were identical to those for Plan B's prescription use. The application also included an actual use study and a label comprehension study to assess potential users' understanding of how to administer the product. Following FDA's procedures for a review of an OTC switch application, the sNDA was submitted to the Office of Drug Evaluation III—which includes the Division of Reproductive and Urologic Drug Products, whose staff also reviewed the prescription Plan B application. Table 1 includes a brief timeline of events involving Plan B and the initial OTC switch application. (See app. III for a more detailed timeline.) On June 9, 2003, review staff within the Office of Drug Evaluation III determined the Plan B sNDA to be fileable and accepted it for review. The sNDA was then submitted to the Office of Drug Evaluation V—which includes the Division of Over-the-Counter Drug Products, whose staff have expertise with OTC drugs—for concurrent review, also in accordance with FDA's review procedures.
FDA also convened a joint public meeting of two of its advisory committees—the Nonprescription Drugs Advisory Committee (NDAC) and the Advisory Committee for Reproductive Health Drugs (ACRHD)—during which the committees' members reviewed documentation and voted on answers to specific questions asked by FDA review staff from both offices, including whether Plan B should be granted OTC marketing status. On December 16, 2003, the members of the joint advisory committee voted 23 to 4 to recommend approving a switch in Plan B's marketing status from prescription to OTC. Members of the joint advisory committee also voted on other aspects of the Plan B application. For example, members voted 27 to 1 that Plan B could be appropriately used as recommended by the label and that the actual use data were generalizable to the overall population, including adolescents. A meeting was held on January 15, 2004, between officials within the office of the CDER Director and review staff within the Offices of Drug Evaluation III and V about the Office of the Commissioner's position on the acceptability of the Plan B OTC switch application. FDA's minutes from this meeting stated that the Acting Director of CDER informed review staff that a not-approvable letter was "recommended" based on the need for more data to clearly establish appropriate use in younger adolescents. Meeting minutes also stated that the Acting Director of CDER raised multiple issues, including the "very limited data" on younger adolescents in the actual use and label comprehension studies and concerns about younger adolescents' ability to appropriately use Plan B without a learned intermediary, such as a physician. The minutes also noted that the Acting Director of CDER raised possible options to address these concerns, including asking the sponsor to collect more data to show appropriate use by those 18 years of age and under or by limiting the availability of the product by, for example, restricting distribution to minors or restricting pharmacy access to a behind-the-counter option.
According to review staff within the Offices of Drug Evaluation III and V whom we spoke with and as documented in their respective reviews, at this January 2004 meeting the Acting Director of CDER also told them that the decision on the Plan B OTC switch application would be made at a "level higher than them." At this meeting, review staff said they also told the Acting Director of CDER that they had not yet completed their reviews and that additional data existed on the use of ECPs in younger adolescents of which high-level management might not be aware. According to meeting minutes, it was agreed that review staff would complete their reviews as well as obtain these data and present them to the Commissioner, who had expressed a willingness to meet with review staff to further discuss the data and these concerns. Review staff told us they then requested additional data from the sponsor and contacted academic researchers in the United States as well as international researchers about ongoing studies examining younger adolescents and behavioral changes associated with increased access to ECPs. Review staff identified five additional studies in which ECPs were provided in advance to study participants. Review staff also reevaluated data previously submitted with the Plan B OTC switch application. On February 18, 2004, review staff within the Offices of Drug Evaluation III and V presented their findings to high-level management, including the Commissioner and the Acting Director of CDER. According to interviews with officials from the Office of New Drugs and review staff within the Offices of Drug Evaluation III and V, and as documented in their respective reviews of the Plan B application, they said these data provided sufficient evidence that there was neither an increase in risky behaviors nor any difference in appropriate use between younger adolescents and older populations.
According to FDA’s minutes of this meeting, the Commissioner expressed multiple points, including the potential for changes in future contraceptive behaviors after adolescents took Plan B and that counseling by a learned intermediary might be beneficial, particularly for adolescents. He also noted that he was not convinced that the additional studies used as evidence had “enough power” to determine if behavioral differences existed between adults and adolescents. According to the minutes, the meeting ended with the conclusion that CDER staff would continue working with the sponsor on a “marketing plan to limit availability of the product over the counter and to consider the most appropriate age groups to be restricted from access to the product.” In addition, according to meeting minutes, the Commissioner requested a “rapid action” on the Plan B OTC switch application. Aspects of FDA’s review of the Plan B OTC switch application were unusual compared to the agency’s regular review process. First, the FDA officials who would normally sign an action letter for an OTC switch application disagreed with the decision and did not sign the Plan B not- approvable letter; as a result, the Acting Director of CDER did so. Second, the review process for the Plan B OTC switch application was marked by a level of involvement by FDA high-level management that has not been typical for OTC switch applications. Third, conflicting accounts exist regarding when the decision to deny the application was made. Finally, the Acting Director of CDER’s rationale for denying the application was novel for an OTC switch decision. By early April 2004, the reviews from the Offices of Drug Evaluation III and V were completed. The directors of these offices agreed with the recommendations of the joint advisory committee and review staff that Plan B should be made available without a prescription. 
Nonetheless, the office directors told us that they were asked by high-level management to draft a not-approvable letter. Both office directors also told us they did not agree with a not-approvable action and did not sign the not-approvable letter. The issue was then raised to the Office of New Drugs. The Director of the Office of New Drugs reviewed the staff's analysis of the application and concurred with the recommendations of both office directors. He also did not sign the not-approvable letter. The Director of the Office of New Drugs told us that it was "very, very rare" that his office would become involved in the signing of an action letter. According to FDA manuals of policies and procedures and The CDER Handbook, the Office of New Drugs would review decisions from the offices of drug evaluation only if there was disagreement between these two reviewing offices. In the case of Plan B, there was no disagreement between the two reviewing offices of drug evaluation on the approvability of the application. The Acting Director of CDER signed the not-approvable letter, which was issued on May 6, 2004. According to FDA, the Acting Director of CDER did not ask the Directors of the Offices of Drug Evaluation III and V or the Director of the Office of New Drugs to sign the not-approvable letter, nor was the letter presented to them for their signature, because it was known that they did not agree with the not-approvable action. High-level FDA management became more involved than usual in the review process for the Plan B OTC switch application. According to review staff within the Offices of Drug Evaluation III and V that we spoke with and as documented in their respective reviews, at a meeting held on January 15, 2004, the Acting Director of CDER informed them that the decision for the Plan B OTC switch application would be made by high-level management.
This action removed decision-making authority from the directors of the reviewing offices who would normally make the decision. According to minutes from a subsequent meeting between review officials and the sponsor on January 23, 2004, the Director of the Office of New Drugs informed the sponsor that such a high-level decision was not typical of CDER’s procedures for drug approvals. The Acting Director of CDER told us that management needed to be comfortable with review staff’s final decision because of the high visibility and sensitivity of the Plan B OTC switch application. He and other senior FDA officials told us that involvement by high-level management stemmed from the agency’s practice of delegated authority. In addition to highly visible and sensitive cases, they said that the Commissioner and the Director of CDER would also generally become involved in cases that would potentially have a far-reaching impact or in cases in which management had a different view or disagreed with review staff. Although such cases are rare, FDA officials cited other examples when high-level management was more involved in the review process for a drug application than normal—the approval of thalidomide for the treatment of leprosy in 1998 and the approval of mifepristone for the termination of early pregnancy in 2000. Unlike Plan B, the examples FDA officials provided us did not involve OTC switch applications. FDA officials gave conflicting accounts of when the not-approvable decision for the Plan B OTC switch application was made. FDA officials, including the Director and Deputy Director of the Office of New Drugs and the Directors of the Offices of Drug Evaluation III and V, told us that they were told by high-level management that the Plan B OTC switch application would be denied months before staff had completed their reviews of the application. 
The Director and Deputy Director of the Office of New Drugs told us that they were told by the Acting Deputy Commissioner for Operations and the Acting Director of CDER, after the Plan B public meeting in December 2003, that the decision on the Plan B application would be not-approvable. They informed us that they were also told that the direction for this decision came from the Office of the Commissioner. The Acting Deputy Commissioner for Operations and the Acting Director of CDER denied that they had said that the application would not be approved. In addition, although minutes of the January 15, 2004, meeting stated that the Acting Director told review staff that a not-approvable decision was "recommended," review staff documented that they were told at this meeting that the decision would be not-approvable. Neither office's review was completed until April 2004. However, the Acting Director of CDER told us that he made the decision to not approve the Plan B OTC switch application shortly before signing the action letter. He also informed us that his decision was made in consultation with other high-level management officials, including the Commissioner and the Acting Deputy Commissioner for Operations, but that he was not directed to reach a particular decision. The Acting Director also told us that these high-level management officials agreed with his decision. When we asked the Acting Director about his meeting with officials from the Office of New Drugs in December 2003, he told us that he might have indicated to the Director and Deputy Director that the agency was "tending" or "thinking of going" in the direction of a not-approvable decision, but that this was not the final decision.
Furthermore, although he told us that he was "90 percent sure" as early as January 2004 that the decision would be not-approvable, the Acting Director told us he made his final decision only in the last few weeks prior to issuing the action letter, after he had reviewed all of the documentation associated with the application. The Acting Director of CDER told us that the rationale for his decision was not fully developed until a few days before the action letter was issued on May 6, 2004. According to internal FDA e-mails we reviewed, the Acting Director of CDER contacted the Director of the Office of Pediatric Therapeutics on May 2, 2004, requesting assistance on language regarding cognitive development during early adolescence to support his decision. According to these e-mails, the Director of the Office of Pediatric Therapeutics responded that she would consult with another official with a background in developmental pediatrics and would follow up with "behavioral science information as to why one cannot extrapolate decision making on safety issues" from older to younger adolescents. The rationale for the Acting Director of CDER's decision was novel and did not follow FDA's traditional practices. The Acting Director was concerned about the potential impact that the OTC marketing of Plan B would have on the propensity for younger adolescents to engage in unsafe sexual behaviors because of their lack of cognitive maturity. The Acting Director further concluded that because these differences in cognitive development made it inappropriate to extrapolate data from older to younger adolescents in this case, there were insufficient data on the use of Plan B among younger adolescents. FDA review officials disagreed with the Acting Director's rationale and noted that the agency had not considered behavioral implications resulting from differences in cognitive development in prior OTC switch decisions.
The Acting Director’s Rationale Was Based on His Concerns about Risk-Taking in Younger Adolescents The Acting Director of CDER told us he signed the not-approvable letter because of his concerns about the lack of cognitive development and the potential for risky behaviors among younger adolescents resulting from increased access to Plan B. For example, he noted increased access to Plan B could potentially result in an increase in unsafe sexual activity, particularly among younger adolescents—an age group, he noted, that has a tendency to engage in risky behaviors because of their level of cognitive development. This change in behavior could be represented by changes in measurable indicators, such as a decrease in condom use or an increase in the transmission of sexually transmitted diseases (STD). “In making decisions about pediatric use, it is often possible to extrapolate data from one age group to another, based on knowledge of the similarity of the condition. However, in this case, adolescence is known to be a time of rapid and profound physical and emotional change. . . . Because of these large developmental differences, I believe that it is very difficult to extrapolate data on behavior from older ages to younger ages. I am uncomfortable with our current level of knowledge about the potential differential impact of OTC availability of Plan B on these age subsets.” Some other officials we spoke with supported the Acting Director’s concerns about extrapolating data from older to younger adolescents. For example, the Director of the Office of Pediatric Therapeutics told us and noted in e-mails to the Acting Director of CDER, which we reviewed, that the difference in cognitive development and maturity between older and younger adolescents and the potential impact this would have on behaviors warranted a separate analysis of this latter age group. 
In addition, one of the members of the joint advisory committee we spoke with said he was also concerned about extrapolating data from older to younger age groups because he perceived weaknesses in the actual use and label comprehension studies submitted by the sponsor. Because of these concerns, the Acting Director concluded that the Plan B OTC switch application needed more data specific to younger adolescents. In the not-approvable letter, the Acting Director stated there were too few younger adolescents in the sponsor’s actual use study to support the Plan B OTC switch application. Specifically, he highlighted that only 29 of 585 participants in the study were 14 years to 16 years of age and none were under 14 years of age. Although he acknowledged concerns about the difficulty of including younger adolescents in actual use studies, he told us that it was not impossible to enroll younger adolescents in studies, noting that studies for other products have been conducted involving younger participants, including those as young as infants. Some of the Acting Director’s concerns regarding the low number of younger adolescents were also raised by other review staff and members of the joint advisory committee. For example, one FDA reviewer who recommended an approvable action on the Plan B OTC switch application noted that despite a reanalysis of the actual use study data of subjects aged 14 years to 17 years, the sample size was too small and “significantly limit assessment of potential risky/unsafe sexual behavior associated with OTC accessibility of Plan B.” Although review staff within the Offices of Drug Evaluation III and V presented him with additional data on sexual behaviors of younger adolescents in association with increased access to ECPs, the Acting Director of CDER determined that these data were not adequate to support the approval of Plan B for OTC use. 
He provided his reasoning in his memorandum, stating that these studies were either "not conducted in the general population or they provide product education assistance beyond what adolescents would receive in an OTC situation, where no contact with a health care professional is expected." The Acting Director of CDER's rationale varied from FDA's traditional practices by considering the potential implications OTC access to Plan B would have on the sexual behavior of younger adolescents based on their lack of cognitive maturity and by not accepting the validity of extrapolating data from older to younger adolescents. Although he acknowledged to us that considering adolescents' cognitive development as a rationale for a not-approvable decision was unprecedented, the Acting Director also told us that FDA had recently increased its focus on pediatric issues. He noted that pediatric issues were currently being raised in prescription drug reviews and believed the same should occur in OTC drug reviews.

FDA Review Officials Disagreed with the Acting Director's Rationale for the Not-Approvable Decision

FDA review staff, the Directors of the Offices of Drug Evaluation III and V, and the Director of the Office of New Drugs disagreed with the Acting Director of CDER's rationale for not approving the Plan B OTC switch application. FDA review officials, including those from the Office of New Drugs, noted that traditionally FDA has not considered whether younger adolescents would use an OTC product differently than older adolescents, and the Director of the Office of New Drugs told us that it was "atypical" to raise the question of maturity during a drug review. These officials also noted that FDA does not attempt to determine how a patient arrived at the need for a drug. Rather, drug evaluations usually begin with the need for a potential treatment already existing.
Review staff we spoke with acknowledged that certain behavioral concerns and unintended consequences are examined for an OTC switch application, such as whether making a drug OTC would delay a person from seeking medical treatment or if the drug would potentially be abused if it were more readily available. They told us that these issues are usually examined during a benefit–risk review, which is an analysis of potential medical outcomes. Review staff told us they examined benefit–risk issues for Plan B, and they concluded that concerns regarding the potential for unsafe sexual behaviors among adolescents could not be supported. In addition, the review of the label comprehension study from the Office of Drug Safety noted that potential users of the product would be able to appropriately use it if the sponsor made its suggested changes to the proposed labeling. Also, at the public meeting, members of the joint advisory committee voted 27 to 1 that the actual use study demonstrated that consumers could properly use Plan B as recommended by the label. The members of the joint advisory committee also voted 28 to 0 that the literature review of Plan B included in the actual use study did not show that Plan B would be used as a regular form of contraception. Furthermore, the review of the application from the Office of Drug Evaluation III, which included the benefit–risk assessment for Plan B, noted that having Plan B in an OTC setting would “pose little risk” to the potential user and that the risk of an adverse pregnancy outcome, such as lower birth weight babies and premature delivery, is much higher among younger adolescents. 
The review concluded that OTC access to Plan B, by helping younger adolescents avoid unintended pregnancies, would be "of particular value given the greater risk of an adverse pregnancy outcome in this high risk group." This review also noted that even for a large dose of the hormone used in Plan B, the "margin of safety appear to be high." In an attempt to further address the Commissioner's and Acting Director's concerns about the potential for increased risky behavior by younger adolescents resulting from increased access to Plan B, review staff requested additional data from the sponsor and reviewed ongoing studies examining these concerns. FDA's reviewers concluded that increased access to ECPs did not result in (1) inappropriate use by adolescents as a substitute form of contraception, (2) an increase in the number of sexual partners or the frequency of unprotected intercourse, or (3) an increase in the frequency of STDs. To reach these conclusions, review staff examined the five studies that provided supplies of ECPs in advance to study participants to assess the behavioral impact of OTC access. In one study, which included 2,090 women aged 15 years to 24 years, there was a decrease in unprotected sex among all age groups and no increase in the incidence of STDs compared to the baseline. Another study of 160 adolescent mothers included participants aged 14 years to 20 years. Although there were limited data available, this study concluded that there was no increase in unprotected intercourse and no decrease in condom use among participants. A third study of 301 adolescent women, aged 15 years to 20 years, showed similar results, with no increase in unprotected intercourse or STDs and no decrease in condom use. FDA officials, including those from the Office of New Drugs, also disagreed with the Acting Director's determination that extrapolating data from older populations to younger adolescents was inappropriate.
In their reviews, officials noted that data they reviewed showed that younger adolescents had outcomes similar to those of older populations. For example, the actual use study found that 82 percent of participants 16 years of age or under correctly took the second dose 12 hours later, compared to 78 percent of those 17 years and older. Also, review staff said that overall the number of participants who were younger adolescents was adequate to draw conclusions about potential use among the adolescent population. Review staff told us they encouraged the sponsor to not limit enrollment or exclude adolescents from the actual use study and felt the study included a representative population of women that would potentially use Plan B. Some of the members of the joint advisory committee we spoke with also said they considered the number of younger adolescents in the actual use study as adequate. In addition, the Director of the Office of New Drugs told us that the agency has not requested age-specific data often and that FDA often extrapolates findings, including findings on behaviors, from adults to adolescents. He added that given the agency's traditional processes and the data provided in the Plan B OTC switch application, there was no reason to consider the extrapolations done in the staff's reviews as inappropriate. In his memorandum, the Director of the Office of New Drugs wrote:

"In my opinion, these studies provide adequate evidence that women of childbearing potential can use Plan B safely, effectively, and appropriately for emergency contraception in the non-prescription setting. The data submitted by the sponsor in support of non-prescription use of Plan B are fully consistent with the Agency's usual standards for meeting the criteria for determining that a product is appropriate for such use. . . . Such a conclusion is consistent with how the Agency has made determinations for other OTC products, including other forms of contraception available without a prescription.
Further, I believe that greater access to this drug will have a significant positive impact on the public health by reducing the number of unplanned pregnancies and the number of abortions.” In his memorandum, the Director of the Office of New Drugs also noted that FDA has a “long history” of extrapolating findings from older populations to younger adolescents. He wrote that this type of extrapolation from older populations to younger adolescents had been done in clinical trials for both prescription and OTC drug approvals and that this practice was incorporated into the Pediatric Research Equity Act (PREA)—the law authorizing FDA to require pediatric studies in certain defined circumstances. According to PREA, if the disease and the effects of the drug are “sufficiently similar” between adult and pediatric populations, it can be concluded that the effectiveness can be extrapolated from “adequate and well-controlled studies in adults” usually in conjunction with supplemental studies in pediatric populations. In addition, PREA provides that studies may not be necessary for all pediatric age groups, if data from one age group can be extrapolated to another. Members of the joint advisory committee expressed similar conclusions to those of FDA review officials earlier at the public meeting in December 2003. During the public meeting, committee members voted 27 to 1 that the actual use study data were generalizable to the overall population of OTC users, including adolescents. The decision to not approve the Plan B OTC switch application was not typical of the other 67 proposed prescription-to-OTC switch decisions made from 1994 through 2004. 
The decision on the Plan B application stands out from these other OTC switch applications for two reasons: it was the only application that was not approved after the members of the joint advisory committee voted to recommend approval, and the action letter was signed by the Acting Director of CDER instead of the directors of the offices where the application was reviewed. From 1994 through 2004, Plan B was the only prescription-to-OTC switch decision that was not approved after the joint advisory committee voted to recommend approval of the application. FDA advisory committees considered 23 OTC switch applications during this period; the Plan B OTC switch application was the only 1 of those 23 that was not approved following such a recommendation. In addition, there has been only 1 other decision for an OTC switch application that did not follow the recommendations of the joint advisory committee. This other OTC switch application, for the drug Aleve, was approved for OTC status by FDA in 1994, although the joint advisory committee opposed the switch. The NDAC met jointly with the Arthritis Drugs Advisory Committee to discuss the OTC switch application for Aleve in June 1993 and recommended that the application not be approved. Following this meeting, the sponsor made changes to address the joint advisory committee’s concerns, and as a result of these changes, FDA decided to approve the application. From 1994 through 2004, 94 action letters were issued during the review processes for the 68 prescription-to-OTC switch applications, and only 1 action letter—the not-approvable letter for Plan B—was signed by the Director, in this case the Acting Director, of CDER. Given that Plan B was a first-in-a-class drug, the Directors of the Offices of Drug Evaluation III and V would normally jointly sign the action letter. 
The Plan B application was 1 of 68 proposed OTC switch applications decided by FDA from 1994 through 2004, and 14 of those 68 applications, including the Plan B application, were issued not-approvable letters. Eight of those 14 applications were eventually approved. Plan B was the only contraceptive or emergency contraceptive proposed for an OTC switch during this period. Thirty-eight OTC switch applications, including Plan B, were for the same dose, population, and indication as the prescription product, and all but 3 applications were eventually approved. According to the Deputy Director of the Office of New Drugs, there are no age-related marketing restrictions for any FDA-approved contraceptives, and FDA has not required any pediatric studies. Condoms and spermicides are available to anyone OTC, while intrauterine devices; diaphragms; cervical caps; and hormonal methods of contraception, including ECPs, are available to anyone with a prescription. For hormonal contraceptives, FDA has assumed that suppression of ovulation is the same in all postmenarcheal females, regardless of age. The Deputy Director of the Office of New Drugs told us that all birth control pills, including ECPs, contain the following class labeling: “Safety and effectiveness of [trade name] have been established in women of reproductive age. Safety and efficacy are expected to be the same for postpubertal adolescents under the age of 16 and for users 16 years and older. Use of this product before menarche is not indicated.” FDA officials from the Office of New Drugs explained that for an OTC switch, the safety and effectiveness issues have already been addressed during the initial approval process for the drug to become a prescription drug. For an OTC switch application, the review process is primarily focused on whether the drug meets the OTC switch criteria, specifically whether it is safe and effective for use in self-medicating. 
No safety issues that would require age-related restrictions were identified with the original NDA for prescription Plan B. FDA approved this application upon determining that Plan B met the statutory standards of safety and effectiveness, manufacturing and controls, and labeling. The original NDA for Plan B for use as an emergency contraceptive contained an extensive safety database that included controlled trials and literature on over 15,000 women. The label for prescription Plan B makes no age distinctions about the pharmacological processes of the drug, and prescription Plan B is available to anyone with a prescription. FDA reviewed a draft of this report and provided comments, which are reprinted in appendix VI. FDA also provided technical comments, which we incorporated as appropriate. In its comments, FDA disagreed with our finding that three aspects of its decision process for the May 2004 Plan B OTC switch application were unusual. First, FDA said that the involvement of high-level management in the Plan B decision was not as unusual as the draft report found. FDA commented that the Director of CDER is ultimately responsible for all decisions made within CDER, and that the Director of CDER is regularly involved in regulatory decisions that are not routine, including those that involve controversial issues. FDA also commented that the Director of CDER typically discusses high-profile and controversial regulatory decisions with officials within the Office of the Commissioner. While we agree with FDA that the Director of CDER and other high-level officials generally are more likely to become directly involved in high-profile regulatory decisions, and we noted this in the draft report, we found that this level of involvement is unusual for OTC switch applications. The other examples of high-level management involvement given to us by FDA officials during the course of our work involved decisions about the marketing of prescription drugs. 
Also, it was unusual for the Acting Director of CDER to inform FDA’s review staff that it had been determined that the Plan B decision would be made by high-level management. The Acting Director did so on January 15, 2004, before the review staff had completed their reviews of the application. Second, FDA took issue with what it characterized as the tone of our discussion about when the decision was made to deny the Plan B OTC switch application. FDA commented that discussions about alternative regulatory actions ordinarily occur in the course of decision making within CDER and that it is inaccurate to conclude that a decision to deny the application was made several months before the not-approvable letter was issued. However, the draft report did not assert that a decision was actually made several months before the letter was issued. Rather, it accurately noted that FDA officials gave us conflicting accounts of when the not-approvable decision was made. The Director and Deputy Director of the Office of New Drugs and other officials told us that they were informed during December 2003 and January 2004 that the application would not be approved. The Acting Director of CDER denied this, and we reported that his rationale for the not-approvable decision was not fully developed until early May 2004. Third, FDA disagreed with our finding that the Acting Director’s rationale for denying the application was novel and did not follow FDA’s traditional practices. FDA commented that the Acting Director’s focus on the potential implications to the sexual behavior of adolescent women of approving the Plan B OTC switch application was appropriate and consistent with FDA’s treatment of other OTC switch applications. In response to this comment, we have revised the report to more clearly describe the reasons for our finding. 
We found that the Acting Director’s rationale was novel because it explicitly considered the differing levels of cognitive maturity of adolescents of different ages, and because of his views about these cognitive maturity differences, he concluded that it was inappropriate to extrapolate data related to risky sexual behavior from older to younger adolescents. In his May 6, 2004, memorandum, the Acting Director stated that “Because of these large developmental differences, I believe that it is very difficult to extrapolate data on behavior from older to younger ages.” The Acting Director acknowledged that considering adolescents’ cognitive development as a rationale for a not-approvable decision was unprecedented for an OTC switch application. In addition, other FDA officials told us that the agency had not previously considered whether younger adolescents would use a product differently than older adolescents. For example, the Director of the Office of New Drugs told us that it was “atypical” to raise the question of maturity during a drug review and that FDA has traditionally extrapolated findings from older to younger adolescents. Furthermore, in his April 22, 2004, memorandum, the Director of the Office of New Drugs said that “the Agency has a long history of extrapolating findings from clinical trials in older patients to adolescents in both prescription and non-prescription approvals.” In addition, FDA disagreed with our statement in the draft report that the Directors of the Offices of Drug Evaluation III and V and the Director of the Office of New Drugs refused to sign the not-approvable letter. We used the term “refused” in the draft report because, in our interviews with them, all three of the directors told us that they did not agree with the not-approvable decision and did not sign the action letter, and one of the directors told us that she had been given an opportunity to sign the letter and refused to do so. 
However, in its comments, FDA said that the directors were not asked to sign the action letter because it was known that they disagreed with the Acting Director’s decision. We have revised the report to reflect this. In its technical comments, FDA asked us to emphasize that safety concerns regarding OTC use of a drug would not be raised for prescription products because of the involvement of health practitioners. The draft report noted that prescription drugs are drugs that are safe for use only under supervision of a health care practitioner and that approved prescription drugs that no longer require such supervision may be marketed OTC. We are sending copies of this report to the Acting Commissioner of the Food and Drug Administration and other interested parties. We will also provide copies to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7119 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. To examine how the decision was made to not approve the switch of Plan B from prescription to over-the-counter (OTC), we reviewed documents, such as the Plan B OTC switch action package related to the May 6, 2004, decision from the Food and Drug Administration (FDA). We examined documents produced by FDA related to the review of the Plan B OTC switch application, including official meeting minutes and the reviews of the application from the Offices of Drug Evaluation III and V and the Office of New Drugs. FDA officials told us that documentation was not available concerning some communications within FDA. It was not possible to determine whether such communications may have concerned the Plan B OTC switch application. 
However, we acquired sufficient information from other FDA documents and our interviews with FDA officials to fully address our objectives. We interviewed FDA officials involved in the Plan B OTC switch application review, including officials from the Office of Drug Evaluation III, Office of Drug Evaluation V, Office of New Drugs, and Office of Drug Safety. We also interviewed the Acting Director of the Center for Drug Evaluation and Research (CDER), the Acting Deputy Commissioner for Operations, and the Director of the Office of Women’s Health. We interviewed members of FDA’s advisory committees that met jointly to discuss the Plan B OTC switch application—the Nonprescription Drugs Advisory Committee (NDAC) and the Advisory Committee for Reproductive Health Drugs (ACRHD)—and reviewed the transcripts of the meeting. In addition, we interviewed officials from Barr Pharmaceuticals, Inc., the company currently sponsoring the Plan B application for the prescription-to-OTC switch, and Women’s Capital Corporation (WCC), the original sponsor of the Plan B OTC switch application. To examine how the Plan B decision compares to the decisions for other proposed prescription-to-OTC switches made from 1994 through 2004, we examined the recommendations of the joint advisory committee and whether they were followed for Plan B and the other proposed OTC switch drugs that were decided from 1994 through 2004. We reviewed action letters and interviewed FDA officials and review staff as well as other outside experts involved with the Plan B OTC switch application. We also interviewed officials from the Consumer Healthcare Products Association (the association representing OTC drug manufacturers) about the prescription-to-OTC switch process. 
To determine if there were age-related marketing restrictions for prescription Plan B and other prescription and OTC contraceptives, we reviewed FDA documents and interviewed FDA officials and review staff regarding safety concerns for prescription Plan B and the safety concerns for other prescription and OTC contraceptives. We also interviewed representatives from the American College of Obstetricians and Gynecologists, the American Academy of Pediatrics, Concerned Women for America, and the Planned Parenthood Federation of America, Inc., regarding safety concerns for Plan B and other contraceptives. When the source of evidence we cited is from an interview, we identified the respondent’s title and FDA office. Whenever possible, we reviewed documents to verify testimonial evidence from FDA officials. When this was not possible, we attempted to corroborate testimonial evidence by interviewing multiple people about the information we obtained. In situations where there was no concurrence among the interviewees, we presented all the information provided. Minutes of the internal FDA meetings discussed in this report were written either by a staff member within the Office of Drug Evaluation III or by the Executive Secretariat within the Office of the Commissioner. For meeting minutes written by the office staff member, attendees either reviewed or concurred with the minutes and documented this by including their names at the end of the minutes. For summaries written by the Executive Secretariat, there was no documentation of a review or of concurrence by attendees included with these summaries. FDA officials told us that summaries from meetings within the Office of the Commissioner were not reviewed or concurred with by attendees. 
To verify data we received from FDA regarding proposed prescription-to-OTC switch decisions made from 1994 through 2004 and the outcomes of advisory committee meetings for these drugs, we compared FDA’s data with prescription-to-OTC switch data obtained from the Consumer Healthcare Products Association on OTC drug switches. Our work examined only events and communications within FDA and between FDA and the Plan B sponsors; we did not consider any communications that may have occurred between FDA officials and other executive agencies. Our work examined only FDA’s actions prior to the May 6, 2004, not-approvable letter, and we did not examine any aspects of FDA’s subsequent deliberations about Plan B. We conducted our work from September 2004 through November 2005 in accordance with generally accepted government auditing standards. A notice in the Federal Register stated that the FDA Commissioner had concluded that certain combined oral contraceptives are safe and effective for use as emergency contraception and requested submission of a new drug application (NDA) for this use. FDA approved Plan B as a prescription form of emergency contraception. A citizens’ petition for direct over-the-counter (OTC) access to Plan B was filed, requesting that FDA grant Plan B OTC status. FDA review staff within the Office of Drug Evaluation III sent Women’s Capital Corporation (WCC) a letter, denying its proposal that FDA request that it conduct pediatric studies on the use of prescription Plan B as an emergency contraceptive in exchange for extending the drug’s marketing exclusivity for 6 months, as permitted under the Federal Food, Drug, and Cosmetic Act. According to the letter to WCC and a memorandum by review staff within the Office of Drug Evaluation III, the proposed studies would have included a pharmacokinetic study and a safety study and would have used Plan B as an emergency contraceptive in subjects as young as 12 years of age. 
According to review staff within the Office of Drug Evaluation III, once a young female reached menarche, she was considered an adult with respect to contraceptive use, and the condition for using an emergency contraceptive was not unique to the pediatric population. The letter concluded that trials could be conducted in the adult population and then extrapolated to the pediatric population. A Center Director Informational Briefing was held in response to the citizens’ petition, filed on February 14, 2001. Meeting attendees included the Center for Drug Evaluation and Research (CDER) Director and Deputy Director, the Director of the Office of New Drugs, and review staff from the Offices of Drug Evaluation III and V. A briefing for the Office of the Commissioner was held to discuss the expected application to switch Plan B to OTC. Attendees included the Deputy Commissioner, the agency’s Chief Counsel, the then Director of CDER, the Director of the Office of New Drugs, and review staff from the Offices of Drug Evaluation III and V. According to the executive summary of the briefing, issues discussed included (1) the political sensitivity of the application, (2) consumer understanding of the proposed nonprescription product label, (3) the results of actual use studies to adequately address safety issues, (4) the review status of the supplemental new drug application (sNDA) upon submission, and (5) regulatory issues. The Director of CDER provided the Deputy Commissioner and FDA’s Chief Counsel with materials on the safety of emergency contraception and its mechanism of action, which were requested at the June 5, 2002, briefing. FDA officials within the Office of New Drugs and the Offices of Drug Evaluation III and V and the sponsor held a meeting in which FDA provided guidance on the Plan B OTC switch application, which was to be submitted. 
According to meeting minutes, agency officials and the sponsor discussed behavioral issues in adolescents and the possibility of a behind-the-counter option or a possible age restriction. WCC submitted an sNDA to FDA to allow Plan B to be sold OTC. FDA review staff from the Office of Drug Evaluation III determined that the sNDA was fileable and accepted it for review. FDA set a Prescription Drug User Fee Act (PDUFA) goal date of February 22, 2004, to reach a decision on the application. A teleconference was held between review staff within the Offices of Drug Evaluation III and V and the sponsor. According to minutes of this teleconference, review staff began working with the sponsor to prepare for the meeting of the joint advisory committee in December. Minutes also noted that FDA review staff suggested that the sponsor plan to address issues of age, literacy, or label comprehension regarding the administration of Plan B. Review staff within the Office of Drug Evaluation V requested additional information on the label comprehension study results from WCC. According to the official request, review staff asked for information including results for each question asked in the label comprehension study based on literacy levels; details on what criteria were used to determine if a communication objective was met; and other specific points of clarification on how responses were scored. A teleconference was held in which review staff within the Offices of Drug Evaluation III and V discussed with WCC the upcoming December 16, 2003, public meeting of FDA’s two advisory committees. According to teleconference minutes, review staff requested additional information on the labels used for the label comprehension and the actual use studies and on the label proposed for approval in the sNDA. Minutes also noted that WCC informed FDA that on September 23, 2003, a majority of its board voted to sell the marketing rights of Plan B to Barr Pharmaceuticals, Inc. 
Barr Pharmaceuticals, Inc., was finalizing the purchase of the marketing rights for Plan B from WCC and began to act as the agent for WCC for Plan B. At the request of Barr Pharmaceuticals, Inc., a teleconference was held to discuss the upcoming joint public meeting of FDA’s advisory committees. Meeting participants from FDA included review staff within the Offices of Drug Evaluation III and V. According to teleconference minutes, review staff asked Barr Pharmaceuticals, Inc., about possible age restrictions for use of Plan B. Minutes also noted that Barr Pharmaceuticals, Inc., said that it intended to offer its product to women as young as 15 years of age. Also, Barr Pharmaceuticals, Inc., agreed to explore and report back to FDA on behind-the-counter marketing and the implementation of age limitations on the sale of Plan B. A reviewer within the Office of Drug Safety completed her review of the Plan B label comprehension study, which was initially submitted to review staff within the Office of Drug Evaluation III. According to the official memorandum on the review of the label comprehension study, the reviewer concluded that making the proposed changes to the Plan B label would likely result in acceptable levels of comprehension. Review staff within the Office of Drug Evaluation V told GAO they concurred with the reviewer’s findings. A meeting was held between FDA officials within the Office of New Drugs and the Offices of Drug Evaluation III and V and the sponsor. According to meeting minutes, FDA officials informed Barr Pharmaceuticals, Inc., that the agency may not be able to present a clear regulatory path for alternate OTC distribution mechanisms for Plan B in time for the December 16, 2003, public meeting. A briefing for the Office of the Commissioner was held to discuss the upcoming public meeting of the Nonprescription Drugs Advisory Committee (NDAC) and Advisory Committee for Reproductive Health Drugs (ACRHD). 
FDA participants included the Commissioner, the Acting Director of CDER, the Director and Deputy Director of the Office of New Drugs, and review staff within the Office of Drug Safety and the Offices of Drug Evaluation III and V. According to the executive summary of the briefing, issues discussed included the sponsor’s marketing and distribution plan and the effect making Plan B available OTC might have on consumers’ behavior. At a joint meeting of the NDAC and the ACRHD, members voted 23 to 4 to recommend approving the switch of Plan B from prescription to OTC. The Director and the Deputy Director of the Office of New Drugs told GAO they were told by the Acting Deputy Commissioner for Operations and the Acting Director of CDER that the Plan B application could not be approved. These officials said they were told that this direction came from the Office of the Commissioner. The Acting Deputy Commissioner for Operations and the Acting Director of CDER told GAO they did not say this. A meeting was held between officials within the Office of the CDER Director and review staff within the Offices of Drug Evaluation III and V about the Office of the Commissioner’s position on the acceptability of the Plan B OTC switch application. According to meeting minutes, the Acting Director of CDER said that a not-approvable decision was recommended by the Office of the Commissioner based on the need for more data to more clearly establish appropriate use in younger adolescents, the need to develop a restricted distribution plan, or both. Meeting minutes also indicated that review staff also informed the Acting Director that their reviews were not yet completed and that there were additional data regarding adolescent use of Plan B. It was then agreed that review staff would complete their reviews and collect the additional data and present them to the Commissioner and the Acting Director of CDER some time in February. 
Review staff within both Offices of Drug Evaluation III and V later noted in their completed reviews of the Plan B OTC switch application that they were told at this meeting that the decision on the Plan B application would be made at a level higher than the offices of drug evaluation. A teleconference was held between review staff from the Office of Drug Evaluation V and the sponsor. According to meeting minutes, review staff informed the sponsor that a meeting was held with CDER management, including the Acting Director of CDER and the Director and Deputy Director of the Office of New Drugs, in which “some issues” were raised that would require review staff to “provide additional information and have additional discussions with CDER upper management.” Minutes also noted that review staff told the sponsor they would not be discussing labeling revisions at that time and that they had been instructed by CDER management to complete their written reviews regarding the OTC switch application. A memorandum from the Director of the Office of Drug Evaluation V indicated that she was in agreement with the favorable assessment of review staff and the majority votes by members of the joint advisory committee. Her memorandum concluded that adequate data had been submitted to approve Plan B for OTC marketing with certain product-labeling modifications—such as strengthening the message that Plan B is not for regular contraceptive use—included to address concerns raised at the public meeting and in the agency’s reviews. A meeting was held between FDA officials within the Office of New Drugs and the Offices of Drug Evaluation III and V and Barr Pharmaceuticals, Inc./WCC. According to meeting minutes, FDA officials told the sponsor that the decision on the application would be made at a level higher than the Offices of Drug Evaluation. The Director of the Office of New Drugs told the sponsor that such a high-level decision was not typical of CDER’s procedures for drug approvals. 
The minutes also noted that review staff within the Offices of Drug Evaluation were in the process of completing their reviews and would forward them with their final recommendations to high-level management. Meeting minutes also indicated that FDA officials told the sponsor that they would need to request a meeting directly with the Office of the Center Director or the Office of New Drugs to understand high-level management’s concerns. In addition, meeting minutes noted that FDA officials told the sponsor that the Office of the Commissioner and the Acting Director of CDER had raised concerns as to whether there were adequate data to establish that minors (i.e., those under 18 years of age) would use Plan B appropriately in the absence of a learned intermediary. Potential options that were suggested by FDA and CDER management included the possible need to (1) collect additional data, perhaps from another actual use study targeted to minors, or (2) impose an age restriction on the OTC sale of the product. Review staff within the Office of Drug Evaluation III requested that the sponsor reanalyze the adolescent data of the Plan B actual use study. According to the official request, staff asked for a “[s]ummary presentation of the Actual Use data from the participants in the less than 18 years of age subset, including comparisons to the older subset within the study.” FDA confirmed that it had extended the PDUFA goal date for a decision on the Plan B OTC switch application for 90 days due to the submission of the requested adolescent data from the actual use study by the sponsor. The extended PDUFA goal date was May 21, 2004. A briefing was held during which review staff within the Offices of Drug Evaluation III and V presented their analysis of additional summary data to the Commissioner on the use and behavior of adolescents in association with increased access to emergency contraceptive pills. 
Other attendees included the Acting Deputy Commissioner for Operations and the Acting Director of CDER. According to meeting minutes, included in the presentation were the review staff’s recommendations that Plan B have an OTC marketing status without restriction. The meeting minutes also noted that the Commissioner raised concerns regarding adolescents, including the potential for changes in future contraceptive behaviors and the potential benefits of counseling from a learned intermediary for younger adolescents. In addition, the meeting minutes noted that CDER was directed by the Commissioner to work with the sponsor on a marketing plan to limit the availability of Plan B in an OTC setting and to consider the most appropriate ages that should have OTC access restricted. The Commissioner requested a “rapid action” on the application. Review staff within the Offices of Drug Evaluation III and V met with the Acting Deputy Commissioner for Operations, the Acting Director of CDER, and the Director and the Deputy Director of the Office of New Drugs. According to a reviewer’s memorandum, in part, during this meeting, the Acting Deputy Commissioner for Operations expressed her and the Commissioner’s concerns regarding adolescents and the potential for adverse behaviors resulting from increased access to Plan B. The Acting Director of CDER concurred with these concerns. This was the original PDUFA goal date for the initial Plan B OTC switch application. Barr Pharmaceuticals, Inc., completed acquisition of the marketing rights for Plan B from WCC. Barr Pharmaceuticals, Inc., submitted an amendment to its sNDA, proposing a dual-marketing strategy, making Plan B OTC for women 16 years of age and older and prescription only for women under 16 years of age. 
The Deputy Director of the Office of Drug Evaluation III completed her review of the Plan B OTC switch application and recommended that Plan B be approved for use as an emergency contraceptive in the OTC setting without age restriction. The review concluded there were sufficient data on the safety and effectiveness of Plan B to approve its use in the OTC setting. The Director of the Office of New Drugs issued his review of the Plan B application and concurred with the recommendations of the offices of drug evaluation that the sponsor had provided adequate data to demonstrate that Plan B could be safely, effectively, and appropriately used by women of childbearing potential for the indication of emergency contraception without a prescription. He recommended that this application be approved to permit availability of Plan B without a prescription and without age restriction. The Acting Director of CDER contacted the Director of the Office of Pediatric Therapeutics, within the Office of the Commissioner, via e-mail requesting assistance on language regarding cognitive development among adolescents. According to internal FDA e-mails, the Director of the Office of Pediatric Therapeutics responded that she would consult with another official with a background in developmental pediatrics and would follow up with “behavioral science information as to why one cannot extrapolate decision making on safety issues” from older populations to younger adolescents. According to internal FDA e-mails, the Director of the Office of Pediatric Therapeutics provided the Acting Director of CDER with information on brain development and the maturation of higher-order thinking among adolescents 10 years to 21 years of age. 
In her e-mail to the Acting Director, the Director of the Office of Pediatric Therapeutics included the statement that “during early adolescence (10-13) there is an emergence of impulsive behavior without the cognitive ability to understand the etiology of their behavior.” According to teleconference minutes, the Acting Director of CDER called Barr Pharmaceuticals, Inc., officials to inform them of the not-approvable action and asked permission to release the not-approvable letter. According to FDA regulations, without consent of the sponsor, the agency cannot publicly release data or information contained in an application before an approval letter is issued. Minutes noted that the Acting Director told sponsor officials that (with their permission) he would conduct a press interview to discuss the not-approvable action and that the staff’s disagreement with the not-approvable action would be acknowledged publicly. FDA issued a not-approvable letter, denying Plan B OTC marketing status, citing a lack of adequate data regarding safe use among younger adolescents. The letter also stated that FDA was not able to conduct a complete review of the dual-marketing strategy in the amendment to the sNDA because of the absence of draft product labeling describing how Barr Pharmaceuticals, Inc., would comply with both the prescription and OTC labeling requirements in a single package. In addition to the contact named above, Martin T. Gahart, Assistant Director; Cathleen Hamann; Julian Klazkin; Gay Hee Lee; and Deborah J. Miller made key contributions to this report.
In April 2003, Women's Capital Corporation submitted an application to the Food and Drug Administration (FDA) requesting that the marketing status of its emergency contraceptive pill (ECP), Plan B, be switched from prescription to over-the-counter (OTC). ECPs can be used to prevent an unintended pregnancy when contraception fails or after unprotected intercourse, including cases of sexual assault. In May 2004, the Acting Director of the Center for Drug Evaluation and Research (CDER) issued a "not-approvable" letter for the switch application, citing safety concerns about the use of Plan B in women under 16 years of age without the supervision of a health care practitioner. Because the not-approvable decision for the Plan B OTC switch application was contrary to the recommendations of FDA's joint advisory committee and FDA review staff, questions were raised about FDA's process for arriving at this decision. GAO was asked to examine (1) how the decision was made to not approve the switch of Plan B from prescription to OTC, (2) how the Plan B decision compares to the decisions for other proposed prescription-to-OTC switches from 1994 through 2004, and (3) whether there are age-related marketing restrictions for prescription Plan B and other prescription and OTC contraceptives. To conduct this review, GAO examined FDA's actions prior to the May 6, 2004, not-approvable letter for the initial application. On May 6, 2004, the Acting Director of CDER rejected the recommendations of FDA's joint advisory committee and FDA review officials by signing the not-approvable letter for the Plan B switch application. While FDA followed its general procedures for considering the application, four aspects of FDA's review process were unusual. First, the directors of the offices that reviewed the application, who would normally have been responsible for signing the Plan B action letter, disagreed with the decision and did not sign the not-approvable letter for Plan B. 
The Director of the Office of New Drugs also disagreed and did not sign the letter. Second, FDA's high-level management was more involved in the review of Plan B than in those of other OTC switch applications. Third, there are conflicting accounts of whether the decision to not approve the application was made before the reviews were completed. Fourth, the rationale for the Acting Director's decision was novel and did not follow FDA's traditional practices. The Acting Director stated that he was concerned about the potential behavioral implications for younger adolescents of marketing Plan B OTC because of their level of cognitive development and that it was invalid to extrapolate data from older to younger adolescents. FDA review officials noted that the agency has not considered behavioral implications due to differences in cognitive development in prior OTC switch decisions and that the agency previously has considered it scientifically appropriate to extrapolate data from older to younger adolescents. The Plan B decision was not typical of the other 67 proposed prescription-to-OTC switch decisions made by FDA from 1994 through 2004. The Plan B OTC switch application was the only one during this period that was not approved after the advisory committees recommended approval. The Plan B action letter was the only one signed by someone other than the officials who would normally sign the letter. Further, there are no age-related marketing restrictions for any prescription or OTC contraceptives that FDA has approved, and FDA has not required pediatric studies for them. FDA identified no issues that would require age-related restrictions in the review of the original prescription Plan B new drug application. 
In its comments on a draft of this report, FDA disagreed with GAO's finding that high-level management was more involved with the Plan B OTC switch application than usual, with GAO's discussion about when the not-approvable decision was made, and with GAO's finding that the Acting Director of CDER's rationale for denying the application was novel. However, GAO found that high-level management's involvement for the Plan B decision was unusual for an OTC switch application and FDA officials gave GAO conflicting accounts about when they believed the decision was made. The Acting Director acknowledged to GAO that considering adolescents' cognitive development as a rationale for a not-approvable decision was unprecedented for an OTC application, and other FDA officials told GAO that the rationale differed from FDA's traditional practices.
Military commissaries have existed for many years and provide a nonpay benefit to U.S. military personnel. They sell tax-free food and household items at cost plus a 5-percent surcharge. DeCA is responsible for operating DOD’s worldwide commissary system. It was established in October 1991 to consolidate the four separate commissary systems then operated by the military services. Headquartered at Fort Lee, Virginia, DeCA is organized into 7 regions and employs about 18,000 people. It operates a system of 309 stores—209 in the continental United States and 100 overseas. According to DeCA, it is the ninth-largest food retailer in the United States and has total annual sales of approximately $5 billion, with U.S. stores generating about 75 percent of the sales. The eligible commissary customer base totals over 11 million people, consisting of about 3.8 million active duty personnel and their family members; about 2.3 million Selected Reservists and their spouses and children; and almost 4.9 million retired military personnel, a figure that includes their spouses and children. In addition, the customer base includes a number of “Gray Area” retirees, some members of the Individual Ready Reserve (IRR), Medal of Honor recipients, veterans with a 100-percent disability, and civil servants (including diplomats) and their dependents overseas. The active duty community has unlimited access privileges to the commissaries, as do most other members of the customer base. Reservists, on the other hand, are authorized (1) 12 visits per year if they have earned a minimum number of points creditable for retirement and (2) access during any period of active duty service. According to DeCA, military retirees account for almost half of its customer patronage. DeCA employees in the continental United States are not authorized to shop in commissaries. 
Prior to 1986, Selected Reservists were allowed commissary privileges only during periods of active duty—normally 14 days during each year—but not during weekend drills. In 1986, Selected Reservists were authorized to use the commissary once for each day of active duty training, up to a maximum of 14 times. These commissary visits could be used anytime during the 1-year period following the completion date of active duty training. The current reservist access policy was established in 1990. That year, legislation gave all members of the Ready Reserve—Selected Reservists and certain members of the IRR—who earn at least 50 retirement points per year the right to a minimum of 12 commissary visits per year. “Gray Area” retirees were also authorized 12 visits each calendar year. Reservists entitled to commissary access are issued a special card that is punched each time a purchase is made at a commissary. Since 1993, DOD has proposed changing commissary policy three times to allow reservists unlimited access; however, neither DOD nor DeCA has performed any studies to examine or analyze the potential financial impact of these suggested policy changes. None of the proposals were adopted by Congress. The first was submitted for fiscal year 1994 and would have granted reservists, including “Gray Area” retirees, the same unlimited access as active duty personnel. For fiscal year 1996, DOD proposed that “Gray Area” retirees be granted permanent, unlimited commissary access and that the other reservists be provided unlimited access for a 1-year test period. For fiscal year 1997, DOD proposed a 1-year test of unlimited benefits for all reservists (including dependents) at one or more areas in the United States. No plans have been developed to conduct the field tests proposed in fiscal years 1996 and 1997. 
Officials in DOD and DeCA and representatives of industry and military associations hold varying opinions concerning unlimited commissary access for reservists; however, none of the organizations or offices we visited had performed any analysis or conducted any studies to support their particular views. DOD officials who are responsible for personnel and reserve affairs proposed the initiative and believed that the effects on appropriated funding levels would be minor or nonexistent. They stated that the proposed policy change was the “right thing to do,” and would not likely affect private sector grocers adversely. On the other hand, an official in the DOD Comptroller’s office expressed reservations about the policy change because it could potentially lead to the hiring of additional personnel, thereby increasing the need for appropriated funding. DeCA officials stated that they did not know what the financial impact would be and that the agency would carry out any policy established by DOD and Congress. Representatives of the Reserve Officers’ Association and the Military Coalition strongly support unlimited commissary access for reservists on the basis that this change would take away what they perceive as a “second class citizen” stigma for reservists when compared to the active military. These groups also said that the proposed change would not greatly affect private sector supermarkets. Representatives from the Food Marketing Institute, a food industry association and lobby group, told us that their organization did not support the proposed change, had lobbied strongly against it, and believed the food industry would lose customers and sales if reservists were given unlimited commissary access. The Congressional Budget Office is conducting an overall review of DOD’s resale operations, which include the exchange systems; commissaries; and morale, welfare and recreation activities. An initial report is expected in late 1996 or early 1997. 
This work is being performed for the House Committees on Budget and Government Reform and Oversight. Broad objectives include examining overall costs, identifying any hidden costs, and determining the worth or value of these activities to their patrons. Commissary funding is obtained from two primary sources—an annual appropriation and a surcharge added to each sale. For fiscal years 1992 to 1996, DeCA’s total obligations have averaged about $1.31 billion annually, about $1 billion (over 70 percent) in appropriated funds and $315 million from the surcharge. Because appropriated funds pay for DeCA’s personnel costs, there is a valid concern that the need for appropriated funds would increase if the customer base expanded and sales increased. Appendix II presents additional data on commissary funding. The largest funding source is the annual congressional appropriation. DeCA has received an average of about $1 billion annually for the 5 years of its existence. Appropriated funds are used primarily to cover two operating expenses: (1) labor costs (civilian employee salaries and personnel contracts) and (2) the transportation of U.S. goods to overseas stores. Appropriated fund support in 1996 totaled almost $879 million. Of this total, $595 million went toward labor costs, $149 million for transportation, and $135 million for nonpersonnel administrative and other expenses. Military commissaries are nonprofit organizations that sell merchandise at cost plus a 5-percent surcharge. The surcharge is placed into the revolving Surcharge Collections Fund from which DeCA has obligated, on average, about $315 million annually. The surcharge is used to pay for (1) facilities, maintenance, and operating supplies, such as paper bags and packaging materials; (2) construction of new commissaries and the renovation of older stores; and (3) some equipment, including data processing items. 
The actual financial impact, if any, of allowing reservists unlimited access would depend on the extent of additional commissary sales and increased costs generated as a result of programmatic changes DeCA might make to accommodate any increase in sales. These changes, such as additional hiring and store expansion or renovation, could take some time to occur and to have a measurable impact on operations. DOD proposed in fiscal year 1996 that a nationwide field test be conducted for a 1-year period to evaluate the impact of unlimited access by reservists. In fiscal year 1997, DOD suggested another test at one or more areas in the United States. Both suggestions would require temporarily granting access to commissaries to all reservists or, at least, those in a certain area or areas. The primary concern with testing unlimited accessibility is that withdrawal of the benefit later could have a negative impact on morale and on the perception of the benefits of military service. We believe DOD and DeCA could develop reliable estimates of the potential impact of this policy change using a study and data analysis approach and that such an approach would be more appropriate than a field test. Key elements of such a study are discussed in the following sections. The first step in analyzing the impact of opening the commissaries to reservists is to determine how many personnel in each category (e.g., “Gray Area” retirees, Selected Reservists, IRR, active duty, etc.) use the commissaries. This information could be developed by DeCA using recognized survey techniques. While DOD has conducted two surveys involving reservists in the past several years, neither has provided adequate or sufficient information regarding reservist usage. A 1992 DOD Reserve Components Survey was done to gain insight into the utilization of and satisfaction with military facilities by reserve officers, enlisted personnel, and their spouses. 
It did not attempt to estimate the overall commissary patronage level attributable to reservists or project their usage to the system as a whole. This survey disclosed that 39 percent of reservists reported that they used the commissary system; however, 61 percent said they did not. Sixty-eight percent of all reservists surveyed cited distance from the commissary as a factor that limited their usage of the system, while the policy restricting reservists’ access to commissaries was cited by 25 percent. A 1994 Commissary Patron Demographic Survey developed information to describe certain aspects of the typical commissary customer and estimated that reservists represented about 5 percent of commissary patronage. However, according to DeCA officials as well as our analysis, the survey results were not projectable to a systemwide perspective because of flaws in methodology. Specifically, (1) the distribution of commissaries selected for the survey was based on the distribution of the general population—not the population distribution of reservists in the United States—and (2) the survey was conducted during a holiday period that did not represent normal operations. To identify locations where increased patronage is most likely, DeCA needs to analyze the locations of commissaries in relation to concentrations of reservists and the distances to individual stores. Reservists are located throughout the United States, but only 209 stores were available within the continental United States in 1996. This means that many reservists are not located close to a store and, therefore, may not be able to use the system on a regular basis. Analyzing demographic data that reflect the location of reservists in relation to available commissary stores and baseline estimates of existing patronage levels in the overall system should identify those geographic areas where increased sales are likely or possible. The distribution of stores is shown in table 1. 
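The weighting flaw noted above—allocating survey sites by the general population’s regional distribution rather than the reservist population’s—can be illustrated with a small numerical sketch. All region labels, population shares, and usage rates below are hypothetical, invented purely to show how the wrong weights skew a systemwide estimate:

```python
# Hypothetical illustration of the survey-weighting flaw. None of these
# figures are DeCA or DOD data; they are invented for demonstration.

# Share of the general population and of reservists in two notional regions.
general_share = {"region_a": 0.8, "region_b": 0.2}
reservist_share = {"region_a": 0.3, "region_b": 0.7}

# True fraction of reservists in each region who patronize commissaries.
usage_rate = {"region_a": 0.50, "region_b": 0.10}

def weighted_estimate(weights, rates):
    """Systemwide patronage estimate under a given set of regional weights."""
    return sum(weights[r] * rates[r] for r in rates)

true_usage = weighted_estimate(reservist_share, usage_rate)
biased_usage = weighted_estimate(general_share, usage_rate)

print(f"true systemwide reservist usage:      {true_usage:.2f}")
print(f"estimate with general-pop weighting:  {biased_usage:.2f}")
```

In this invented example the general-population weighting nearly doubles the apparent usage rate, because it oversamples the region where reservist patronage happens to be high; with different numbers the bias could just as easily run the other way.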
Stores likely to experience sales increases could be identified by analyzing the demographic data and determining the baseline estimates of existing patronage levels. Potential sales increases could be estimated by several approaches or a combination thereof. For example, (1) reservists could be surveyed by interview at commissary stores determined to be most likely to experience increased patronage, (2) reservists could be surveyed on a nationwide basis to gain insight into any anticipated changes in buying patterns and commissary usage, and (3) commissary store managers could be interviewed for their views and estimates of potential increases (if their estimates were thought to be reliable). The costs and reliability of the results would vary under each approach; therefore, a judgment weighing the tradeoffs would be necessary to determine the specific approach to use. DOD and DeCA need to develop a detailed understanding of how increased sales would affect store costs, in particular labor costs, and commissary store workloads (sales levels, customer volumes, store hours, etc.). Such an understanding is important to the process of projecting the level of increase that would likely trigger actions affecting appropriated funding levels—additional personnel hours, overtime, and additional hiring. A major supermarket chain operating in the Washington, D.C., area uses a methodology that involves the comparison of historical store sales and payroll hours to estimate and project the effect of increased sales on personnel and other costs at new stores or at existing stores with changing demographics. For example, comparing the personnel and other costs of a store with annual sales of $20 million to one with 10-percent higher sales, that is, one with sales of $22 million, provides valuable and reliable indications of the additional number of personnel hours needed to support a 10-percent increase at a store with historical sales of $20 million. 
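The comparison methodology just described reduces to simple arithmetic. The sketch below uses the $20 million and $22 million store sizes from the example above, but the payroll-hour counts and hourly wage are invented assumptions; it shows only the shape of the calculation, not DeCA’s or any chain’s actual method:

```python
# A minimal sketch of projecting personnel costs from historical store
# comparisons. The payroll hours and wage rate are hypothetical.

def marginal_hours_per_dollar(base_sales, base_hours, high_sales, high_hours):
    """Estimate additional payroll hours per additional sales dollar by
    comparing two similar stores with different sales volumes."""
    return (high_hours - base_hours) / (high_sales - base_sales)

def project_personnel_cost(store_sales, pct_increase, hours_per_dollar,
                           hourly_rate):
    """Project the added annual personnel cost of a given percentage
    sales increase at one store."""
    added_sales = store_sales * pct_increase / 100
    added_hours = added_sales * hours_per_dollar
    return added_hours * hourly_rate

# Hypothetical comparison: a $20M store using 40,000 payroll hours per year
# versus a $22M store using 43,000 hours per year.
rate = marginal_hours_per_dollar(20_000_000, 40_000, 22_000_000, 43_000)

# Range of estimates for projected reservist-driven sales increases.
for pct in (5, 10, 15, 20):
    cost = project_personnel_cost(20_000_000, pct, rate, 15.0)
    print(f"{pct:>2}% sales increase -> ${cost:,.0f} added personnel cost")
```

Summing such store-level projections across regions would yield the range of appropriated-fund impacts the report describes.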
Such a methodology could be applied to the commissary system to develop a range of estimates of the impact on personnel and other costs generated from various percentages of projected increased patronage and sales from reservists, that is, 5, 10, 15, 20, etc. The increased personnel costs, if any, calculated at store and regional levels for each projected sales increase would roughly equal the potential impact range on the level of appropriated fund support resulting from granting reservists unlimited commissary access. We recommend that the Secretary of Defense ensure that any future legislative proposal to expand commissary access for military reservists be supported by a methodologically sound analysis that estimates the potential impact on appropriated fund support for the commissary system. DOD concurred with our recommendation and stated that DeCA and the Office of the Assistant Secretary of Defense (Reserve Affairs) will jointly develop a survey targeted to ascertain the impact expansion of commissary access to military reservists will have on the commissary system. According to DOD, the results of the survey and subsequent analysis will be used to determine the feasibility of future legislative proposals to expand commissary access to military reservists. Where appropriate, we have incorporated DOD’s comments and other points of clarification throughout the report. Appendix I explains our scope and methodology. Appendix II shows charts depicting budget and obligation information for DeCA since it was established. Appendix III contains a reproduction of DOD’s comments. The major contributors to this report are listed in appendix IV. We are sending copies of this report to interested congressional committees and Members of Congress; the Secretary of Defense; the Director, DeCA; and the Director, Office of Management and Budget. We will also make copies available to others on request. 
Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Our study of the Department of Defense (DOD) policy regarding access to the commissary system by military reservists and proposed changes to allow unlimited access was conducted primarily at offices in the Office of the Secretary of Defense in the Pentagon and at Defense Commissary Agency (DeCA) headquarters at Fort Lee, Virginia. We interviewed DeCA officials and reviewed and analyzed financial data for headquarters, regions, and commissaries within those regions to obtain information regarding DeCA’s budget and funding sources and the application of funds from each source. We did not verify the accuracy of the data provided by these officials. At Fort Lee, Virginia, we toured the commissary store and interviewed the manager to obtain information on commissary prices, sales, and operations. We interviewed DOD officials in the offices of the Assistant Secretary of Defense (Reserve Affairs), the Assistant Secretary of Defense (Force Management and Personnel), and the DOD Comptroller’s Office to gain information on the rationale for granting reservists unlimited commissary access and their views on the issue. We were also briefed by representatives of the Reserve Officers’ Association, the Military Coalition, and the Food Marketing Institute. In addition, we contacted a local grocery chain to discuss the methods it uses to evaluate and project the effects of sales increases on its operations and costs. We conducted our review from April 1996 to December 1996 in accordance with generally accepted government auditing standards. Major contributor to this report: Charles W. Perdue. 
Pursuant to a legislative requirement, GAO reviewed proposals to change the Department of Defense (DOD) policy that limits military reservists' access to commissary stores, focusing on the: (1) evolution of the policy on military reservists' access to the commissaries and proposals to change that policy; (2) sources of the Defense Commissary Agency's (DCA) funding; and (3) information needed to analyze the impact on appropriated funds of granting military reservists unlimited access to the commissary system. GAO found that: (1) commissary access for military reservists before 1986 was limited to a maximum of 14 days and was authorized only during periods of active duty training; (2) since 1990, reservists have been authorized to earn 12 visits a year to the commissary system in addition to access during any period of active duty service; (3) DOD has submitted three proposals since 1990 to grant reservists unlimited access, but none have been adopted by Congress because of concerns about the impact such a change might have on the level of appropriated funds and the concerns expressed by civilian grocery providers about the impact on their businesses; (4) the commissary system is funded primarily from an annual appropriation that has averaged about $1 billion for fiscal years (FY) 1992 through 1996; (5) the other funding source is the 5-percent surcharge that is added to each sale in all commissary stores, which has provided an average of about $315 million over the same period; (6) while DOD has proposed legislation to grant reservists unlimited commissary access, it has not developed estimates of the potential financial impacts of such a policy change; (7) potentially, any increase in the commissary customer base, such as granting unlimited access to reservists, could increase the sales and overall workload of the commissary system and increase personnel costs, which could, in turn, increase the level of appropriated funds needed for commissary operations or, at 
least, cause funding levels to be higher than they would otherwise be; (8) DOD's FY 1996 and 1997 proposals called for variations of a 1-year field test to identify the effects of increased access for reservists on commissary operations; (9) field tests would give specific individuals unlimited commissary access for 1 year to develop impact studies; (10) such a test runs the risk of appearing to withdraw a benefit following the test's conclusion; (11) GAO believes that a methodologically sound study, using data that could be developed by DOD and DCA, could provide reliable estimates of the financial impact of granting reservists unlimited commissary access; and (12) key elements of such a study would be to: (a) establish baseline data by determining the current level of reservist patronage of the commissary system; (b) correlate commissary locations in the United States with reservist population concentrations to identify locations with the potential to experience increased patronage; and (c) estimate the effects of increased commissary sales/workloads on operating costs and the level of appropriated fund support needed.
In 1980, the Congress passed the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), creating the Superfund program to clean up highly contaminated hazardous waste sites. CERCLA authorizes EPA to compel the parties responsible for the contaminated sites to clean them up. The law also allows EPA to pay for cleanups and seek reimbursement from the parties. EPA places sites that it determines need long-term cleanup action on its National Priorities List (NPL). As of early 1999, there were 1,264 sites on or proposed for the NPL. Another 182 sites had completed the cleanup process or were determined not to need cleanup and had been deleted from the NPL. Once listed, the sites are further studied for risks, and cleanup remedies are chosen, designed, and constructed. EPA relies extensively on contractors to study site conditions and conduct cleanups. Cleanup actions fall into two broad categories: removal actions and remedial actions. Removal actions are usually short-term actions designed to stabilize or clean up hazardous sites that pose an immediate threat to human health or the environment. Remedial actions are usually longer term and more costly actions aimed at permanent remedies. According to a 1998 report by the Environmental Law Institute, all 50 states have established their own cleanup programs for hazardous waste sites. In addition to handling less dangerous sites, some of the state programs can handle highly contaminated sites, whose risks could qualify them for the Superfund program. Some states initially patterned their cleanup programs after the Superfund program but over the years, in an effort to clean up more sites faster and less expensively, have developed their own approaches to cleaning up sites. 
States accomplish cleanups under three types of programs: (1) voluntary cleanup programs that allow parties, who are often interested in increasing sites’ economic value, to clean them up without state enforcement actions; (2) brownfields programs that encourage the voluntary cleanup of sites in urban industrial areas to enable their reuse; and (3) enforcement programs that oversee the cleanup of the most serious sites and force uncooperative responsible parties to clean up their sites. States generally use their voluntary and brownfields programs to clean up less complex sites by offering various incentives to responsible parties, such as reduced state oversight. States maintain that these programs accomplish site cleanups quickly and efficiently. Some states also maintain cleanup funds to pay all or a portion of the costs of cleanups at sites for which responsible parties able to pay for full cleanups cannot be found. The states vary greatly in the resources that they have devoted to cleanups. For example, the 1998 Environmental Law Institute study determined that states had cleanup funds totaling $1.4 billion as of the end of the states’ 1997 fiscal year, with 6 states having fund balances of $50 million or more and 26 states having fund balances of less than $5 million. The study also reported that states spent a total of $565 million on their cleanup programs in fiscal year 1997, with 2 states spending $50 million or more and 27 states spending less than $5 million. Even though cleanups have taken a long time to accomplish, if the Superfund program maintains its current pace, it will complete the construction of cleanup remedies at the great majority of current NPL sites within the next several years. In our March 1997 report, we said that cleanups completed in 1996 took an average of 10.6 years. Much of the time taken to complete cleanups was spent during the early planning phases of the cleanup process, during which cleanup remedies are selected. 
We said that less time had been spent on actual construction work at sites than on the selection of remedies. Now, however, most NPL sites have been in the cleanup process for a long time and have moved beyond the remedy selection phase. Last year, we reported that EPA had completed the selection of remedies at about 70 percent of the NPL sites as of the end of fiscal year 1997. It had plans to complete, by the end of fiscal year 1999, remedies for about 67 percent of the federally owned or operated sites and 95 percent of the nonfederal sites that were listed as of the end of fiscal year 1997. EPA reports that it has completed the construction of cleanup remedies at 585 sites as of January 1999; will complete construction at 85 sites in each of fiscal years 1999 and 2000; and will finish a total of 1,200 sites by 2005. Groundwater cleanups will continue at many of these sites after the completion of remedy construction. These completion rates reflect EPA’s decision to make the completion of construction at existing sites the Superfund program’s top priority and to reduce new entries into the program. About 89 percent of the NPL sites were placed on the list between 1982 and 1990. Figure 1 shows the number of sites listed on the NPL and the number of sites where the construction of the cleanup remedy was completed during the years 1986 through 1998. Under the Superfund program, in addition to its remedial work, EPA has conducted removals at 595 NPL sites and 2,591 other contaminated sites. Cleanup work has also been conducted at sites where construction of the final cleanup remedy has not yet been completed. At the request of this committee, we are conducting a review to determine the extent of this ongoing cleanup activity. For several years, GAO has included the Superfund program on its list of federal programs that pose significant financial risk to the government and the potential for waste and abuse. 
We included Superfund on the list because of (1) problems with the management of cleanup contractors, (2) insufficient recovery of cleanup costs from responsible parties, and (3) the absence of risk-based priorities for site cleanups. EPA has corrected some of these problems, but enough remain that we have not yet been able to remove Superfund from the high-risk list. I would like to review these problems and EPA’s response. First, we raised concerns about several contracting practices. We said that EPA had a backlog of more than 500 audits of its Superfund contracts. The purpose of these audits is to evaluate the adequacy of contractors’ policies, procedures, controls, and performance. The audits are necessary for effective management and are a key tool for deterring and detecting waste and abuse. The agency has now almost eliminated its backlog of contract audits. We also found that EPA was approving contractors’ cleanup cost proposals without estimating what the work should cost. As a result, the agency could not negotiate the best contract price for the government. In response, EPA is now developing its own cost estimates and using them to guide its price negotiations with contractors. However, EPA was still having problems developing accurate estimates in about half the cases we recently reviewed. Furthermore, many of the cost estimators in the EPA regions told us that they lacked the experience and historical data they needed to do a better job at developing these estimates. EPA has requested the U.S. Army Corps of Engineers, an agency with extensive contracting experience, to conduct an assessment of EPA’s cost-estimating practices and recommend potential improvements. The assessment is still ongoing and will be completed in mid-1999. Unless EPA ensures that its regions implement and sustain corrective measures resulting from this review, problems could recur. 
EPA has taken similar corrective actions in the past, yet we continue to find problems with estimates. Lastly, with respect to contracting, we reported that EPA had difficulty controlling the overhead, or program support costs, of its contractors. To ensure that it had enough contractors to conduct cleanups, EPA hired a large number of contractors—more, it turned out, than it actually needed. Even though it did not have enough cleanup work to keep them all busy, it had to pay their overhead costs (i.e., the costs of maintaining the capacity to respond to work assignments, such as office space). Although EPA cut in half the number of contractors that it keeps in place, our recent work indicates that this reduction may not have been enough. We found that, for the majority of contracts we reviewed, EPA continues to pay overhead costs ranging from 16 percent to 76 percent of the overall contract’s costs, exceeding EPA’s 11 percent target. In addition, persistent high overhead costs and uncertainty about the future size of the program raise broader questions about the type and the number of contracts EPA really needs to have in place. Even though CERCLA makes parties who are responsible for contaminated sites liable for cleanup costs, we have repeatedly reported that EPA has not charged responsible parties for certain costs of operating the cleanup program, mainly indirect program costs such as personnel and facilities. EPA has excluded about $3 billion—about 20 percent of the $15 billion it has spent on Superfund through fiscal year 1997—in indirect costs from final settlements with responsible parties. In the early years of the program, EPA took a conservative approach to allocating indirect costs to private parties because it was uncertain which indirect costs the courts would agree were recoverable if parties legally challenged EPA. The agency could lose the opportunity to recover at least a half billion dollars more if it does not soon reverse this practice. 
Recently, Superfund program officials have developed a new way to determine recoverable indirect costs that could increase EPA’s cost recoveries, but the Superfund program has not yet used this new method because it is waiting for approval from EPA and the Justice Department. The final Superfund issue that we discussed in our high-risk series is the absence of a system for prioritizing sites for cleanup based on the risk they pose to human health and the environment. EPA has partially corrected this problem. In 1995, it created the National Prioritization Panel to help it set funding priorities for sites at which remedies had been selected and that were ready for cleanup. The panel, which is composed of regional and headquarters cleanup managers, ranks all of the sites ready for cleanup construction nationwide on the basis of the health and environmental risks and other project considerations, such as cost-effectiveness. EPA then approves funding for projects on the basis of these priority rankings. EPA, however, does not use relative risk as a major criterion when deciding which of the eligible sites to place on the NPL. In our discussions with EPA managers responsible for assessing sites for Superfund consideration, we found that the agency relies on the states to choose which of the eligible sites to refer to EPA for placement on the NPL. States refer sites after selecting those that they will address through their own enforcement or voluntary cleanup programs. The EPA cleanup managers with whom we talked expect that future sites placed on the NPL will not necessarily be the most risky but, rather, those that the states find to be large, complex, and therefore costly, or those without responsible parties willing and able to pay for the cleanup. Because EPA does not usually track the status of cleanups that take place outside of the Superfund program, EPA does not know if the worst sites in the nation are being addressed first. 
Some EPA regions are encouraging their states to voluntarily provide EPA with information on the cleanup status of the sites that the states are addressing and that EPA considers as potentially posing significant risk. In addition to our work on the high-risk aspects of the Superfund program, we have conducted detailed analyses of spending in the program. In summary, we have reported that the share of Superfund expenditures that go to cleanup contractors for the study, design, and implementation of cleanups increased from fiscal years 1987 through 1996, but declined in fiscal year 1997. We also reported that between fiscal years 1996 and 1997, EPA’s Superfund costs for administration and support activities correspondingly increased (see fig. 2). As you know, we are currently conducting additional analysis of the Superfund program’s expenditures for this Committee and others. We plan to report on the results of this work in May. EPA’s inventory of potential NPL sites contains sites that have been awaiting a decision for several years or more on whether they should be listed on the NPL. EPA and state officials believe that many of these sites need cleanup work, but the respective cleanup responsibilities of EPA and the states have not been established. As of the end of fiscal year 1997, EPA’s Superfund database indicated that the risks of over 3,000 sites had been judged on the basis of preliminary evaluations to be serious enough to make the sites potentially eligible for the NPL. EPA classified these sites as “awaiting an NPL decision.” Information about the nature and the extent of the threat that these sites pose to human health and the environment, the extent of states' or EPA's cleanup actions at the sites, and the states' or EPA's cleanup plans for the sites is important to determining the future size of the Superfund program. 
We surveyed EPA regions, other federal agencies, and the states to (1) determine how many of the over 3,000 sites remain potentially eligible for the NPL; (2) identify the characteristics of these sites, including their health and environmental risks; (3) determine the status of any actions to clean up these sites; and (4) collect the opinions of EPA and other federal and state officials on the likely final disposition of these sites, including the number of sites that are expected to be placed on the NPL. We reported the results of our surveys in two November 1998 reports. On the basis of our surveys, we determined that 1,789 of the 3,036 sites that EPA's database classified as “awaiting an NPL decision” in October 1997 are still potentially eligible for placement on the list. EPA, other federal agency, and state officials responding to our survey said that many of these sites presented risks to human health and the environment. According to these officials:
- about 73 percent of the sites have caused contamination in groundwater, and another 22 percent could contaminate groundwater in the future;
- about 32 percent of the sites caused contamination in drinking water sources, and another 56 percent could contaminate drinking water sources in the future;
- 96 percent of the potentially eligible sites are located in populated areas within a half-mile of residences or places of regular employment; and
- workers, visitors, or trespassers may have direct contact with contaminants at about 55 percent of the sites.
We asked officials of EPA, other federal agencies, and states to rank the risks of the potentially eligible sites. These officials collectively said that about 17 percent of the potentially eligible sites currently pose high risks to human health and the environment, and another 10 percent of the sites (for a total of 27 percent) reportedly may also pose high risks in the future if they are not cleaned up (see fig. 3). 
For about one-third of the sites, the officials said that it was too soon or they needed more information to determine the seriousness of the sites' risks, or they provided no risk characterization. Officials responding to our surveys said that some cleanup activities (which they stated were not final cleanup actions) have taken place at 686 of the potentially eligible sites. These actions were taken at more than half of the sites that were reported to currently or potentially pose high risks, compared to about a third of the sites that were reported to currently or potentially pose average or low risks. At the other 1,103 potentially eligible sites, either no cleanup activities beyond initial site assessments or investigations have been conducted, or no information is available on any such actions. Many of the potentially eligible sites have been in state and EPA inventories of hazardous sites for extended periods. Seventy-three percent have been in EPA's inventory for more than a decade. No cleanup progress was reported at the majority of the sites that have been known for 10 years or more. It is uncertain whether most potentially eligible sites will be cleaned up; when cleanup actions, if any, are likely to begin; who will do the cleanup; under what programs these activities will occur; and what the extent of responsible parties' participation will be. We did not receive enough information from our survey to determine what cleanup actions will be taken at more than half of the 1,789 potentially eligible sites and whether EPA or the states will take these actions (see fig. 4). We are making no forecast of how many of the 1,789 potentially eligible sites will be added to the NPL in the future. However, EPA and state officials collectively believed that 232 (13 percent) of the potentially eligible sites might be placed on the NPL in the future. 
Officials estimated that almost one-third of the potentially eligible sites are likely to be cleaned up under state programs but usually could not give a date for the start of cleanup activities. State officials stated that, for about two-thirds of the sites likely to be cleaned up under state programs, the extent of responsible parties' participation is uncertain. This is important because officials of about half of the states told us that their state's financial capability to clean up potentially eligible sites, if necessary, is poor or very poor. In addition, officials of about 20 percent of the states said that their enforcement capacity (including resources and legal authority) to compel responsible parties to clean up potentially eligible sites is fair to very poor. Our November report recommends that EPA review its inventory of potential NPL sites to determine which of them need immediate action and which will require long-term cleanup action and, in consultation with the states, develop a timetable for taking these actions. In conclusion, Mr. Chairman, despite the long durations of cleanups in the past, Superfund is within sight of completing the construction of cleanup remedies at most of the sites on the NPL. While recognizing this accomplishment, we believe that the management problems and cost control issues we have reported on for several years remain to be solved. Because few sites have been admitted to the program in recent years, the NPL pipeline is clearing out. On the other hand, there are many sites in EPA’s inventory of potential NPL sites that still need attention and possible cleanup, but EPA and the states have postponed decisions, sometimes for 10 years or longer, on how to address them. Over the last two decades, the states have built up the capacity to deal with site cleanups to varying degrees. Some have substantial programs, but others have limited resources and report that their ability to pay for cleanups is poor. 
Furthermore, not all of the states have adequate enforcement authority to force responsible parties to pay for cleanups. Because states generally now have the lead for screening sites for NPL consideration, future NPL sites may disproportionately represent complex cleanups for which responsible parties cannot be found or are unwilling to ante up the full cost of the cleanup. We have recommended that EPA work with the states to assign responsibility among themselves for these sites. The Superfund reauthorization process gives the Congress an opportunity to help guide EPA and the states in allocating responsibility for addressing these sites. Mr. Chairman, this concludes my prepared statement. I will be happy to respond to your questions or the questions of committee members.
Pursuant to a congressional request, GAO discussed the status and management of the Superfund program and the outlook for the program's future, focusing on: (1) progress made toward cleaning up sites in the program; (2) continuing management problems; and (3) factors affecting Superfund's future workload. GAO noted that: (1) in the past GAO has called attention to the slow pace of cleanups in the Superfund program; (2) however, 17 years after sites were first placed on the Superfund list, many of the sites have progressed a considerable distance through the cleanup process; (3) decisions about how to clean up the great majority of these sites have been made, and the construction of cleanup remedies has been completed at over 40 percent of the sites; (4) the Environmental Protection Agency's (EPA) goal is to complete the construction of remedies at 1,200 sites by 2005; (5) work to clean up groundwater will continue at many sites after remedies are constructed; (6) despite the progress that Superfund has made toward site cleanups, certain management problems persist; (7) these problems include the: (a) difficulty in controlling contract costs; (b) failure to recover certain federal cleanup costs from the parties who are responsible for the contaminated sites; and (c) selection of sites for cleanup without assurance that they are the most dangerous sites to human health and the environment; (8) these problems have caused GAO to include the program on its list of federal programs vulnerable to waste and abuse; (9) furthermore, GAO's analysis indicates that the costs of on-site work by cleanup contractors represent less than half of the spending in the program; (10) there is considerable uncertainty about the future workload of the Superfund program; (11) resolving this uncertainty depends largely on deciding how to divide responsibility for the cleanup of sites between EPA and the states; (12) the number of sites that have entered the Superfund program in recent 
years has decreased as EPA has focused its resources on completing work at existing sites and the states have developed their own programs for cleaning up sites; (13) according to EPA and state officials who responded to the survey, a large number of sites in EPA's inventory of potential Superfund sites are contaminating groundwater and drinking water sources and causing other problems and may need cleanup; (14) GAO recommended that EPA work with the states to assign responsibility for these sites among themselves; and (15) the Superfund reauthorization process gives Congress an opportunity to help guide EPA and the states in allocating responsibility for addressing these sites.
In July 2007, the Under Secretary of Defense for Acquisition, Technology and Logistics (USD (AT&L)) established Configuration Steering Boards (CSBs) for every current and future major defense acquisition program in development as a measure to limit requirements changes and avoid cost increases. The CSBs were to have a broad membership, including senior representatives from the offices of USD (AT&L) and the Joint Staff. CSBs were intended to review all requirements and significant technical configuration changes with the potential to adversely affect the program. The USD (AT&L) directed that these changes should generally be rejected or deferred unless funds and schedule adjustments could be identified to mitigate their effects. In addition, program managers were asked to identify options to reduce program cost or moderate requirements, referred to as “descoping” options, on a roughly annual basis. USD (AT&L) also instructed that, while the policy would be to keep within planned costs as much as possible, even at the expense of scope and content, all expected increases in program costs must be budgeted at the earliest opportunity. USD (AT&L) incorporated CSBs into DOD’s primary acquisition policy—DOD Instruction 5000.02—in December 2008. In October 2008, Congress enacted the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009, which required the establishment of CSBs for the major defense acquisition programs of the military departments. According to the statute, a CSB must meet at least once each year for each of these programs. The statute also provided direction on CSB membership and responsibilities. 
It requires CSBs to
- include the appropriate service acquisition executive as chair and include representatives from USD (AT&L), the Chief of Staff for the armed forces, representatives from other armed forces as appropriate, the Joint Staff, the comptroller of the military department, the military deputy to the service acquisition executive, the program executive officer for the program concerned, and others as appropriate;
- prevent unnecessary changes to programs that could have an adverse impact on program cost or schedule, mitigate adverse cost and schedule effects of changes that may be required, and ensure that each program delivers as much planned capability as possible at or below the planned cost and schedule;
- review and approve or disapprove any proposed changes to program requirements or system configuration with the potential to adversely affect cost and schedule; and
- review and recommend proposals that could reduce requirements and improve cost and schedule.
In addition, the statute provided program managers the authority to
- object to adding new requirements that would be inconsistent with previously established parameters unless approved by the CSB and
- propose opportunities to reduce program requirements to improve cost and schedule consistent with program objectives.
In our March 2010 assessment of selected weapon programs, we reported that only 7 of the 42 programs we assessed held CSB meetings in 2009. As a result, in the Senate report accompanying the bill for the Ike Skelton National Defense Authorization Act for Fiscal Year 2011, the Senate Armed Services Committee directed USD (AT&L) to take appropriate steps to ensure that CSBs meet at least once a year to consider the full range of proposed changes to program requirements or system configuration for each major defense acquisition program. The military departments’ compliance with statutory CSB requirements varied. 
The Air Force and Navy did not fully comply with the requirement to hold annual CSB meetings for all major defense acquisition programs in 2010; the Army did comply. In total, the military departments held an annual CSB meeting for 74 of 96 major defense acquisition programs they managed in 2010. According to our survey results, when the military departments held CSB meetings, 19 programs endorsed requirements or configuration changes. In most of these cases, strategies were developed to mitigate any effect on a program’s cost and schedule—a key provision in the statute and DOD policy. However, key acquisition and requirements personnel were often absent from Air Force and Navy CSB meetings when these issues were discussed. Two major defense acquisition programs—the Ballistic Missile Defense System (BMDS) and the Chemical Demilitarization-Assembled Chemical Weapons Alternatives programs, which are managed by DOD components rather than military departments—are not subject to the CSB provisions in statute, but rather to DOD policy, because the statute only applies to programs overseen by military departments. This policy differs from the statute in that it only requires major defense acquisition programs in development to hold annual CSB reviews and does not require the same members, including the comptroller of the military department. The Air Force and Navy did not hold CSB meetings for all of their major defense acquisition programs in 2010. The Air Force did not hold CSB meetings for 13 of 31 programs, and the Navy did not hold CSB meetings for 9 of 37 programs. The Army held a CSB meeting for each of its 28 major defense acquisition programs. Of the 96 major defense acquisition programs managed by the military departments, 74 held CSB meetings in 2010 and 22 failed to do so. Table 1 shows how many programs had CSB meetings by military department. Of the 22 programs that did not have CSB meetings in 2010, 9 programs had meetings in early 2011. 
In addition, according to the Air Force and Navy, 8 other programs were in the process of being completed or canceled. Table 2 includes explanations from the Air Force and Navy about why CSB meetings were not held for individual programs. For each of the military departments, when a CSB meeting reviewed requirements or configuration changes, most were endorsed and strategies to mitigate the effects on a program’s cost and schedule were developed and discussed. However, most of the programs we surveyed did not present requirements or configuration changes to be approved or rejected at their fiscal year 2010 CSB meetings. Specifically, our survey showed the following results:
- Air Force: 6 CSB meetings reviewed requirements or configuration changes, 5 of these meetings endorsed changes, and 4 discussed the cost and schedule effects and ways to mitigate them.
- Army: 6 CSB meetings reviewed and endorsed requirements or configuration changes, and 4 of these discussed the cost and schedule effects and ways to mitigate them.
- Navy: 10 CSB meetings reviewed requirements or configuration changes; 8 meetings endorsed changes, and 7 of these discussed the cost and schedule effects and ways to mitigate them.
The Navy did not hold CSB reviews for all programs that experienced requirements changes in fiscal year 2010. According to our survey results, three Navy programs changed system requirements or specifications yet did not hold a CSB meeting. Two of these programs, the Advanced Anti-Radiation Guided Missile and the Remote Minehunting System, held other high-level reviews during this period—two program management reviews and a critical Nunn-McCurdy breach review, respectively—and officials reported that a third program, the Expeditionary Fighting Vehicle, did not conduct its CSB meeting because DOD proposed canceling the program. Key acquisition and requirements personnel were absent from many of the CSB meetings held by the Air Force and Navy in 2010. 
The CSB provision in the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 lists seven officials or offices that should be part of a CSB, including the service acquisition executive, who should serve as the chairperson of the CSB; representatives from the acquisition, requirements, and funding communities; and others as appropriate. Army CSB meetings held in 2010 included the full array of board members in all but one case. Although USD (AT&L) was invited to the meeting in this case, Army officials reported that the office did not send a representative. The manner of CSB members’ participation also varied among the military departments. The Army conducted all its CSB meetings in person, whereas both the Air Force and the Navy conducted virtual, otherwise known as paper, CSB meetings for certain programs in 2010 and early 2011. The Air Force held all of its 2010 CSB meetings without key acquisition participants listed in the CSB statute. According to Air Force officials, their CSB meetings may be chaired by either the service acquisition executive or the principal military deputy to provide for flexibility in scheduling meetings. Generally, the principal military deputy acts as chair in the place of the service acquisition executive and does not attend those meetings that the service acquisition executive chairs. According to the attendee lists provided by the Air Force, only 2 of the 18 CSB meetings held were attended and chaired by the service acquisition executive. At one of those meetings, neither the principal military deputy nor a representative of the comptroller was in attendance, although officials reported that both had been invited. The CSB meetings the service acquisition executive did not attend included numerous discussions of changes that could affect programs’ costs and schedules, including requirements and configuration changes or descoping opportunities. 
For example, one meeting discussed changes to the Space Based Infrared System’s architecture that could accelerate the program’s delivery of initial capability by 2 years but would cost an additional $45 million. The Air Force also allows paper CSBs to fulfill the requirement for an annual CSB for programs it believes are stable. A program is eligible to conduct paper CSB meetings if (1) it has a Probability of Program Success score of greater than 80; (2) it has made no requirements and/or significant technical configuration changes since the last CSB that have the potential to affect the cost and schedule of the program; (3) when in production, it is in steady state production but has not reached 90 percent of planned expenditures completed or 90 percent of quantities delivered; and (4) descoping options will not yield any real cost savings. The Air Force did not conduct any paper CSBs in 2010; however, 6 of the 13 Air Force programs that did not hold a CSB meeting in 2010 conducted paper reviews in January 2011. According to Air Force officials, the process for these paper reviews began in December 2010. The Navy held most of its 2010 CSB meetings without key acquisition and requirements personnel. The Navy has incorporated CSB meetings into the Navy’s gate review process and uses the gate 6 review, with the service acquisition executive or his designee acting as chair, to fulfill the requirement for an annual CSB. However, the Navy’s policy on gate reviews does not include the Joint Staff—a key player in the requirements process and a participant required by statute and DOD policy—as a participant, and at least 22 of the 28 CSB meetings held in 2010 lacked a representative of the Joint Staff. As a result of our review, Navy officials reported that they are revising their policy and procedures for CSBs to ensure the Joint Staff is invited to future CSB meetings. 
Navy policy allows the service acquisition executive to delegate the chair to another official within the Navy’s acquisition office, which officials stated provides flexibility in scheduling CSBs. In practice, this resulted in meetings where required members of the CSB did not participate in discussions of requirements, configuration, or descoping. In 2010, the Navy service acquisition executive chaired and attended 12 of the 28 CSB meetings and participated in at least 2 others, both CSBs conducted via paper. According to our review of CSB documentation, six CSB meetings clearly discussed descoping options, and the service acquisition executive did not attend any of the five held in person. The sixth meeting was a paper CSB and it is unclear whether the service acquisition executive participated. When the Navy service acquisition executive or others chair the CSB meeting, the principal military deputy typically does not attend. In addition, at least three CSB meetings in 2010 did not include a representative from USD (AT&L). The Navy also allows paper CSBs to fulfill the requirement for an annual CSB. In four cases, the Navy used paper CSBs to review requirement and configuration changes sometimes requiring millions of dollars or tens of millions of dollars in additional funding. According to Navy officials, Navy policy allows CSB members to reach decisions on issues of requirements and configuration by circulating briefing slides and memoranda rather than holding an actual meeting; however, there are not clear criteria specifying the circumstances under which a program may hold a paper CSB. Multiple Navy program managers stated that they do not understand which programs are eligible or when and how to request a paper CSB. In one case, a program manager stated that although the program was planning for and preferred a CSB meeting in person, Navy officials changed the format to a paper CSB a few days before the scheduled meeting time. 
Two major defense acquisition programs—the Ballistic Missile Defense System (BMDS) and the Chemical Demilitarization-Assembled Chemical Weapons Alternatives programs, which are managed by DOD components rather than military departments—are not subject to the CSB provisions in statute because the statute only applies to major defense acquisition programs overseen by the military departments. However, DOD acquisition policy, which requires CSBs for all major defense acquisition programs in development, applies to these programs. The Missile Defense Agency (MDA), which is responsible for the management of BMDS, did not hold a CSB for the system in 2010; however, it did conduct reviews that discussed many of the same issues and included some of the same participants as those required for CSBs. The Program Change Board manages the development, fielding, and integration of BMDS through separate program elements and ensures the integrity of the system as a whole. This board, which is the primary forum for discussing and mitigating changes to program elements’ requirements and configuration, met 42 times in 2010. The Program Change Board is chaired by the equivalent of a service acquisition executive—the director of MDA—and, according to an MDA official, includes the equivalent of the comptroller, the program executive officer, and the program manager. MDA policy also requires USD (AT&L) to be invited to Program Change Boards, and allows for the military services’ participation when deemed appropriate, but does not include the Joint Staff. The Missile Defense Executive Board oversees implementation of strategic plans and reviews the priorities and budget for BMDS as a whole. The Missile Defense Executive Board includes the Joint Staff as well as the MDA director and an array of Office of the Secretary of Defense (OSD) and military service representatives, but according to DOD it does not generally discuss requirements and configuration at the element level. 
The executive board met seven times in 2010. The Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs, who is responsible for the management of the Chemical Demilitarization-Assembled Chemical Weapons Alternatives program, also did not hold a CSB in 2010. However, a similar board—the Chemical Demilitarization Program Strategic Governance Board—met three times in 2010 to discuss program progress, including how it is performing against its requirements, and funding issues, including those related to significant cost and schedule growth. In 2010, the Assistant Secretary acted as the chair for this board, which also includes representatives from the OSD comptroller, the Joint Staff, and the Army.

The CSB requirements in DOD’s primary acquisition instruction are not fully consistent with the provisions in statute. Most significantly, the instruction only requires CSB meetings for major defense acquisition programs in development, rather than major defense acquisition programs in development and production. Additionally, the instruction does not include the comptroller as a CSB member. According to USD (AT&L) officials, the CSB provisions in statute may not have been fully incorporated into USD (AT&L)’s December 2008 revision of DOD’s acquisition instruction because the statute was enacted in October 2008 and there was not enough time to reconcile them. USD (AT&L) is in the process of updating the instruction and is considering changes to the CSB requirements. USD (AT&L), according to officials, has also not consistently tracked whether programs are fulfilling the current requirements in DOD policy because the statute makes CSBs a military department responsibility.

Individual programs varied in the extent to which they utilized CSBs to control requirements and mitigate cost and schedule risks. 
According to our survey results, the majority of CSB meetings neither reviewed requirement changes nor discussed options to reduce requirements or the scope of programs. We found a number of instances in which CSB meetings were effective in mitigating the effect of necessary changes, rejecting other changes, facilitating discussion of requirements, and endorsing descoping options with the potential to improve or preserve cost or schedule. Program managers, however, may be reluctant to recommend descoping options because of cultural biases about the role of a program manager, a preference not to elevate decisions to higher levels of review, and concerns that future funding will be cut. In an effort to increase descoping proposals, the Army and Air Force have issued additional descoping guidance and set savings or budget targets. The perceived effectiveness of the CSB meetings also varied based on the acquisition phase of a program and which CSB members participated. To further increase effectiveness and efficiency of CSBs, some of the military departments have taken steps to coordinate CSB meetings among programs that provide similar capabilities and align CSB meetings with other significant reviews. We identified individual examples from each military department in which CSB meetings were used to prevent or reject requirements or configuration changes, mitigate the cost and schedule effects of endorsed changes, facilitate the prioritization of requirements, and provide program managers with opportunities to reduce requirements or suggest other programmatic changes to lower costs and field systems faster. However, most of the program officials who held CSB meetings and responded to our survey reported that CSB meetings were not useful for preventing changes to requirements or configuration, mitigating the potential effects on cost and schedule when changes were endorsed, or recommending ways to improve a program’s cost and schedule by moderating requirements. 
In interviews with program officials, some explained that they did not utilize the CSB meetings to control requirements because they addressed requirement issues as they arose within the program rather than waiting for their program’s scheduled CSB meeting to occur. Others stated that their program was stable and that there were no requirement changes or descoping options to discuss. According to our survey results, reviews of CSB documentation, and interviews:

• 26 percent of the programs in our survey with CSB meetings reported that these meetings were useful forums to prevent changes to requirements. Moreover, 35 percent reported that the meetings were useful to make necessary changes to requirements. In interviews, several program officials stated that the mere suggestion of convening a CSB meeting to discuss a new requirement was enough to deter changes.

• 25 percent of the programs in our survey with CSB meetings reported that these meetings were useful forums to prevent changes to technical configuration. Conversely, 23 percent reported that the meetings were useful to make necessary changes to technical configurations. Our review of minutes and presentations also shows at least one CSB meeting that rejected a change that had the potential to adversely affect program cost; the August 2010 CSB review for the LPD 17 amphibious ship program rejected a proposed configuration change that would have added new equipment to the ship at an estimated cost of $26 million.

• Some CSB meetings also included discussions of how to prioritize requirements. For example, according to officials, the Air Force used a June 2010 CSB meeting for the Global Hawk—an unmanned surveillance aircraft—to prioritize joint urgent operational needs. According to program officials, the Global Hawk program has received numerous requests to add new capabilities to the platform due to its use in current operations. The program manager stated that the CSB meeting provided the opportunity to present the costs and benefits of those requests to decision makers and receive guidance from them on which ones to pursue or defer.

• 28 percent of the programs in our survey with CSB meetings reported that these meetings were useful forums to mitigate the potential cost and schedule effects of changes brought to the CSB for consideration. Moreover, 18 percent of programs reported CSB meetings were useful forums to mitigate the potential cost and schedule effects of changes made as a result of the CSB. The Vertical Take Off and Landing Tactical Unmanned Aerial Vehicle program used a CSB meeting to discuss ways to restructure the program in response to cost growth. At the meeting, the members of the CSB encouraged the program manager to go beyond his proposals and investigate changes to program quantities, contract strategy, and operational plans when restructuring the program, in order to reduce cost.

• CSB meetings seem to have been effective in mitigating the cost and schedule effects of changes or in endorsing only changes that would not affect costs and schedules. Of the 19 programs in our survey in which a CSB meeting endorsed changes to requirements or technical configuration, 1 reported an increase in program cost and 2 reported a delay in the delivery of an initial operational capability.

• 30 percent of programs in our survey with CSB meetings reported that these meetings were useful forums to offer options to lower costs and field systems faster. Survey results show that descoping options were presented for 19 programs and those options were endorsed for 8 of them. For example, at the December 2009 CSB meeting for the Air Force’s Joint Air-to-Surface Standoff Missile, the program office recommended adopting the extended range version’s lower reliability requirement for the baseline missile. The program office stated the existing baseline requirement, which was 5 percent higher, had the potential to become a cost driver in testing for the program. The CSB endorsed the program office’s recommendation.

• Program officials also reported that the exercise of formulating descoping options, regardless of whether or not they were endorsed, helped their office identify and develop mitigation strategies in the event costs increased.

Table 3 provides examples of programs across the military departments that used CSB meetings to endorse requirement, configuration, or other programmatic changes to improve or preserve cost or schedule.

Program managers may be reluctant to recommend descoping options to moderate requirements during a CSB meeting because of cultural biases about the role of a program manager, a preference not to elevate decisions to higher levels of review, and concerns that future funding will be cut. According to several acquisition officials, there is a cultural bias throughout DOD that the role of the program manager is to meet the requirements handed to them, not to seek to reduce them to achieve cost savings. In this context, if a program manager recommends reducing requirements, it may suggest the person is not managing the program or serving the warfighter well. Still others preferred to reduce requirements that were within their span of control through their program’s internal change-management process rather than waiting for a CSB meeting to ask permission. For example, the DDG-51 program office proposed changes to the ships’ configuration to reduce cost by removing or relocating equipment and the CH-53K program avoided cost by relaxing a requirement for self-sealing fuel tanks. Our interviews with program officials also suggest that there may be a reluctance to present descoping options at a CSB meeting because it could be interpreted as an opportunity to reduce the program’s budget. 
The Army and Air Force have both taken steps to encourage or require program managers to seek options to lower costs by reducing scope. Acquisition officials noted that the presentation of descoping options and the focus on reducing costs have increased in importance since CSBs were first established, as the budget environment has become more constrained. In a November 2010 memorandum, the Army emphasized the need for program officials to aggressively seek descoping opportunities with the goal of reducing per-unit or total program costs by 5 percent. Army officials stated that the memorandum was signed by senior leaders from the requirements, acquisition, and budgeting communities specifically to address the bias that reducing requirements is unacceptable. According to officials, the Air Force amended its guidance for CSB meetings to require programs to present three to four descoping options along with the effect of those options on performance and program execution, the dollar amount already invested, and the estimated savings likely to result. Program managers are instructed to treat the descoping options as a budgeting exercise and to present the decisions that would need to be made if the program’s current budget were reduced by 10, 20, and 30 percent. Several program offices told us that forcing programs to present options to reduce requirements or scope led them to spend time preparing options that were not viable or that they would have to recommend against implementing.

The types of discussions for which CSBs were useful changed based on whether programs were in development or production. According to our survey results, programs in development found CSB meetings to be more useful than programs in production for making necessary changes to requirements or technical configuration, mitigating the potential cost and schedule effects of changes, and recommending proposals to improve program costs and schedule. 
Table 4 presents our survey results of program officials’ opinions on the usefulness of CSB meetings. Programs in development also proposed changes to requirements or configuration, presented options for reducing scope, and had those options endorsed at a higher rate than those in production. Even so, an official for one program in development stated that its CSB meeting was not effective because the program was meeting cost and schedule targets and its requirements were narrowly defined, which decreased opportunities for reducing scope. According to our survey results, a higher percentage of programs in production reported that CSBs were useful in preventing changes compared to programs in development. We have previously reported that stabilizing a program’s requirements and design well before production is important because changes have increasingly negative effects on cost and schedule the further a program progresses. Program officials were wary about using CSB meetings to try to reduce costs for programs in production either through requirements changes or reductions in scope because the configuration should be locked, the available trade space is probably limited, and potential changes could be disruptive. For instance, the E-2D program reported in its April 2010 CSB meeting that its configuration was extremely stable and, with development and demonstration almost complete, reducing the scope of the program could prove detrimental because it could lead to redesigns or decreases in capability. Changes at this stage of a program can still have a positive effect on cost if they do not require extensive design changes. For example, the program manager for the Family of Medium Tactical Vehicles—which is well into production with over 40,000 vehicles fielded—recommended removing the self-recovery winch from some vehicles, resulting in savings of $9,535 per vehicle. 
CSBs provide a unique opportunity for program managers to address programmatic issues in front of a broad group of high-level decision makers that includes the acquisition, requirements, and funding communities. In some cases, the makeup of the CSB helped to accelerate the resolution of issues and facilitate decision making. For example, the Grey Eagle program utilized its CSB meeting to endorse an increase in the number of active units from 13 to 17. The program office reported that this decision, which otherwise may have taken years to approve and fund, was made and implemented quickly by the CSB because of the senior leadership present. Other program offices stated that the broad membership on CSBs, which includes key stakeholders and other interested parties, helps to create institutional buy-in for programmatic changes. CSB meetings also raised stakeholders’ awareness of cost increases. Specifically, CSB meetings provided the Joint Staff with its first knowledge of cost growth on at least four programs and triggered separate reviews by the Joint Requirements Oversight Council. When critical stakeholders are absent, the decision-making ability of the CSB may be limited. In particular, some programs with users from across the military services and organizations external to DOD reported that the utility of CSBs was limited when those users were not represented. For example, the primary users of the Air Force’s Global Positioning System IIIA program include the Army, Navy, and other organizations external to DOD. The September 2010 CSB meeting for the system did not include these stakeholders, and program officials stated that as a result, the CSB was not empowered to make significant changes to the program. The decisions made at CSB meetings can affect complementary programs, as well as the funding required for programs. 
As a result, acquisition and program officials told us there is value in aligning CSB meetings so they are held together with reviews of similar programs or sequencing them to occur before key funding decisions are made. For example, in 2010, the Army grouped programs into capability portfolios, such as aviation or precision fires capabilities, and held one CSB meeting to discuss requirement changes and descoping options for all the programs. These CSB meetings generally occurred after the Army’s capability portfolio reviews—which revalidate, modify, or terminate requirements and ensure the proper allocation of funds across programs—and reviewed, endorsed, and implemented the recommendations coming from them. Holding CSB meetings for capability portfolios can facilitate discussions about interoperability and interdependency and promote an examination of requirements and capabilities across programs, including potential redundancies. Officials also stated that if two well-executed, high-performing programs within the same portfolio were reviewed independently, those discussions might not take place. For example, the Army’s Excalibur—a precision-guided munition—and Guided Multiple Launch Rocket System were both relatively stable programs in production. However, according to officials, during a capability portfolio review, the Army identified an overlap in the two programs’ capabilities and missions and recommended reducing the number of Excalibur munitions to be procured. At the subsequent April 2010 CSB meeting, the Army reviewed and implemented the proposal, which reduced the cost of the Excalibur program by $893.5 million. According to acquisition officials, grouping programs in this manner can also ease the difficulty of scheduling a large number of meetings that require senior leadership participation. 
According to program officials, when CSB meetings were aligned with budget deliberations, it enabled an informed discussion of funding issues and rapid changes to program budgets. USD (AT&L)’s 2007 memorandum establishing CSBs stressed the importance of making necessary budget adjustments, especially those involving expected increases in program costs, at the earliest opportunity. In one example, the Army’s November 2009 CSB for the Patriot and Medium Extended Air Defense System programs corresponded with the service’s fiscal year 2011 budget formulation process. Program officials stated that this helped facilitate the transfer of funds and efforts between the two programs, which had been endorsed by senior leaders from the acquisition and funding communities at the CSB. However, it may be challenging to align CSB meetings with the budget formulation process in all cases, as CSB meetings sometimes must be event driven while the budget process is calendar driven.

With the prospect of slowly growing or flat defense budgets for years to come, DOD must get better returns on its weapon system investments than it has in the past. CSBs, which are intended to ensure that a program delivers as much planned capability as possible at or below the expected cost, can be a key tool in furthering this goal. They represent a unique forum that brings together a broad range of high-level decision makers from the acquisition, requirements, and funding communities, who can make and implement decisions quickly. DOD’s experience with CSBs to date has already demonstrated their potential value—costly new requirements have been rejected, and options to moderate requirements and reduce program costs by millions of dollars have been endorsed. However, the efficiency and effectiveness of CSBs can still be improved. 
Ensuring key CSB members from the acquisition and requirements community are present at meetings could help build consensus more quickly and make decisions more efficiently. Similarly, while the law is silent on whether paper CSB meetings may be used to meet the annual requirement, holding in-person meetings may be more effective because a paper meeting may not provide the opportunity for in-depth discussion or proper oversight. Holding CSBs in conjunction with capability portfolio reviews and other similar meetings has the potential to expand opportunities to review and rationalize requirements across programs. Improving the connection between CSBs and the budget process and other reviews can help further efforts to match weapon system requirements with funding resources. Reviewing programs at CSBs on a case-by-case basis well into production would help decision makers identify cost savings and shift funding as warfighter needs and funding priorities change. Taken together, these steps have the potential to improve not only the efficiency and effectiveness of CSBs but also the affordability and execution of DOD’s major defense acquisition programs. 
We recommend that the Secretary of Defense take the following seven actions directing:

• the Navy to amend its policy on CSBs to ensure that all statutorily required participants, particularly the Joint Staff, are included;

• the MDA to amend its policy to ensure that all statutorily required participants for military department CSBs are included in MDA’s Program Change Board, particularly the Joint Staff, if it is to serve as an equivalent review;

• USD (AT&L) to amend its acquisition instruction to:
  – ensure that all statutorily required participants, in particular the comptroller, are included on CSBs;
  – require CSB meetings for major defense acquisition programs in production as well as development but also coordinate with the military departments and the Congress to evaluate the effectiveness of CSB meetings for programs well into production; and
  – develop the means to better track CSBs and ensure compliance with the requirement that CSBs hold a meeting at least once each year;

• USD (AT&L) to work with DOD components to determine whether paper CSBs are as effective as in-person meetings and, if not, amend the acquisition instruction accordingly; and

• DOD components to amend their policies to encourage alignment between CSB meetings and other complementary reviews whenever possible.

DOD provided us with written comments on a draft of this report. In its comments, DOD concurred or partially concurred with all seven of our recommendations and agreed to take action to address six of them. The comments are reprinted in appendix II. DOD also provided technical comments, which we addressed in the report, as appropriate. In concurring with our recommendation that the Navy amend its policy on CSBs to include all statutorily required participants, DOD stated that the Navy has already issued two policy memorandums that do so. DOD also stated that the Navy will continue to issue policy guidance consistent with our recommendation. 
This will be particularly important as the Navy is currently in the process of revising its primary acquisition instruction. DOD also concurred with our recommendations to amend its acquisition instruction to ensure that all statutorily required participants are included in CSBs and that meetings occur for programs in development as well as those in production. DOD did not address the portion of our recommendation to coordinate with the military departments and the Congress to evaluate the effectiveness of CSB meetings for programs well into production. Given our mixed findings on the utility of CSB meetings late in production, we continue to believe it would be in the interest of the department to study this issue.

DOD partially concurred with our recommendation that MDA amend its policy to ensure that all statutorily required participants for military department CSBs, in particular the Joint Staff, are included in MDA’s Program Change Board, if it is to serve as an equivalent review. In its comments, DOD stated that Joint Staff participation would provide little value because of the role of the Joint Staff in the acquisition of BMDS. In addition, DOD pointed out that the Joint Staff participates in the Missile Defense Executive Board, a forum in which strategic direction and funding priorities are established. However, we continue to believe that if the Program Change Board is to act as the forum for discussing configuration and requirements changes, it is important that the user communities, as represented by the Joint Staff, participate in these discussions.

DOD partially concurred with our recommendations on improving the tracking of CSB meetings, determining the effectiveness of paper CSBs, and aligning complementary reviews with CSB meetings, when possible. In its comments, DOD stated that it would address these issues in “best practices” guidance to the military departments. 
With regard to developing the means to better track CSB meetings and compliance with the requirement to hold a meeting at least once each year, DOD stated the best practices guidance will direct the military departments to ensure adequate tracking vehicles are in place. We continue to believe that USD (AT&L) should play a role in tracking compliance and holding the military departments accountable, given our findings that the military departments did not hold CSBs for all the required programs.

We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; USD (AT&L); and the Director of the Office of Management and Budget. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in appendix III.

This report presents information on the Department of Defense’s (DOD) use of Configuration Steering Boards (CSB) for the major defense acquisition program portfolio in 2010. We used the Defense Acquisition Management Information Retrieval system to identify 98 active major defense acquisition programs. We defined an active program as one that issued a selected acquisition report in December 2009. This report presents information on all of these programs. One program, the Ballistic Missile Defense System, is managed by the Missile Defense Agency (MDA), which reports acquisition information on the system by functional elements. We reviewed nine elements and analyzed them separately from the rest of the major programs. 
We categorized programs by the five acquisition organizations designated as having oversight—Army, Navy, Air Force, MDA, and the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense programs—to assess trends in the use of CSBs. The selected acquisition report for each program designates the program’s acquisition organization. As the lead authority for joint programs rotates among the acquisition organizations as determined by the Office of the Secretary of Defense, we categorized all joint programs according to the service that was designated as the lead authority in the December 2009 selected acquisition report.

All of the programs in our audit fall into one of two phases: engineering and manufacturing development (referred to as development) or production and sustainment (referred to as production). Development generally begins with the initiation of an acquisition program as well as the start of engineering and manufacturing development and generally ends with entry into production. Production generally begins with the decision to enter low-rate initial production. For most programs in our assessment, placement in one of these two phases was determined by the dates of their Milestone B/II and Milestone C/III decisions. For instance, we categorized programs that have held a Milestone B/II decision but not a Milestone C/III as in the development phase and those that have held a Milestone C/III decision as in the production phase. The dates of milestone decisions for the programs used in the audit were determined through use of the Defense Acquisition Management Information Retrieval system. Due to the nature of individual programs, select programs were not classified by milestone decision because they either have multiple increments that may begin production in advance of the notional Milestone C/III date, or the programs do not report milestone dates. 
In these cases, we used the program’s selected acquisition reports to determine the appropriate phase. The Navy often authorizes shipbuilding programs to begin production of the lead ship at Milestone B/II. We classified these programs as in the production phase. As the MDA programs develop systems’ capabilities incrementally instead of following the standard DOD acquisition model, we did not identify acquisition phases for Ballistic Missile Defense System elements. To assess the extent that DOD has complied with the statutory requirements for CSB meetings in 2010, we compared CSB execution to provisions in the statute that call for annual CSB meetings and discussion of specific content. To determine the extent to which DOD complied with the requirement to hold an annual CSB for each program, we analyzed CSB records provided by the acquisition organization we reviewed and, using these records, calculated the number of CSBs held for each program in calendar-year 2010. To determine whether the components established boards that included the statutorily required participants, we analyzed policy and procedure documentation from each of the components as well as attendance lists of CSBs held in calendar-year 2010, provided by the acquisition organizations we reviewed. To identify issues discussed at CSBs and actions resulting from these CSBs, we reviewed CSB documents and questionnaire data and interviewed acquisition officials. We also reviewed and analyzed current and draft documentation related to department and service-level CSB policies, directives, guidance, and instructions to determine if they establish a structure that would facilitate compliance with the statute; examples of these documents include Department of Defense Instruction 5000.02, Department of the Army Pamphlet 70-3 regarding Army Acquisition Procedures, SECNAV Instruction 5000.2D, Air Force Instruction 63-101, and Missile Defense Agency Directive 5010.18 regarding Acquisition Management. 
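The milestone-based phase classification described above amounts to a simple decision rule. As an illustration only—the function and argument names are ours, not part of the Defense Acquisition Management Information Retrieval system or GAO methodology—it could be sketched as:

```python
def acquisition_phase(held_milestone_b, held_milestone_c):
    """Illustrative sketch of the classification rule used in this review.

    A program that has held a Milestone C/III decision is counted as in
    production; one that has held Milestone B/II but not C/III is counted
    as in development. Programs fitting neither case (e.g., multi-increment
    programs or those that do not report milestone dates) were instead
    classified from their selected acquisition reports.
    """
    if held_milestone_c:
        return "production"
    if held_milestone_b:
        return "development"
    return "classified from selected acquisition reports"
```

Note that this sketch does not capture the shipbuilding exception described above, in which lead-ship production authorized at Milestone B/II placed a program in the production phase.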
We also interviewed officials representing organizations that participate in CSBs or their equivalents including the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Joint Staff, military service and MDA offices, program offices, and capabilities and requirements offices to address department, military service, and MDA policies and execution. To assess how effective CSBs have been in controlling requirements and mitigating cost and schedule risks on programs, we analyzed CSB documentation to identify actions proposed and actions taken as a result of the CSB and their effect on cost, schedule, performance, and system configuration. We also asked program officials in our questionnaire to identify requirement changes or descoping options discussed at the CSB, the impact of decisions made, perceived effectiveness of the CSB, and explanations for not conducting a CSB, if applicable. To further analyze the effectiveness, challenges, and benefits of holding CSBs, we selected 17 programs for interviews. We based our selection on answers to our questionnaire, discussions with officials, and programmatic factors such as acquisition organization and phase. Specifically, we met with program officials at Wright Patterson Air Force Base, Ohio; Redstone Arsenal, Alabama; the Washington Navy Yard in Washington, D.C.; and the Naval Air Station Patuxent River in Patuxent River, Maryland; and conducted video teleconferences with program officials at Picatinny Arsenal in New Jersey and at Los Angeles Air Force Base in El Segundo, California. We also interviewed acquisition officials, reviewed selected acquisition reports, and examined documentation related to service-level CSB policies, directives, guidance, and instructions to determine whether other reviews or acquisition processes influenced the effectiveness of CSBs. 
To collect information about DOD’s use of CSBs in fiscal year 2010, we developed and administered a Web-based questionnaire to the program offices of all 98 programs. Fiscal-year data were collected in our survey to be consistent with the Senate report language that contained our mandate. We administered separate questionnaires to nine Ballistic Missile Defense System elements and analyzed the results separately from the rest of the programs in our review. We fielded the survey from October 2010 to December 2010, and after extensive follow-up, we received responses from all 98 programs. Because our questionnaire went to all 98 program offices rather than to a sample, its results are not subject to sampling error. However, the practical difficulties of conducting any questionnaire may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question or limitations in the sources of information available to respondents can introduce unwanted variability into the questionnaire results. We took steps in developing the questionnaire, collecting the data, and analyzing the responses to minimize such nonsampling errors. For example, social science survey specialists designed the questionnaire in collaboration with GAO’s subject-matter experts. We conducted pretests with program managers to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the questionnaire was comprehensive and unbiased. For the pretests, we selected programs from each military department and from various phases of the acquisition life cycle. We conducted four pretests. We made changes to the content and format of the questionnaire after each pretest, based on the feedback received. When we analyzed the data, an independent analyst checked all computer programs to reduce risk of error. 
Since this was a Web-based questionnaire, respondents entered their answers directly into the electronic questionnaire, eliminating the need to key data into a database and minimizing data entry errors. We did not validate the data provided by the program offices, but reviewed the data and performed various checks to determine that the data were reliable enough for our purposes. Where we discovered discrepancies from reviewing responses and interviewing program offices, we clarified the data with the program office and made changes to the questionnaire data accordingly. In addition to the contact named above, Ronald E. Schwenn, Assistant Director; Noah B. Bleicher; MacKenzie Cooper; Morgan Delaney Ramaker; J. Kristopher Keener; Jean McSween; Kenneth E. Patton; and Brian Schwartz made key contributions to this report.
GAO has previously reported that requirements changes are factors in poor cost and schedule outcomes on Department of Defense (DOD) weapon programs. In 2007, DOD introduced Configuration Steering Boards (CSBs) to review requirement and configuration changes that could adversely affect programs. In 2008, Congress made annual CSB meetings a requirement for all of the military departments' major defense acquisition programs. In response to the Senate report accompanying the bill for the Ike Skelton National Defense Authorization Act for Fiscal Year 2011, GAO assessed (1) the extent to which DOD has complied with the statutory requirements for CSBs, and (2) the extent to which CSBs have been effective in controlling requirements and mitigating cost and schedule risks. To conduct this work, GAO surveyed DOD's major defense acquisition programs, reviewed CSB documentation, and interviewed relevant military service and program officials. The military departments varied in their compliance with the CSB requirements in statute. The Air Force and Navy did not fully comply with the requirement to hold annual CSB meetings for all major defense acquisition programs in 2010, while the Army did. In total, the military departments held an annual CSB meeting for 74 of 96 major defense acquisition programs they managed in 2010. According to GAO's survey results, when the military departments held CSB meetings, 19 programs endorsed requirements or configuration changes. In most of these cases, strategies were developed to mitigate the effects of these changes--a key provision in the statute and DOD policy. However, key acquisition and requirements personnel were often absent from Air Force and Navy CSB meetings when these issues were discussed. 
Two major defense acquisition programs--the Ballistic Missile Defense System and the Chemical Demilitarization-Assembled Chemical Weapons Alternatives programs--are not subject to the CSB provisions in statute because the statute only applies to programs overseen by military departments; the programs are managed by other DOD components. These programs are subject to DOD's CSB policy, which differs from the statute in that it only requires major defense acquisition programs that are in development to hold annual CSB reviews. Individual programs varied in the extent to which they utilized CSBs to control requirements and mitigate cost and schedule risks. According to GAO's survey results, the majority of CSB meetings neither reviewed requirement changes nor discussed options to moderate requirements or reduce the scope of programs. There were a number of specific instances where CSB meetings were effective in mitigating the effect of necessary changes, rejecting other changes, facilitating discussion of requirements, and endorsing "descoping" options with the potential to improve or preserve cost or schedule. However, in survey responses, program officials raised doubts about the effectiveness of CSBs, and in interviews, acquisition officials indicated that program managers may be reluctant to recommend descoping options due to cultural biases that encourage meeting warfighters' stated needs rather than achieving cost savings, a preference not to elevate decisions to higher levels of review, and concerns that future funding may be cut if potential savings are identified. In response, the Army and Air Force have issued additional descoping guidance and set savings or budget targets. The types of discussions for which CSBs were useful changed based on whether programs were in development or production. 
Development programs found CSBs more useful for considering requirements changes and descoping options, and production programs found them more useful for preventing changes. In an effort to further increase effectiveness and efficiency of CSBs, some of the military departments have taken steps to coordinate CSB meetings among programs that provide similar capabilities and align CSB meetings with other significant reviews. GAO recommends, among other things, that DOD components amend their CSB policies to be consistent with the statute and align CSBs with other reviews when possible. In comments on a draft of this report, DOD concurred or partially concurred with all seven of GAO's recommendations and agreed to take action to address six of them.
Currently, the BioWatch program collaborates with more than 30 BioWatch jurisdictions throughout the nation to operate approximately 600 Gen-2 aerosol collectors. These units rely on a vacuum-based collection system that draws air through a filter. These filters are manually collected and transported to state and local public health laboratories for analysis using a process called polymerase chain reaction (PCR). Sometimes also called molecular photocopying, PCR is a technique used to amplify (or copy) segments of deoxyribonucleic acid (DNA), the building blocks of genetic material. By targeting specific segments of genetic material, PCR can be used as the basis for a test, or assay, for the presence of genetic signatures associated with specific biological organisms, such as the five BioWatch threat agents. (The program monitors for six distinct biothreat agents, but two of these are closely related, although they cause different diseases, and the BioWatch program has treated them as a single agent. For consistency, we will treat them as a single agent and report that there are five BioWatch threat agents in total.) In the BioWatch Gen-2 system, multiple PCR assays are used for each threat agent. In an initial “screening” step, one assay is run for each threat agent. If any of these assays yields a positive result, suggesting the presence of one of the threat agents, then the analysis proceeds to a “verification” step in which multiple additional assays are run targeting different genetic signatures for that agent. If the verification step also yields a positive result, then a BioWatch Actionable Result (BAR) is declared. Using this manual process, the determination of a BAR can occur from 12 to 36 hours after an agent is initially captured by the aerosol collection unit. 
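The two-step assay logic described above can be sketched in code. This is an illustrative simplification only: the agent names, the number of verification assays, and the rule that every verification assay must be positive are assumptions for illustration, not the program's documented protocol.

```python
# Hypothetical sketch of the Gen-2 screening/verification decision logic.
# Agent names, assay counts, and the "all verification assays positive"
# rule are assumptions, not BioWatch's actual protocol.

THREAT_AGENTS = ["agent_A", "agent_B", "agent_C", "agent_D", "agent_E"]

def screen(filter_sample, run_assay):
    """Initial screening step: one PCR assay per threat agent.
    `run_assay(sample, agent, signature)` returns True on a positive
    result; signature 0 is treated here as the screening signature."""
    return [a for a in THREAT_AGENTS if run_assay(filter_sample, a, 0)]

def verify(filter_sample, agent, run_assay, n_signatures=3):
    """Verification step: additional assays targeting different genetic
    signatures of the flagged agent. Here a positive verification is
    assumed to require all assays to be positive."""
    return all(run_assay(filter_sample, agent, s)
               for s in range(1, n_signatures + 1))

def analyze(filter_sample, run_assay):
    """Return the agents for which a BioWatch Actionable Result (BAR)
    would be declared: screened positive, then verified positive."""
    return {a for a in screen(filter_sample, run_assay)
            if verify(filter_sample, a, run_assay)}

# Toy example: only agent_B's signatures are present on the filter.
positives = {("agent_B", 0), ("agent_B", 1), ("agent_B", 2), ("agent_B", 3)}
result = analyze("filter-001", lambda s, a, sig: (a, sig) in positives)
```

In the actual Gen-2 process these steps are carried out manually in public health laboratories, which is why the determination of a BAR can take up to 36 hours from initial capture.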
This 36-hour timeline consists of up to 24 hours for air sampling, up to 4 hours for retrieving the sample from an aerosol collection unit and transporting it to the laboratory, and up to 8 hours for laboratory testing. Each BioWatch jurisdiction has either a BioWatch Advisory Committee or equivalent decision-making group in place, composed of public health officials, first responders, and other relevant stakeholders. The BioWatch Advisory Committee is responsible for the day-to-day BioWatch operations, including routine filter collection and laboratory analysis of filter samples. In the event of a BAR, the BioWatch Advisory Committee, in partnership with OHA and other stakeholders, is also responsible for determining whether that BAR poses a public health risk and deciding how to respond. The declaration of a BAR does not necessarily signal that a biological attack has occurred. BARs have been triggered by biological agents that occur naturally in numerous areas of the United States. From 2003 through 2014, 149 BARs were declared, but none was linked to an attack or to a public health threat. For a more detailed discussion of this issue, see appendix II. Figure 1 shows the process that local BioWatch jurisdictions are to follow when deciding how to respond to a BAR. In cooperation with other federal agencies, DHS created the BioWatch program in 2003. The goal of BioWatch is to provide early warning, detection, or recognition of a biological attack. When DHS was established in 2002, a perceived urgency to deploy useful—even if immature—technologies in the face of potentially catastrophic consequences catalyzed the rapid deployment of many technologies. In the initial deployment of BioWatch—known as Generation-1—DHS deployed aerosol collectors to 20 major metropolitan areas, known as BioWatch jurisdictions, to monitor primarily outdoor spaces. 
DHS completed the initial deployment quickly—within 80 days of the President’s announcement of the BioWatch program in his 2003 State of the Union Address. To accomplish this rapid deployment, DHS adapted an existing technology that was already used for other air monitoring missions. In 2005, DHS expanded BioWatch to an additional 10 jurisdictions, for a total of more than 30. The expanded deployment— referred to as Gen-2—also included the addition of indoor monitoring capabilities in three high-threat jurisdictions and provided additional capacity for events of national significance, such as major sporting events and political conventions. The technology used in Gen-1 and Gen-2 was deployed rapidly and, according to the National Academies in 2011, without sufficient testing, validation, and evaluation of its technical capabilities. To reduce the time required to detect biothreat agents, DHS began to develop autonomous detection capability in 2003 for the BioWatch program—known as Gen-3. Envisioned as a laboratory-in-a-box, the autonomous detection system would automatically collect air samples, conduct analysis to detect the presence of biothreat agents every 4 to 6 hours, and communicate the results to public health officials via an electronic network without manual intervention. By automating the analysis, DHS anticipated that detection time could be reduced to 6 hours or less, making the technology more appropriate for monitoring indoor high-occupancy facilities such as transportation nodes and enabling a more rapid response to an attack. DHS also anticipated a reduction in operational costs by eliminating the program’s daily manual sample retrieval and laboratory analysis. In 2008, DHS OHA initiated a competitive bid process for the first testing phase of the Gen-3 acquisition, known as Gen-3 Phase I. 
Five vendors responded to the request for proposal, and DHS awarded contracts to two, for technologies known as the Bioagent Autonomous Network Detector (BAND, later named M-BAND) and the Next Gen Automated Detection System (NG-ADS). From May 2010 through June 2011, the BioWatch program conducted Phase I testing on these candidate Gen-3 technologies. The testing goals included characterizing the state of available autonomous detection technology on the market and evaluating the candidate systems’ abilities to meet performance requirements developed by the BioWatch program. The Phase I testing consisted of testing of individual system components, such as the aerosol sampling component (the component that collects particles from the air) and the analytical subsystem (the component that detects and identifies biothreat agents), whole system chamber testing, and an operational field test in a BioWatch jurisdiction. Characterization testing did not demonstrate the system’s end-to-end ability to detect the five BioWatch threat agents in an operational environment because these agents cannot be released into the air in such environments. Expressing concern in 2011 about the rigor of DHS’s effort to help guide its Gen-3 decision making, Members of Congress asked us to examine issues related to the Gen-3 acquisition. We released a report that evaluated the acquisition decision-making process for Gen-3 in September 2012. We recommended that before continuing the Gen-3 acquisition, DHS should carry out key acquisition steps, including reevaluating the mission need and systematically analyzing alternatives based on cost-benefit and risk information. DHS subsequently commissioned an analysis of alternatives, which was interpreted by DHS as showing that any advantages of an autonomous system over the current manual system were insufficient to justify the cost of a full technology switch. 
DHS’s April 24, 2014, ADM announced the cancellation of the Gen-3 acquisition and made Gen-2 the official program of record for aerosol biological threat detection. The ADM also directed S&T to explore development and maturation of an effective and affordable automated aerosol biodetection capability, or other operational enhancements, that meets the operational requirements of the BioWatch system. The capabilities of the BioWatch system can be assessed at three different levels (fig. 2). At the highest level, BioWatch consists of an array of aerosol collectors deployed in an operational environment and the associated laboratory processes for analyzing samples. The operational environment might be outdoors, as in a metropolitan area (shown in the figure); indoors, as in an airport; or in a subway or other transit system. At this level, the capability of the system to detect an attack depends on factors that include the performance characteristics of the technology (including the aerosol collector and the technology used for laboratory analysis of samples), the number and locations of the collectors, the location of an attack (that is, where a biothreat agent is released into the air), and wind patterns (for an outdoor attack). At the next level is the detector, which consists of the aerosol collector unit and the process by which samples collected by this unit are transferred to a laboratory and analyzed. Performance and effectiveness at this level depend on the technical performance characteristics of the aerosol collector itself, the extent to which the sample is preserved intact during the collection cycle and during transport to the laboratory, and the laboratory processes that are used to prepare and analyze the sample. Finally, performance can also be assessed at the level of individual components of the detector. 
These include (1) the aerosol collection unit, which collects and retains aerosol particles on a filter; (2) the sample recovery process, by which samples are removed from the aerosol collector and transported to a laboratory; (3) the filter extraction process, in which aerosol particles are removed from the filter and put into a liquid solution; (4) the DNA extraction process, in which DNA is extracted from the aerosol particles in liquid solution in preparation for further analysis; and (5) the PCR assays, which are used to test for the presence of specific genetic signatures of biothreat agents (as described earlier). At the highest level—an array of detectors deployed in an operational environment—measures of performance include the system’s probability of detection (Pd) for attacks of different types and sizes. The BioWatch program employs a variation on the Pd measure that is designed to assess the system’s ability to detect attacks that could cause large numbers of casualties. Because BioWatch threat agents cannot be released into the air in operational environments, the performance of an array of detectors cannot be tested directly. One method of testing that can be used to address this limitation is the use of simulants for biothreat agents. A simulant is a selected nonpathogenic organism that mimics all or some of the physical or biological characteristics of one or more pathogenic agents. Another method that has been used for BioWatch involves computer modeling and simulation of attack scenarios. At the level of a single detector, key measures of system performance include limits of detection, probability of detection, and specificity. The limits of detection are the lowest aerosol concentrations at which the system can detect the presence of a biothreat agent with a defined level of reliability. 
Probability of detection is the likelihood that the system will correctly detect the presence of a biothreat agent when it is present at a given aerosol concentration in the immediate vicinity of the detector. Specificity is the probability that the system will correctly yield a negative result when a biothreat agent is not present (that is, the probability that the system will not generate a “false positive”). The term specificity may also refer to the system’s ability to distinguish a biothreat agent from other, similar organisms. At this single detector level, limits of detection and probability of detection reflect the sensitivity of the system and help determine the ability of an array of detectors in an operational environment to detect attacks; all else being equal, a more sensitive system will have greater ability to detect attacks. (Note, however, that the sensitivity of a single detector reflects its ability to detect an aerosol at the location where the detector is placed; additional analysis must be done to say what this means for the ability of an array of detectors in different locations to detect attacks of defined types and sizes.) The specificity of the system may contribute to the confidence that stakeholders and decision makers have in a positive result; a system with higher specificity is less likely to generate false positives, so users can have greater confidence in a positive result. Detector-level performance is assessed primarily through testing. Such tests may be conducted in laboratory chambers or in open air and may involve live biothreat agents or simulants. End-to-end tests using live agents are currently not possible for the BioWatch system, as the BioWatch threat agents cannot be released in open-air environments, and at present there is no indoor chamber in which testing the Gen-2 system with live agents is technically feasible. 
Consequently, end-to-end tests of BioWatch must rely on simulants, which may be inactivated, or killed, forms of the same agents that the system is designed to detect. Alternatively, simulants may be related organisms, either live or killed. Testing a biodetection system outdoors with killed related organisms presents the most realistic opportunity to evaluate performance in an operationally representative environment. Individual components of a detection system, such as the aerosol collector or the assay component, may be tested under strictly defined conditions. This type of testing could support comparisons of components with live agents and simulants and provides for a tight control of test conditions and variables for a robust characterization of components. However, testing components in a laboratory or chamber setting typically excludes some factors that might affect system performance in an operational environment, such as meteorological factors and materials in the air that might interfere with system performance (called interferents). Important meteorological factors include relative humidity, temperature, and solar irradiance; important interferents include pollutants (e.g., nitrates or carbon monoxide), as well as smoke and dust, all of which can influence the performance of a biodetection system. Also, the operational environment is difficult to reproduce in a biological containment chamber in terms of the aerosol concentration, particle size distribution, aging of the agent, and dispersion dynamics. The metrics used for components of a system depend on the component being tested. For example, for an aerosol collection component, they might include efficiency (i.e., the percentage of aerosol particles successfully collected on the filter and retained intact for subsequent analysis). 
For laboratory tests or assays, other metrics are used, including sensitivity, limits of detection, and specificity, as well as the efficiency of the processes by which samples are removed from the filter and prepared for analysis. Concerned with the threat of bioterrorism, in 2004, the White House released Homeland Security Presidential Directive 10 (HSPD-10), which outlined four pillars of the biodefense enterprise and discussed various federal efforts and responsibilities to help support it. The biodefense enterprise is the whole combination of systems at every level of government and the private sector that can contribute to protecting the nation and its citizens from potentially catastrophic effects of a biological event. It is composed of a complex collection of federal, state, local, tribal, territorial, and private resources, programs, and initiatives, designed for different purposes and dedicated to mitigating various risks, both natural and intentional. The four pillars of biodefense outlined in HSPD-10 were (1) threat awareness, (2) prevention and protection, (3) surveillance and detection, and (4) response and recovery. The BioWatch program falls under the surveillance and detection pillar, as an example of an environmental monitoring activity. Biosurveillance also includes disease monitoring and reporting to protect humans, animals, and plants from potentially catastrophic effects of intentional or natural biological events. However, in 2011, the National Academies evaluation of BioWatch noted considerable uncertainty about the likelihood and magnitude of a biological attack, and about how the risk of a release of an aerosolized pathogen compares with risks from other potential forms of terrorism or from natural diseases. BioWatch was deployed rapidly to meet a perceived need for a system to detect catastrophic attacks. 
More recently, as we reported in 2012, OHA officials told us they use the Bioterrorism Risk Assessment (BTRA) to inform BioWatch because it is the most relevant risk assessment available to them and because it allows OHA to focus BioWatch detection efforts on the biological agents of significant concern. However, in 2008, the National Academies raised concerns about the methods used to develop the BTRA, particularly the methods used to assess the probability of an attack. The last full BTRA was issued in 2010 and did not address all the recommendations made by the National Academies. The National Academies’ evaluation of BioWatch in 2011 also stated that to achieve its health protection goals, the BioWatch system should be better linked to a broader and more effective national biosurveillance framework that will help provide state and local public health authorities, in collaboration with the health care system, with the information they need to determine the appropriate response to a possible or confirmed attack or disease outbreak. In our earlier work, we highlighted the uncertainty about the incremental risk-mitigating benefit of the kind of environmental monitoring offered by BioWatch because of its relatively limited scope and the challenges agencies face in making investment decisions. In our June 2010 report on federal biosurveillance efforts, we recommended the Homeland Security Council direct the National Security Staff to identify a focal point to lead the development of a national biosurveillance strategy. We made this recommendation because we recognized the difficulty that decision makers and program managers in individual federal agencies face in prioritizing resources to help ensure a coherent effort across a vast and dispersed interagency, intergovernmental, and intersectoral network. 
Therefore, we called for a strategy that would, among other things, (1) define the scope and purpose of a national capability; (2) provide goals, objectives and activities, priorities, milestones, and performance measures; and (3) assess the costs and benefits and identify resource and investment needs, including investment priorities. In July 2012, the White House released the National Strategy for Biosurveillance to describe the U.S. government’s approach to strengthening biosurveillance, but it did not fully meet the intent of our prior recommendations, because it did not offer a mechanism to identify resource and investment needs, including investment priorities among various biosurveillance efforts. Further, in 2005, we reported that because the nation cannot afford to protect everything against all threats, choices must be made about protection priorities given the risk and how to best allocate available resources. The strategic implementation plan has not been publicly released, but according to the strategy, it will include specific actions and activity scope, designated roles and responsibilities, and a mechanism for evaluating progress. However, it is too soon to tell what effect, if any, it may have on determining resource allocation priorities across the agencies. DHS lacks reliable information about BioWatch Gen-2’s technical capabilities to detect a biological attack and therefore lacks the basis for informed cost-benefit decisions about possible upgrades or enhancements to the system. In order to assess Gen-2’s capability to detect a biological attack, DHS would have to link test results to its conclusions about the ability of arrays of deployed detectors to detect attacks in BioWatch operational environments. This would ordinarily be done by developing and validating technical performance requirements based on operational objectives, but DHS has not developed such requirements for Gen-2. 
In the absence of technical performance requirements, DHS officials said their assertion that the system can detect catastrophic attacks is supported by modeling and simulation studies. However, these studies have not directly and comprehensively assessed the capabilities of the Gen-2 system. Furthermore, in our review of the tests that have been conducted, we found there are limitations and uncertainties in the test results on the technical performance characteristics of the Gen-2 system. DHS commissioned four key tests of Gen-2’s technical performance characteristics, but has not developed and validated performance requirements that would enable it to interpret the test results and draw conclusions about the ability of an array of detectors in an operational environment to detect attacks. One test focused on the sensitivity of the whole system (that is, the aerosol collection unit and subsequent laboratory analysis of samples collected by that unit), while others focused on components of the system (table 1). None of these four tests focused on the highest level that we identified earlier—that is, an array of detectors placed in different locations. The four tests are described briefly below. Dugway Proving Ground conducted a test of the sensitivity of the whole system, from the unit that collects aerosol particles on a filter through the analysis that looks for genetic material from biothreat agents. This test was designed to assess the system’s ability to detect aerosols of different concentrations, and it produced estimates of the system’s limits of detection (that is, the lowest aerosol concentrations that the system could detect with defined levels of reliability). 
Dugway also conducted a test of the efficiencies of particular components of the Gen-2 system—in particular, the filter wash component (where aerosol particles are recovered from the filter into a liquid solution) and the DNA extraction component (where genetic material is extracted from the aerosol particles for further analysis). Each of these components influences the overall sensitivity of the Gen-2 system. Edgewood Chemical Biological Center conducted a test of the aerosol collection component of the system by aerosolizing particles and measuring the system’s efficiency at trapping these particles on the filter and transferring them into liquid solution, another component whose performance influences the overall sensitivity of the system. Los Alamos National Laboratory conducted tests of the PCR assays, which included measuring the assays’ sensitivity (their ability to detect different amounts of genetic material from the BioWatch threat agents), as well as their specificity (their ability to detect various strains of the BioWatch threat agents while correctly “ignoring” genetic material from other agents and interfering substances and materials commonly found in the environment). In addition to these four key tests, DHS commissioned a demonstration of the system in an outdoor environment and conducts quality assurance tests on an ongoing basis. Both of these provide additional information about the system’s capabilities; however, we do not include them in our list of key tests because neither was designed to produce estimates of key performance characteristics, including sensitivity, or to support conclusions about the types and sizes of attack the system can reliably detect. The outdoor demonstration, performed by the Naval Surface Warfare Center Dahlgren Division, involved releasing a simulant for one of the BioWatch threat agents and showed that the Gen-2 technology could successfully detect this simulant in an open-air environment. 
However, aerosol concentrations were not varied systematically and measured independently in such a way as to produce statistical estimates of the system’s sensitivity. Additionally, ongoing quality assurance tests of the laboratory component of the Gen-2 system include testing filters that (1) contain potential interferents from BioWatch operational environments and (2) have been “spiked” with samples of killed biothreat agents, to verify that the system correctly detects these agents. However, these tests challenge the system with just one concentration of agent on the filter and therefore do not involve the systematic variation in concentration that is required to produce statistical estimates of the system’s sensitivity. Rather than estimating the system’s performance characteristics, these quality assurance tests are designed to provide confidence that system performance meets or exceeds benchmarks based on past system performance. Under both DHS guidance and standard practice in testing and evaluation of defense systems, test results would be compared with predefined technical performance requirements. Those requirements would specify the technical performance parameters that a system must achieve in order to meet its operational objectives. In other words, requirements would provide targets against which test results can be evaluated in order to assess whether the system will reliably achieve its intended purpose. Technical performance requirements for BioWatch could include the limits of detection and probability of detection that a detector needs in order for a deployed array of detectors to reliably detect attacks of particular types and sizes. While DHS has commissioned some testing of the system’s performance characteristics, officials told us they have not developed technical performance requirements, which would enable them to interpret the test results and draw conclusions about the system’s ability to meet its operational objective. 
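To illustrate what interpreting test results against a technical performance requirement could look like, the sketch below estimates a limit of detection from dose-response data and compares it with a performance target. This is not DHS's or Dugway's actual method; the concentrations, trial counts, linear-interpolation approach, 0.95 reliability level, and the 200 particles/liter target are all invented for illustration.

```python
# Hypothetical sketch: estimate a limit of detection (LOD) from
# dose-response test data and compare it with a predefined requirement.
# All numbers are made up for illustration.

def detection_fractions(trials):
    """trials: {aerosol concentration -> (detections, attempts)}.
    Returns (concentration, observed detection probability) pairs,
    sorted by concentration."""
    return sorted((c, d / n) for c, (d, n) in trials.items())

def estimate_lod(trials, reliability=0.95):
    """Estimate the lowest concentration at which detection probability
    reaches `reliability`, by linear interpolation between tested
    concentrations. Returns None if the target is never reached."""
    points = detection_fractions(trials)
    for (c0, p0), (c1, p1) in zip(points, points[1:]):
        if p0 < reliability <= p1:
            # Interpolate between the bracketing test concentrations.
            return c0 + (c1 - c0) * (reliability - p0) / (p1 - p0)
    return points[0][0] if points and points[0][1] >= reliability else None

# Hypothetical chamber-test data:
# concentration (particles/liter) -> (positive results, trials).
trials = {10: (2, 20), 50: (12, 20), 100: (18, 20), 500: (20, 20)}
lod = estimate_lod(trials, reliability=0.95)

# A requirement would supply the target for comparison, e.g. "reliably
# detect 200 particles/liter" (a made-up number).
meets_requirement = lod is not None and lod <= 200
```

The point of the sketch is the comparison in the last line: without a predefined target, an estimated LOD by itself does not say whether the system can meet its operational objective.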
DHS officials told us that the system’s operational objective is to detect catastrophic attacks, which they define as attacks large enough to cause 10,000 casualties, and they stated that the system is able to meet this objective. However, as we have previously reported, the BioWatch system was deployed quickly in 2003 to address a perceived urgent need; it was deployed without performance requirements and, as the National Academies has reported, without sufficient testing. In keeping with Office of Management and Budget (OMB) guidance on making decisions about federal programs, decisions about upgrades for BioWatch will require comprehensive information about the benefits and costs associated with the current system, including its capability to meet its operational objective. However, DHS officials told us that in the 12 years since BioWatch’s initial deployment, they have not developed technical performance requirements against which to measure the system’s ability to meet its objective. Nevertheless, DHS has already taken steps to pursue enhancements to the Gen-2 system. Because DHS lacks targets for the current system’s performance characteristics, including limits of detection, that would enable conclusions about the system’s ability to detect attacks of defined types and sizes with specified probabilities, it also cannot ensure it has complete information to make decisions about upgrades or enhancements. In the absence of technical performance requirements for Gen-2, DHS officials said they have used modeling and simulation studies, commissioned from multiple national laboratories, to link test results to conclusions about the system’s ability to detect attacks. In particular, they said that these modeling and simulation studies support their assertion that the Gen-2 system can detect catastrophic attacks, defined as attacks large enough to cause 10,000 casualties. 
However, while DHS officials provided reports to illustrate the modeling and simulation work that has been done, none of the studies that were provided or described to us incorporated specific test results, accounted for uncertainties in those results, and drew specific conclusions about the Gen-2 system’s ability to achieve the defined operational objective. Further, according to officials, DHS has not prepared an analysis of its own that combines the modeling and simulation studies with the specific Gen-2 test results to demonstrate DHS’s assertions about the system’s capabilities to detect attacks of defined types and sizes. The modeling and simulation studies were designed for purposes other than to directly and comprehensively assess Gen-2’s operational capabilities. For example, one set of modeling and simulation studies, conducted by Sandia National Laboratories (Sandia) in collaboration with other national laboratories, was designed to predict the capabilities of hypothetical biodetection systems (similar to BioWatch) with different performance characteristics and deployed in different ways. For instance, these studies, which Sandia researchers called trade-space studies, assessed possible trade-offs in deploying fewer detectors with higher sensitivity or deploying more detectors with lower sensitivity. Sandia constructed models of hypothetical biodetection systems and then analyzed how these hypothetical systems would respond to simulated attacks of different sizes, using different agents, in different locations, and under different conditions (e.g., outdoor attacks with different wind speeds and directions, which affect how an aerosol disperses over an area). Because the goal was to assess hypothetical biodetection systems, Sandia analyzed ranges of hypothetical system sensitivities rather than incorporating the results of the four key tests of the performance characteristics of Gen-2. 
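A toy version of the trade-space question these studies examined, fewer high-sensitivity detectors versus more lower-sensitivity ones, can be simulated in a few lines. Everything here (the one-dimensional geometry, coverage radius, and detection probabilities) is invented for illustration and bears no relation to Sandia's actual models.

```python
import random

random.seed(7)

# Toy trade-space comparison: detectors are evenly spaced along a line; a
# simulated release lands uniformly at random and is "covered" if it falls
# within 1.0 distance unit of some detector. All parameters are invented.

def simulate(num_detectors, detect_prob_if_covered, trials=10_000, city=10.0):
    positions = [city * (i + 0.5) / num_detectors for i in range(num_detectors)]
    hits = 0
    for _ in range(trials):
        x = random.uniform(0, city)
        covered = any(abs(x - p) <= 1.0 for p in positions)
        if covered and random.random() < detect_prob_if_covered:
            hits += 1
    return hits / trials

few_sensitive = simulate(num_detectors=3, detect_prob_if_covered=0.95)
many_less_sensitive = simulate(num_detectors=8, detect_prob_if_covered=0.60)
print(f"3 sensitive detectors: Pd ~ {few_sensitive:.2f}")
print(f"8 less-sensitive detectors: Pd ~ {many_less_sensitive:.2f}")
```

Even this toy shows why such studies analyze ranges of hypothetical sensitivities: the better option depends on how coverage gaps trade against per-detector sensitivity, which is an empirical question about the deployed system.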
These studies drew no conclusions about the actual capabilities of the deployed Gen-2 system. Further, the trade-space studies did not incorporate information about the actual locations of Gen-2 collector units. Rather, these studies were designed to model hypothetical BioWatch deployments in which collectors were placed in optimal locations. If the Gen-2 collectors were not actually placed in these optimal locations, then model results might not accurately describe the capabilities of the system as currently deployed. In addition to the trade-space studies, DHS officials described modeling and simulation work they commissioned for the purpose of selecting sites for Gen-2 collector units; however, this work also had limitations that prevent specific conclusions about the Gen-2 system’s operational capabilities. Unlike the trade-space studies, the collector-siting analyses do include a test result that is meant to describe the sensitivity of the Gen-2 system. However, the test result used in this work was for just one of the five BioWatch threat agents, as decisions about collector siting are based on just this one agent. Consequently, these collector-siting analyses contain no information about the system’s capabilities to detect attacks using any of the other four BioWatch threat agents. Further, the test result used in this work was not from the four key tests of Gen-2 described earlier, but from an older test from 2004. An internal DHS analysis in 2013 noted that there were differences between the system tested in 2004 and the currently deployed system that limit the ability to draw conclusions from the 2004 results. Finally, the collector-siting studies use a measure of operational capability that does not directly support conclusions about the BioWatch objective of detecting attacks large enough to cause 10,000 casualties. In general, these studies use a measure called fraction of population protected, or Fp. 
Roughly speaking, Fp represents a system’s probability of successfully detecting simulated attacks, but calculated in a way that gives more weight to attacks that infect more people and less weight to attacks that infect fewer people. We believe this metric does not directly support conclusions about the system’s ability to detect attacks causing more than 10,000 casualties. Such conclusions would be supported by another metric that has been used by Sandia but is not preferred by the BioWatch program: the probability of detection (Pd) for attacks causing more than 10,000 casualties. DHS officials told us that they use Fp because BioWatch has a public health mission and so the system should be assessed in a way that reflects its ability to detect attacks that infect more people. However, Pd for attacks causing more than 10,000 casualties also incorporates public health impact; unlike Fp, it could directly support conclusions about the BioWatch operational objective, and, as noted in a Sandia report, is straightforward to communicate. Sandia officials told us that Fp has certain strengths and is appropriate for certain purposes. However, because the collector-siting studies focus on Fp, their results are not straightforward to communicate and do not support conclusions that align directly with the BioWatch operational objective. Finally, because none of the modeling and simulation work was designed to interpret Gen-2 test results and comprehensively assess the capabilities of the Gen-2 system, none of these studies has provided a full accounting of statistical and other uncertainties—meaning decision makers have no means of understanding the precision or confidence in what is known about system capabilities. Best practices in risk analysis and cost-benefit analysis require an explicit accounting of uncertainties so that decision makers can grasp the reliability of, and precision in, estimates to be used for decision making. 
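Before turning to those uncertainties, the difference between the two metrics can be made concrete with a small numerical sketch; the attack sizes and detection outcomes below are invented for illustration only.

```python
# Hypothetical sketch contrasting the two metrics discussed above: a
# casualty-weighted "fraction of population protected" (Fp) versus the plain
# probability of detection (Pd) for attacks above a casualty threshold.

# Each simulated attack: (casualties_if_undetected, detected_by_system)
attacks = [
    (500, True), (2_000, False), (8_000, True),
    (15_000, True), (40_000, False), (120_000, True),
]

# Fp: detection rate weighted by each attack's casualty count, so that
# large attacks dominate the score.
total_casualties = sum(c for c, _ in attacks)
fp = sum(c for c, detected in attacks if detected) / total_casualties

# Pd for catastrophic attacks: unweighted detection rate among attacks
# exceeding the 10,000-casualty threshold.
catastrophic = [(c, d) for c, d in attacks if c > 10_000]
pd_catastrophic = sum(d for _, d in catastrophic) / len(catastrophic)

print(f"Fp = {fp:.2f}; Pd (>10,000 casualties) = {pd_catastrophic:.2f}")
```

With these invented numbers the two metrics disagree (Fp is about 0.77 while Pd for catastrophic attacks is about 0.67), which illustrates why a high Fp does not by itself support a conclusion about the stated 10,000-casualty objective.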
Estimates of the Gen-2 system’s limits of detection, produced by the four key tests described earlier, contain multiple sources of uncertainty, which we describe in the next section of this report. None of the modeling and simulation studies that were provided or described to us incorporated information about the uncertainties associated with estimates of the system’s limits of detection. We also found that these studies did not account for uncertainty in some model inputs and assumptions, including estimates of how infectious each of the BioWatch threat agents is and how quickly each agent decays after it is released in the air. For example, Sandia researchers and a subject matter expert told us that there is considerable uncertainty in even the best available estimates of the infectious dose of anthrax, as these estimates are based on data from nonhuman primates. In an earlier study, Sandia researchers and others reported that “gaps in our knowledge of the correct dose-response relationship significantly limit our ability to predict the outcome of outdoor anthrax attacks.” For many of the assumptions the Sandia models used, researchers dealt with uncertainty by using not just single estimates but ranges of estimates. However, for the infectious doses of the five BioWatch threat agents, researchers used single estimates that DHS provided. The uncertainty in these estimates is important because DHS officials have characterized the operational objective of the BioWatch system as detecting attacks large enough to cause 10,000 casualties. In order to assess the system’s capability to achieve this objective, DHS must be able to correctly define the types and sizes of attack that fall into this category. If anthrax were less infectious than the models assumed, then DHS would be underestimating the system’s ability to detect catastrophic anthrax attacks. 
Conversely, if anthrax were more infectious than the models assumed, then DHS would be overestimating the system’s ability to detect such attacks. We recognize that more precise infectious dose estimates may not exist, but this underscores the uncertainty in the ability of the BioWatch system to meet its operational objective—uncertainty that should be articulated to better inform decision makers about the capabilities of the Gen-2 system and inform cost-benefit decisions about any possible enhancements to the system. We found limitations and uncertainties in the four key tests of the Gen-2 system’s performance characteristics—in particular, in the use of test chambers instead of operational environments and the use of simulants in place of live biothreat agents. As noted earlier, it is not possible to test the BioWatch system directly by releasing live biothreat agents into the air in operational environments. Because of this constraint, which is beyond DHS’s control, the agency commissioned tests that involved aerosols in test chambers or were limited to components of the system for which aerosols were not necessary. Further, officials and experts told us there are no test chambers where testing the Gen-2 system with live agent is technically feasible; thus some tests have involved simulants in place of live biothreat agents. Using laboratory chambers and simulants effectively addressed certain challenges, but both introduced uncertainties into testing results. Chambers often differ from operational environments in ways that can affect a system’s performance. For example, chamber environments are generally not designed to be representative of operational environments in such factors as air temperature, humidity, and, according to an expert, the presence of potential interferents in the air. 
Similarly, simulants may not mimic the biothreat agents that the system is designed to detect in all of the ways that matter for system performance; therefore, the system might perform differently when presented with the target biothreat agents than when tested with simulants. As a result, chambers and simulants create uncertainty as to whether test results accurately describe how the system would perform in an operational environment against live BioWatch threat agents, and this uncertainty should be clearly articulated for decision makers. Additionally, while one of the four tests assessed the performance of the whole Gen-2 system, the three other tests were limited by their focus on components of the system, including (1) the aerosol collection component; (2) the filter wash process, in which aerosol particles are transferred from the filter into a liquid solution; (3) the DNA extraction process, in which genetic material is extracted from the particles in liquid solution; and (4) the analytical component, in which PCR assays are used to detect genetic signatures of the BioWatch threat agents. According to a National Research Council (NRC) committee, it is uncertain whether test results from individual components of a biodetection system will accurately reflect the performance of the whole system. An expert told us that components may perform differently when combined than when tested separately, and parts of samples may be lost during transitions from one component to another in ways that affect the end-to-end performance of the system. DHS took steps to mitigate the limitations associated with not testing the Gen-2 system in an operational environment with live biothreat agents, but these limitations could not be eliminated entirely. 
For example, to address the fact that killed agents might not perfectly mimic live biothreat agents, the Dugway tests included a direct comparison of live and killed agents, but this could be done only for the analytical component of the system (that is, the PCR assays). The Edgewood Chemical Biological Center test of the aerosol collection component included variations in temperature and humidity. This somewhat mitigated the fact that chambers may not be representative of operational environments; however, only a small number of combinations of temperature and humidity were tested, and an expert told us other characteristics that might differ between chambers and operational environments were not varied. The Los Alamos National Laboratory tests of the PCR assays included testing the assays with a set of environmental organisms and substances. However, this test was limited to the specific organisms and substances used, and results do not generalize to other organisms and substances that might occur in BioWatch operational environments. In sum, although the key tests of the Gen-2 system took steps to mitigate limitations, uncertainties remain, and test results constitute only limited measures of key system performance parameters in an operational environment. While challenges associated with testing a system like BioWatch make some limitations unavoidable, according to experts and agency officials, we found that some limitations could likely have been mitigated. In 2004, an NRC committee proposed a framework for testing biodetection systems that was designed to minimize the uncertainties associated with laboratory chambers and simulants. In this framework, both the whole system and its components are systematically tested with simulants and, where possible, live biothreat agents. This is done in both laboratory chambers and environments that are more representative of operational environments where the system will be deployed. 
Importantly, this framework entails an integrated, systematic approach to testing in which some factors (e.g., agents, simulants, and environmental factors) are held constant while others are varied. The committee recommended focusing on a certain category of simulants, known as killed related strains, that have potential to mimic live biothreat agents while also enabling more realistic and frequent testing. The overall goal of the NRC framework is to home in on the true performance of the system when challenged with live agents in an operational environment—and, in so doing, to reduce the risk that the system will perform differently in the real world than it did during testing. DHS has not systematically tested the Gen-2 system under the most realistic possible conditions. Although DHS officials said they based their approach to testing Gen-2 on the NRC framework, we found that the Dugway test of system sensitivity did not incorporate killed related strains as simulants, as recommended by the NRC. Killed related strains would offer greater flexibility for use in more operationally representative environments. Furthermore, the Dugway test did not attempt to incorporate potential environmental interferents. As noted earlier, DHS also commissioned a demonstration that the Gen-2 technology could detect a simulant in an outdoor environment. However, aerosol concentrations were not varied systematically and measured independently in such a way as to produce statistical estimates of the system’s sensitivity. Furthermore, this open-air demonstration involved a simulant for just one of the five BioWatch threat agents, and, unlike the killed related strains recommended by the NRC, this simulant required that the system be modified to use a different PCR assay than is used for the actual threat agent. 
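The integrated, factorial character of the NRC framework, holding some factors constant while varying others, can be pictured as a simple test matrix; the factor levels below are illustrative placeholders, not an actual test plan.

```python
from itertools import product

# Sketch of the kind of systematic test matrix the NRC framework implies:
# crossing agents/simulants, environments, and environmental conditions so
# that results can be attributed to specific factors. Levels are invented.

agents = ["killed related strain", "live agent (where feasible)"]
environments = ["laboratory chamber", "operationally representative site"]
humidity = ["low", "high"]
temperature = ["cool", "warm"]

matrix = list(product(agents, environments, humidity, temperature))
print(f"{len(matrix)} test conditions, e.g.:")
for row in matrix[:3]:
    print("  ", row)
```

The full cross of even these four two-level factors yields 16 conditions, which is why a framework, rather than ad hoc testing, is needed to decide which cells to test and which to cover by inference.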
A GAO subject matter expert on outdoor testing of biodetection systems with simulants found the Dahlgren trials deficient in the equipment used to characterize the aerosols; alternative equipment known as slit-to-agar samplers could have provided more useful information on aerosol concentrations and exposure times. According to this expert, additional problems included an inadequate dissemination system and inadvisable testing at wind speeds below 2.0 meters/second. In general, DHS’s understanding of the performance characteristics of the current system would benefit from a more systematic approach to testing under more realistic conditions. Within the next year, some Gen-2 equipment will reach the end of its lifecycle, and DHS will need to make decisions about reinvesting in the program. Further, DHS officials told us they are considering potential improvements or upgrades to the Gen-2 system. Based on OMB guidance, cost-benefit decisions about investing in new equipment and in potential system improvements will require information about current operational capabilities. However, because of the limitations we have identified, decision makers cannot be assured of having sufficient information to confirm that future investments address a capability gap the current system does not meet. As noted earlier, some limitations in testing are unavoidable given the nature and purpose of the BioWatch system. Likewise, some limitations and uncertainties are unavoidable in the modeling and simulation work that DHS has commissioned to link test results to operational capabilities (e.g., the uncertainty in infectious dose estimates for the BioWatch threat agents). 
These limitations underscore the need for a full accounting of statistical and other uncertainties, without which decision makers lack an understanding of the precision in what is known about the system’s capability to detect attacks of defined types and sizes and cannot make informed decisions about possible upgrades to Gen-2. In 2013, in collaboration with the National Academies, we identified eight best practices for developmental testing of threat detection systems. When comparing DHS’s actions and decisions regarding the planned acquisition and testing of Gen-3 against these practices, we found that DHS’s actions partially aligned with them. We also identified several lessons DHS could learn by applying these practices more systematically to improve future testing and acquisition efforts—for example, testing of possible upgrades or enhancements to Gen-2. We also highlight the testing of other DHS acquisitions that faced challenges.

Role of Developmental Testing

Developmental testing is intended to assist in identifying system performance, capabilities, limitations, and safety issues to help reduce design and programmatic risks. In previous work, in collaboration with the National Academies, we recruited experts to develop best practices for developmental testing of binary threat detection systems. According to these experts, the best practices apply whether the tested system is commercial-off-the-shelf (COTS), modified COTS, or newly developed for a specific threat detection purpose by a vendor or the government. 
In discussing the role of testing early in an acquisition, officials from S&T’s Office of Operational Test and Evaluation (OT&E) said programs such as the BioWatch Gen-3 acquisition face a significant challenge in acquiring COTS products, because these products are neither designed to operate under, nor initially tested against, the requirements the federal government needs. OT&E officials said program officials typically underestimate the testing needed to acquire a COTS product for their mission, even though such testing is needed to ensure the COTS solution is reliable, scalable, and secure. We reported in 2012 that, according to BioWatch program officials, developing autonomous biological detection had proved challenging, in part because some of the technology required was novel, but also because even the existing technologies—for example, the aerosol collection unit and the apparatus that reads the PCR results—had not previously been combined for this specific application in an operational environment. S&T OT&E officials reflecting on the acquisition said Gen-3 Phase I testing became developmental in nature, with additional steps built into the test design to ensure the technology could progress to the next level, particularly to ensure assay detection met the program’s requirements. We consider the best practices for developmental testing applicable to Phase I of the Gen-3 acquisition because they could have helped identify challenges early in the acquisition process. Additionally, applying the best practices and lessons learned during the Gen-3 testing could help mitigate the risk that DHS acquires immature technology as part of its effort to make enhancements to Gen-2. Appendix I has more information on our methodology for developing the best practices, and appendix III has a description of each best practice. Our analysis of DHS’s alignment with the eight best practices for developmental testing of Gen-3 follows and is summarized in table 2. 
DHS’s Actions Partially Aligned with Best Practice 1

DHS took some risk-mitigating steps during Gen-3 Phase I but did not conduct a full risk assessment at the outset of the acquisition. According to program officials, Phase I of the overall Gen-3 acquisition was itself a risk mitigation activity, designed to assess the capability of industry to provide mature autonomous detection solutions before committing to a rapid and extensive program to procure and field operational systems. For example, according to BioWatch program officials, DHS conducted market research to assess whether solutions existed that were potentially capable of meeting DHS performance and maturity (and therefore schedule) requirements. In an effort to ensure accountability, OHA held a “Demonstration Day” early in the Phase I source selection process, where vendors participated in tests designed to confirm their technical maturity claims. This reduced (but did not eliminate) the risk of awarding Phase I contracts to a vendor not capable of completing the tests planned under Phase I. Additionally, officials said OHA initially planned to conduct all testing in parallel. However, because of schedule slips associated with contract issues, program officials ultimately scheduled the tests incrementally to allow for the insertion of decision points on whether testing should continue. This allowed program officials to engage in the testing and evaluate test results to help reduce technical and financial risk. As a result of this multi-stage Phase I testing approach, DHS identified limitations to one vendor’s detection system that could not be overcome before proceeding with the next stages of testing.

DHS Could Apply Lessons Learned

An earlier evaluation of risk might have averted the difficulties with cost and schedule estimates of the Gen-3 acquisition that in part led to its cancellation in April 2014. 
According to this best practice, risk needs to be assessed whether the system is COTS, modified COTS, or newly developed, because even with commercial items, significant modifications may be needed. DHS ultimately performed a formal risk assessment of the Gen-3 acquisition, but not until after Phase I testing ended. The absence of a risk assessment at the start of the acquisition left the program with challenges it was too inflexible to overcome. In 2012, we reported that DHS did not fully engage in the early phases of its acquisition framework to ensure that the acquisition pursued an optimal solution in the context of its costs, benefits, and risks. Our prior work has found that stable parameters for performance, cost, and schedule are among the factors that are important for successfully delivering capabilities within cost and schedule expectations. Our work has also found that without the development, review, and approval of key acquisition documents, agencies are at risk of having poorly defined requirements that can negatively affect program performance and contribute to increased costs. For example, despite having limited assurance that the acquisition would successfully deliver the intended capability within cost and on schedule, the Deputy Secretary approved the initial stages of the acquisition. DHS’s Post Implementation Review Report, which lays out lessons learned on the Gen-3 acquisition, states that the Phase I testing demonstrated that schedule risk analyses should be used to set realistic test and evaluation schedule expectations. According to program officials, they set the acquisition schedule estimate aggressively because there was pressure to quickly deploy an autonomous detection capability. However, schedule revisions were needed because of significant changes in performance, deployment schedule, and cost expectations as a result of the Phase I testing. 
By not engaging in the initial steps of the acquisition framework to effectively account for risks early in the acquisition, DHS did not demonstrate full accountability and exceeded its cost and timeframe estimates. As a result of the Phase I testing, and on the basis of outside reviews of the Gen-3 acquisition, DHS directed OHA to conduct a more robust analysis of alternatives that included assessing risk. This led to the cancellation of the Gen-3 acquisition in April 2014. The identification of potential risks, and strategies to overcome these risks, helps ensure accountability on the part of the agency and may alleviate problems with the acquisition of threat detection systems if they are part of the early planning for testing. As DHS considers upgrades to the currently deployed Gen-2 BioWatch system or considers the acquisition of new detection technologies, early identification of risks may help better guide DHS by identifying areas for enhanced engagement that may be needed during the testing and to help ensure proper accountability for decision making during the acquisition.

Acquisition Decision Event (ADE) 2A

An acquisition decision event is a point at which the Acquisition Review Board—a cross-component board of senior DHS officials—determines whether a proposed acquisition has met the requirements of the relevant Acquisition Life-Cycle Framework phase and is able to proceed. ADE-2A is the culminating event for the Analyze/Select phase of the DHS acquisition framework, where DHS determines whether to authorize the acquisition to proceed to the Obtain phase, where testing and evaluation occur.

DHS did not sufficiently involve the end user community in the development of Gen-3 system requirements or parts of Phase I testing; rather, it relied on internal subject matter experts to develop requirements. 
As we reported in 2012, the process used to set the sensitivity requirement did not reflect stakeholder consensus about how to balance mission needs with technological capabilities. Specifically, the BioWatch program did not prepare a concept of operations (CONOPS) before ADE-2A. According to DHS acquisitions guidance, in developing a concept of operations, stakeholders engage in a consensus-building process regarding how to balance technological capabilities with mission needs in order to gain consensus on the use, capabilities, and benefits of a system. For BioWatch, this could include specifying the level of population protection the system should provide and then specifying the sensitivity levels needed to provide that level of protection. According to OHA officials, the high-level capability gaps documented in the mission need statement (timeliness, population coverage, time resolution, and cost-effectiveness) were representative of feedback from this user community with respect to improvements needed on the Gen-2 system, particularly for indoor deployment. Therefore, officials said they did not directly involve jurisdictional public health stakeholders in establishing the technical requirements, including sensitivity requirements, for Gen-3.

Role of Subject Matter Experts (SMEs) in Developmental Testing

Distinct from the user community, SMEs provide independent technical advice and monitor the status of developmental testing to help ensure tests are conducted and analyzed properly—a practice the National Academies has supported for BioWatch. SMEs may include test designers, engineers, and statisticians. For a program like BioWatch, SMEs may have expertise in epidemiology, environmental health, public health laboratory systems, infectious diseases, genetics, and detection technology, among other disciplines. 
Department of Homeland Security (DHS) acquisitions policy includes guidance on using independent technical advisors as part of the test and evaluation process. For example, an operational test agent (OTA) is an independent entity that supports development of the test and evaluation master plan (TEMP) and monitors developmental testing in order to understand system performance early and determine how to execute integrated developmental and operational testing. The OTA presents objective and unbiased conclusions in reports and test readiness reviews. DHS guidance also describes the role of the Integrated Process Team, which is composed of representatives from program leadership, stakeholders, and SMEs involved in testing activities. OHA relied on in-house experts and other department officials whom they said had the expertise to convert the high-level mission needs into detailed technical performance requirements. These included requirements for system sensitivity, time needed to detect an attack, and the probability of false positives, among other things. According to OHA officials, jurisdictions (especially the four that were selected for OT&E) were kept informed through meetings with the program office, independent test agencies, and the two competing vendors in the Phase I testing. OHA officials also said they provided numerous updates through events such as the National BioWatch Stakeholders Workshop and webinars, and invited all jurisdictions to submit questions to the Gen-3 or BioWatch program managers. According to OHA officials, the end user community was also invited to observe testing at two special testing events during Phase I. However, informing end users of the status of testing is not the same as including them in the development of requirements and testing. If DHS had involved end users as part of the testing process, it could have shed additional light on the potential challenges for end users in operating the technology being considered. 
For example, an official who tested the NG-ADS system described months-long training that was required to understand the systems being tested and evaluated. More closely involving the end users in the testing may have revealed additional end user views on their ability and willingness to use the equipment, given its complexity. Additionally, in December 2010, the Under Secretary for Management issued an ADM that, among other things, highlighted the continued need to develop a CONOPS. The ADM cited significant risk to the program because of the high-level coordination required to act upon detection information produced by the autonomous detection system, and because the program documentation contained insufficient detail describing the necessary coordination process among end users and other stakeholders.

DHS Could Apply Lessons Learned

DHS recognized in its Post Implementation Review that stakeholders and end users need to be involved earlier in the acquisition process, including validating advanced detection systems and methods before they are fielded. To better understand the needs, concerns, and capabilities of the user community, in the future, DHS could take steps to engage with stakeholders early on in the development process. As DHS considers upgrades to the current Gen-2 system, DHS should, in accordance with DHS acquisition guidance, prepare a concept of operations and ensure end users are engaged throughout the testing of upgrades or enhancements to the Gen-2 system or new acquisitions for BioWatch.

DHS's Actions Partially Aligned with Best Practice 3

The systems engineering approach was outlined in the Gen-3 test and evaluation master plan (TEMP), which was generally clear in defining the boundaries of the system to be tested. The tests were for the most part appropriately scoped given the systems engineering view that was taken.
However, testing revealed that evaluation of these detection systems may benefit from more robust testing methods, particularly to test performance against environmental contaminants. The primary purpose of the Phase I testing was characterizing the systems’ performance through a series of tests that included an aerosol collection subsystem test, evaluation of assays, an analytical subsystem test, a system chamber test, and a field test. The TEMP clearly identified these test boundaries and stated that all technologies must meet the five key performance parameters (KPP) during Phase I testing before selection would be made and Phase II testing (operational testing) was initiated. KPPs included detection of the biological agents, system sensitivity, the time to detect, achieved availability (for example, the probability the detector will operate under normal conditions), and the probability of a false positive. The TEMP recognized inherent limitations to the systems engineering testing for the Gen-3 system. For example, whereas the environmental conditions under which the Gen-3 system must operate were outlined in the TEMP, no chamber yet exists in which these requirements can be fully tested. In addition, according to DHS officials, legal constraints, public safety, and public perception limit the type of material that can be aerosolized in a realistic setting for test purposes. The TEMP outlined the systems engineering approach for Gen-3 testing by articulating the major issues that needed to be addressed in testing the system, including the key performance parameters, and accounted for the limitations given constraints on the type of testing that can be done. 
DHS Could Apply Lessons Learned

Although DHS clearly articulated the boundaries of the testing for Phase I and took steps to test the autonomous detectors against likely environmental contaminants that might interfere with detection, the systems engineering approach could have benefited from more robust and comprehensive testing methods. For example, tests at Dugway also attempted to account for the possibility that environmental pollutants might interfere with the performance of the PCR assays by placing samples in liquid solutions taken from actual BioWatch filters retrieved from operational environments. However, researchers told us they were unaware of which operational environments the filter washes had come from, and there was no sub-analysis by type of environment (e.g., outdoor versus indoor versus subway), which raises the possibility that pollutants from certain environments may not have been represented in the test or that pollutants from certain environments may have been diluted with filter wash from other environments. The pooled filter wash was used to test the Gen-3 analytical process in order to look for possible inhibition (e.g., interference) of the PCR assays. Officials said that during the Gen-3 testing, the pooled filter wash used to test the analytical process never showed interference in acquiring a result to an extent that would have required additional testing steps. However, in the final Phase I test, DHS fielded detectors in Chicago to demonstrate the performance of the candidate technology's full system in a representative environment. Results of the testing showed that the candidate system's performance was inconsistent in different operational environments. Specifically, detectors located on underground subway platforms had higher incidences of malfunction than detectors in other locations.
These malfunctions may be associated with the presence of metallic brake dust, which demonstrates that different operational environments pose different challenges. Additional rigor in the testing design could have identified limitations earlier and perhaps mitigated them prior to the field testing in Chicago. In fact, the final report by the National Assessment Group on the operational assessment of the Gen-3 Phase I testing concluded that, in retrospect and based on the outcome of the Chicago field test, the levels of some possible environmental inhibitors represented in the Dugway testing, such as metallic brake dust, were significantly diluted or did not reach concentrations comparable to those in some of the more problematic operating environments. Therefore, a more robust systems engineering approach to testing contaminants may have helped identify challenges like this earlier. The BioWatch program office agreed that the limited environmental contaminant testing was insufficient to draw any conclusions about system performance on this issue. According to program office officials, the program planned to implement robust interferent tests during Phase II to characterize performance against all of the operational requirements and said Phase I was only meant to characterize the state of the market for autonomous detection systems. However, the TEMP for Gen-3 states that the goal of Phase I testing was to evaluate the ability of the candidate systems to meet performance requirements specified in an operational requirements document (ORD). The ORD for Gen-3 specifically outlines the indoor and outdoor environmental conditions under which the system is expected to operate, including, but not limited to, exposure to dust, metallic dust, diesel exhaust, pollen, rain, snow, ice, wind, salt spray, as well as ranges in temperature, humidity, and altitude.
DHS’s plan to defer more robust testing of conditions representative of Gen-3’s intended operational environment does not align with best practices, as performance problems uncovered in later stages of testing can be more costly and require additional testing. While a basic approach to account for environmental contaminants was included in the systems engineering approach, the program office recognized that the results of the field testing highlighted additional limitations to the test approach for environmental contaminants. The BioWatch program may benefit by incorporating this lesson learned when designing future testing approaches for upgrades or enhancements to the Gen-2 program. DHS Partially Aligned with Best Practice 4 DHS included statistical experimental design in its Gen-3 testing plans in order to test performance and characterize uncertainty in the test results. However, the statistical experimental design constructed by DHS was not sufficient to estimate system performance in a realistic environment and did not link KPPs in the ORD to overall program objectives via creation and use of an appropriate model of system performance. In the analytical subsystem test, conducted at Dugway, the candidate Gen-3 system was challenged with aerosols of different concentrations in order to estimate its probability of detection for four of five BioWatch threat agents. Concentrations were systematically varied and were selected using a statistical method that was designed to yield reliable estimates of the system’s probability of detection as efficiently as possible (i.e., reducing the experimental effort required to obtain sufficiently reliable information). Statistical uncertainties were calculated for the resulting estimates, and statistical modeling was used to characterize the relationship between aerosol concentration and the system’s probability of correctly detecting the presence of each biothreat agent. 
Statistical experimental designs were also used in the tests of the aerosol collection component, conducted at Edgewood Chemical Biological Center, and the PCR assays, conducted at Los Alamos National Laboratory. Additionally, the Gen-3 tests included experimental conditions designed to account for factors seen in operational environments, though we have identified limitations in some of these tests, as discussed earlier in the report.

DHS Could Apply Lessons Learned

According to this best practice, test design should be based on a clear understanding of goals and incorporate users' needs. This could be achieved, for example, by linking KPPs in the ORD to overall program objectives and user needs via creation and use of an appropriate model of system performance, but this was not done prior to the Gen-3 testing. Operational requirements, from which the KPPs were derived, were not clearly linked to an overall mission need or program goal. The absence of these linkages among mission need, requirements, and parameters for measuring system performance means results from the Gen-3 testing cannot speak to whether the system would address an established mission need or users' needs. According to program officials, the operational requirements outlined in the ORD were directly linked to the approved Mission Needs Statement for Gen-3. However, in 2012, we reported that officials were aware that the Mission Needs Statement did not reflect a systematic effort to justify a capability need, and that the program wrote the Mission Needs Statement later to justify a predetermined solution of acquiring an autonomous detection capability. Additionally, these tests did not cover all of the system requirements specified by the ORD. For example, one of the system requirements was a maximum false positive rate.
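Demonstrating compliance with a maximum false positive rate requires a defined number of clean trials, which can be made concrete with a small sketch. The 1 percent maximum rate below is hypothetical, chosen only to show how trial counts map to achievable confidence bounds; the calculation itself is the standard exact binomial bound for zero observed failures.

```python
import math

def upper_bound_zero_failures(n, conf=0.95):
    """Exact one-sided upper confidence bound on a failure (false positive)
    rate when 0 failures are observed in n independent trials."""
    return 1.0 - (1.0 - conf) ** (1.0 / n)

def trials_needed(max_rate, conf=0.95):
    """Smallest n such that observing 0 false positives in n trials
    bounds the true rate below max_rate at the given confidence."""
    return math.ceil(math.log(1.0 - conf) / math.log(1.0 - max_rate))

# With only 20 clean trials per agent and no false positives observed,
# the 95% upper bound on the true rate is still about 13.9%.
bound_20 = upper_bound_zero_failures(20)

# Bounding the rate below a hypothetical 1% requirement would take
# roughly 300 clean trials per agent.
n_for_1pct = trials_needed(0.01)
```

This arithmetic illustrates why 20 trials per agent cannot, by themselves, demonstrate a stringent false positive requirement with a defined level of statistical precision.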
DHS noted in the TEMP that the probability of false positives would be estimated from test results using statistical techniques; however, the TEMP did not explain how this would be done, and in testing, DHS did not do so. Instead, DHS tested 20 times (per agent) in the assay tests and 15 times (per agent) in the Dugway tests, and drew conclusions despite not having designed experiments that would produce estimates of the system's false positive rate with defined levels of statistical precision. By using statistical experimental designs in the testing of the Gen-3 technologies, DHS was able to characterize some uncertainty in the test results. However, DHS was not able to determine the false positive rate with a defined level of statistical certainty, and thus was not able to conclude that any tested system or system component satisfied its stated requirement. As DHS considers upgrades to the Gen-2 system or future technology switches for the BioWatch program, DHS could apply lessons learned from the Gen-3 testing to help develop meaningful requirements that are linked to a mission need and where operational objectives (e.g., lives saved) are linked to measurable KPPs, such as system sensitivity. DHS could also use statistical experimental design in future testing of upgrades or enhancements to the Gen-2 system to help characterize uncertainty in results and ensure a representative range of operating conditions are sufficiently used to test system performance.

DHS's Actions Partially Aligned with Best Practice 5

DHS created and executed a well-articulated, but incomplete, plan for measuring and characterizing certain aspects of system performance using established procedures, methods, and metrics. For example, measuring system performance included ranking and scoring by agent the system's ability to detect known strains of an agent, including near-neighbor strains of an agent, and ability to detect agents with possible environmental contaminants present.
System sensitivity was characterized using an adaptive methodology to determine, for a range of concentrations of agents, the probability of detection at each concentration. These probabilities were associated with confidence intervals to allow assessment of the range of performance one may expect. Both live and killed agents were used to assess detection sensitivity so that results from component testing could be extrapolated to whole system performance, because currently no facilities exist to perform whole-system live agent testing. Additionally, DHS evaluated suitability requirements by testing variables such as human factors; reliability, availability, maintainability (RAM); supportability; and survivability in an operational environment.

DHS Could Apply Lessons Learned

Whereas the TEMP and other test plans list the five KPPs for the Gen-3 BioWatch candidate systems, there is no specific link between these metrics and the mission objectives for the Gen-3 system. Further, it is not clear how the results of testing in non-representative environments would be used to determine system suitability for Gen-3 purposes. In addition to these best practices, developmental testing guidance indicates that developmental testing is intended to vet systems early to determine the suitability of the system for meeting performance requirements. By testing in a variety of modes intended to replicate an operating environment, results of the tests can be used to characterize and evaluate relevant system performance. However, the tests that were intended to account for factors seen in operational environments were of limited range and scope. For example, the test plan for the Chicago field test suggested that the system would be exposed to dust, metallic dust, smoke, diesel exhaust, and pollen, under extended temperature, altitude, and relative humidity ranges. However, there was no design of statistical tests intended to address these variations in Phase I.
The analytical subsystem testing did not allow for sub-analysis of pollutants from different environments (e.g., outdoor, indoor, subway), and so it was not possible to identify effects of any specific pollutant, such as subway brake dust. Additionally, the Edgewood test of the aerosol collection component included tests at different temperatures and humidities; however, relatively few combinations of temperature and humidity were tested, which limited the range of environmental conditions represented and the utility of the results in determining how the system would perform under various operational environments. Because DHS performed only limited testing in this regard, it was not able to draw conclusions early in the testing process to determine whether the systems would meet performance requirements. In the future, DHS could improve BioWatch testing efforts by incorporating clearly articulated measures and use of established procedures, methods, and metrics to characterize system performance. This will help ensure DHS collects the information it needs to evaluate operational suitability of upgrades or enhancements to the Gen-2 system.

DHS's Actions Partially Aligned with Best Practice 6

DHS performed some testing that could help build resilience into the system during Phase I testing, but it could improve resilience testing with more rigorous methods. According to OHA officials, the Phase I acquisition and test and evaluation (T&E) strategies were specifically designed to address the concern of identifying potential vulnerabilities in the systems under test. As part of this strategy, for example, Edgewood tested the collection efficiency of the filters under varying temperature and humidity conditions. The system was also tested in an operational environment in Chicago to assess the effects of environmental interferents, among other things, to help identify vulnerabilities in operating the system.
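One way to reason about coverage of environmental conditions such as varying temperature and humidity is to enumerate a factorial design and track which combinations a test plan actually exercises. The levels and the tested subset below are hypothetical, for illustration only; the real ORD specifies numeric temperature, humidity, and altitude ranges rather than coarse categories.

```python
from itertools import product

# Hypothetical discretization of the environmental envelope into levels.
temperatures = ["low", "ambient", "high"]
humidities = ["low", "moderate", "high"]

# The full-factorial design covers every temperature-humidity pairing.
full_factorial = set(product(temperatures, humidities))  # 9 combinations

# Hypothetical limited subset, analogous to testing high temperature with
# low humidity but never high temperature with high humidity.
tested = {("high", "low"), ("low", "moderate"), ("ambient", "moderate")}

untested = full_factorial - tested
coverage = len(tested) / len(full_factorial)
```

Enumerating the design this way makes gaps explicit before testing begins: any combination left in `untested`, such as high temperature with high humidity, is a condition under which system performance remains uncharacterized.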
Further, DHS recognized that developing autonomous detection had proved challenging because, in addition to some of the required technology being novel, even the existing technologies—for example, the aerosol collection unit and the apparatus that reads the PCR results—had not been combined for the specific application of autonomous detection in an operational environment before. Executing assay evaluation and subsystem- and system-level characterization tests incrementally allowed for the insertion of decision points to reduce technical and financial risk. As a result of its multi-stage approach to testing the systems, DHS identified limitations to one vendor's detection system that could not be overcome before proceeding with the next stages of Phase I testing, so that vendor did not complete the entire Phase I testing plan. DHS also included provisions in the Phase I contracts for engineering change proposals to allow the vendors, at DHS discretion, to address deficiencies found during testing to inform DHS decisions on proceeding to Phase II.

DHS Could Apply Lessons Learned

While DHS took some steps to build resilience into the Phase I testing, additional rigor could have been built into the testing design to reveal potential vulnerabilities in the performance of the systems tested. DHS's final test in Phase I, designed to test resilience, involved fielding the detectors in Chicago to demonstrate the performance of the candidate technology's full system in a representative environment. DHS was able to identify some limitations—such as environmental contaminants—to the detection systems being evaluated. However, this occurred late in Phase I testing, and represented only a limited cross-section of possible challenges, including temperature, humidity, and vibrations.
For example, while the test plan for the Chicago test lists a range of temperatures and humidities the system is expected to operate under, the field testing did not reflect the entire operational range. Additionally, other tests on temperature and humidity conditions were performed in limited combinations, such as high temperature and low humidity, but not others, such as high temperature and high humidity. Further, these temperature and humidity tests were done only on the aerosol collection unit, and not the entire Gen-3 system. Because the Gen-3 system as a whole was expected to operate continuously under such conditions, just as Gen-2 currently operates, simply testing one component, the aerosol collection unit, under a limited combination of temperatures and humidities does not adequately test the system for purposes of building in resilience. Some of these conditions may have provided earlier indications of vulnerabilities that did not arise until near the end of Phase I testing. For example, to assess the effect of interferents, Dugway testing used pooled filter washes provided by DHS. The environments from which the filters came were not provided to Dugway, and interferents from different filters were combined. As a result, the effect of individual interferents was diluted. When the systems were tested in Chicago near the end of Phase I, DHS found that system problems occurred, attributed to petrochemicals near highways and to brake dust in subway stations. If specific interferents, such as subway brake dust, had been tested at representative concentrations prior to the Chicago testing, they might have revealed issues earlier in Phase I. Other conditions, such as network communication performance, were not tested or were tested in a limited fashion. According to the final report on the Gen-3 testing, tests of network communication performance were modified based on user needs in the Chicago area where this capability would have been demonstrated.
Therefore, network performance was left undetermined at the end of Phase I testing. As a result of not including more rigorous testing methods to test resilience, information about system failures in different environments was not revealed until late in the Phase I testing. By following this best practice in future testing of Gen-2 upgrades or enhancements, DHS could help mitigate risks the agency is likely to face in acquiring these types of threat detection technologies.

DHS's Actions Partially Aligned with Best Practice 7

DHS used Phase I test results to determine the likelihood Gen-3 candidate systems could meet performance requirements, but revisions to performance requirements were based on a modeling study, rather than the outcome of the Phase I testing. According to OHA officials, at the time OHA began Phase I planning and execution, there was not a robust analytical capability to determine mission-based requirements for key technical parameters (such as system sensitivity). Officials said that, absent such a capability, consensus on the technical parameters was reached based on the collective best judgment of OHA BioWatch Program Office and S&T Chem Bio Defense subject matter experts. According to DHS officials, the result was more of a “technology push” requirement than a mission outcome driven requirement, and was based upon what was believed to be the state of the art for PCR-based systems. However, as we reported in 2012, when the technologies were unable to meet the technology push requirement as determined by some Phase I testing, DHS encountered delays and uncertainty about how to move forward. In response to these concerns, the BioWatch Program Office conducted a detailed requirements analysis through a national lab consortium led by Sandia National Laboratory. The study assessed the utility of biodetection systems with varying levels of sensitivity in terms of detection timeliness, population coverage, and lives saved in a bioterror attack.
As we reported in 2012, the study, which was completed in January 2012, contained findings that, according to BioWatch Program officials, confirm that the sensitivity requirement could be relaxed without significantly affecting the program's public health mission. As a result, DHS set a new sensitivity requirement based on the modeling studies.

DHS Could Apply Lessons Learned

According to experts, developmental testing should be viewed as a tool in helping to refine performance requirements, and a meaningful performance requirement is one that is not only achievable but also strives to maximize the fulfillment of a mission need. As we reported in 2012, according to BioWatch program officials, the original sensitivity requirement was set based on interest in pushing the limits of potential technological achievement rather than in response to a desired public health protection outcome. They said that this led to a requirement that may have been too stringent, resulting in higher costs and schedule delays without demonstrated mission imperative. Further, having requirements based on operational objectives would allow the results from Phase I testing to inform the users regarding the anticipated capabilities of the system and the likelihood that a tested system would be able to meet mission needs. According to experts, developmental testing may help to identify the performance envelope of the system and inform decisions about refined performance requirements. Failure of the Gen-3 candidate system to meet the initial performance requirements resulted in delays and uncertainty regarding the Gen-3 acquisition. DHS took steps to refine the sensitivity requirement for Gen-3, but it was the modeling study, rather than the developmental testing protocol, that led to the change.
However, we found that even the relaxed sensitivity requirement did not link system performance to a clear operational objective (e.g., to detect attacks of certain types and sizes), as discussed earlier in this report. Instead, the relaxed requirement was based on ideas about the performance characteristics of the current (Gen-2) system—the idea being that Gen-3 could be less sensitive than Gen-2 but still achieve comparable public health outcomes because of its greater speed. According to DHS, the change in sensitivity requirement was linked to casualty reduction, and the agency does not agree with our assessment. However, as noted earlier in this report, the performance of Gen-2 has not been linked to a clear operational objective; therefore, because the revised sensitivity requirement for Gen-3 was based on Gen-2, it was not grounded in an operational objective, either. In any future acquisition, upgrade, or enhancement to Gen-2, having initial requirements based more closely on mission need and operational objectives may prevent delays and uncertainty. In its Post Implementation Review, DHS also recognized the need to better communicate with stakeholders about using a flexible testing approach to refine requirements to avoid any misperception that the requirements would be adjusted to accommodate the vendor's capabilities.

DHS's Actions Aligned with Best Practice 8, as Applicable

DHS took steps to engage in a continuous cycle of improvement during the Gen-3 acquisition, but not all components of this practice apply, as DHS never reached the stage of operational testing. In the Gen-3 TEMP, DHS describes an integrated test approach, designed as a continuum from developmental testing through operational testing, with the previous test events acting as the foundation for the follow-on events.
However, given the expectation of rapid acquisition and deployment, DHS missed opportunities to fully engage in the broader testing approach needed for a novel system approach to biodetection. Because DHS did not engage in a more rigorous testing approach, when the autonomous detection systems tested did not yield favorable outcomes, the results raised too much uncertainty about the program's cost and performance. As a result, DHS canceled the Gen-3 acquisition and prepared lessons-learned documents for the Gen-3 acquisition that are intended to inform the BioWatch program's actions in the future. This could include applying the lessons learned to improvements or upgrades to the existing Gen-2 system. OHA officials reported that following the Phase I testing, the BioWatch program facilitated lessons-learned conferences that included relevant stakeholders. For example, OHA officials said results from the Phase I testing indicated that the basic approach used to assess the technology worked well and appeared to answer the relevant questions as to the readiness of a technology considered for deployment. Specifically, they said testing of major subsystems independently was useful, as was the chamber testing conducted using a killed (nonviable) agent. Other positive aspects included the usefulness of an interagency test and evaluation Working Integrated Product Team, and the value of independent test agencies. According to OHA, lessons learned include the need for widely accepted PCR Assay Performance standards (recently reviewed by a National Academies consensus committee) and the establishment and enforcement of operational performance criteria during testing to avoid repeated adjustments such as those that were made by one of the vendors to its agent identification algorithm and assay chemistry. According to OHA officials, these adjustments delayed the testing and increased costs.

DHS Identified Key Lessons Learned from the BioWatch Gen-3 Acquisition

1. Any policy decision to accelerate a future acquisition of detection technology should be fully documented, capturing the justification for urgency, an understanding of the limitations of current technology capabilities, and the minimum acceptable (non-technical) requirements needed to achieve the improvement envisioned.
2. Deliberations regarding biodetection research and development during an acquisition process should have pre-designated forums that are capable of resolving issues (scientific or otherwise) as they arise. An inability to reach resolution or consensus should be documented and openly acknowledged, allowing for the development of an adjusted path forward for the acquisition process, if necessary.
3. Communication efforts should be increased to Department stakeholders and leadership, both to improve efficiency of the acquisition process and to fully document an acquisition's updates to any requirements, timelines, and/or procedures.
4. State and local government stakeholders, especially in locally operated programs (such as BioWatch), should be integrated into the requirements generation process as early as possible, recognizing their ultimate role as the end user.

In April 2014, after assessing the Phase I testing outcomes, DHS canceled the Gen-3 acquisition when unfavorable testing outcomes raised too much uncertainty about cost and performance of the autonomous detection systems tested. While some might consider the cancellation of the Gen-3 acquisition a failure, by carefully weighing the pros and cons of moving forward with an acquisition that had not produced favorable results, DHS incorporated parts of the best practice of continuous improvement and avoided greater expense for a system that had not met program requirements.
By applying the lessons learned that DHS identified as a result of evaluating the Gen-3 acquisition, as well as those identified through application of these best practices for testing, DHS can continue to engage in a continuous cycle of improvement for its testing and acquisition of detection technologies as it considers upgrades or enhancements to the Gen-2 system. BioWatch Gen-3 is one of several DHS technical system acquisitions that have faced challenges because of system immaturity or unreliable performance in an operational environment. In some cases, DHS canceled the acquisitions after testing did not yield favorable results. For example, we previously found the Secure Border Initiative Network (SBInet) testing did not appropriately account for risk or provide sufficient information to ensure system performance. Aiming to enhance border security and reduce illegal immigration, DHS launched SBInet to create a “virtual fence” along the border using surveillance technologies. However, as with our 2012 findings on the Gen-3 acquisition, we found the SBInet Program Office had not effectively performed key requirements of development and management practices. For example, some operational requirements for SBInet, which are the basis for all lower-level requirements, were found to be unverifiable, and we concluded that the risk of SBInet not meeting mission needs and performing as intended was increased. As noted above, the Gen-3 acquisition also did not have operational requirements directly tied to mission need. Additionally, we found none of the SBInet plans for tests of system components addressed testing risks and mitigation strategies. As noted above, assessing technical and performance risk in an acquisition can help mitigate cost and schedule problems. 
Although we made several recommendations aimed at improving the rigor and discipline of SBInet testing, DHS canceled the SBInet acquisition in January 2011, in response to internal and external assessments that identified concerns regarding the performance, cost, and schedule for implementing the systems. Similarly, we previously reported that a primary lesson learned regarding testing of the Domestic Nuclear Detection Office’s (DNDO) advanced spectroscopic portal (ASP) was that the push to replace existing equipment with new technology led to a testing program that lacked the necessary rigor. The ASP was a type of portal monitor designed to both detect radiation and identify the source to reduce both the risk of missed threats and the rate of innocent alarms. DNDO considered these to be key limitations of radiation detection equipment used by Customs and Border Protection (CBP) at U.S. ports of entry. Based on our body of work on ASP testing, one of the primary lessons to be learned is to avoid the pitfalls in testing that stem from a rush to procure new technologies. We have previously reported on the negative consequences of pressures imposed by closely linking testing and development programs with decisions to procure and deploy new technologies. In the case of ASPs, as well as the Gen-3 acquisition, the push to replace existing equipment with the new portal monitors led to a testing program that initially lacked the necessary rigor. DNDO’s schedule consistently underestimated the time required to conduct tests, resolve problems uncovered during testing, and complete key documents, including final test reports. We also found that testing of the ASPs lacked sufficient rigor to demonstrate performance of the detectors in an operational environment. 
For example, ASP testing did not include a sufficient amount of the type of materials that would mask or hide dangerous sources that ASPs would likely encounter at ports of entry, which is fundamental to the performance of radiation detectors in the field. As noted above, Gen-3 testing of possible environmental contaminants also lacked sufficient rigor to identify potential vulnerabilities in the system’s detection capabilities prior to placing them in the field. Despite several recommendations we made to DHS on ways to improve the testing and management of the ASP acquisition, the ASP did not pass field validation testing because of unsatisfactory test results, leading DHS to cancel the program in October 2011. DHS has taken several steps in recent years to improve acquisition management in response to our previous recommendations. By establishing a policy that largely reflects key program management practices, dedicating additional resources to acquisition oversight, and improving documentation of major acquisition decisions in a more transparent and consistent manner, DHS has improved its ability to manage acquisition programs. The decision to cancel the Gen-3 BioWatch acquisition is an example of improved oversight, and DHS could continue to improve by implementing some lessons learned from the Gen-3 acquisition. In April 2015, we reported that DHS’s Director of Operational Test and Evaluation has expressed interest in becoming more involved in testing earlier in the development process to increase influence over program execution. The Director told us that determining how test activities should inform key decisions would help mitigate risk for all types of programs. 
As DHS considers its options regarding the currently deployed Gen-2 BioWatch system, including possible technology enhancements or even future technology switches, future BioWatch acquisition and testing efforts may benefit by incorporating the lessons learned from the Gen-3 Phase I testing and other recent DHS acquisitions that have faced testing challenges. PCR is the most mature and sensitive technology for an autonomous detection system, and DHS is considering autonomous detection as an upgrade to Gen-2. While autonomous detection may provide benefits that include reduction in casualties and clean-up costs and greater cost-efficiency, the potential benefits of an autonomous system for BioWatch depend on specific assumptions, some of which are uncertain. For example, reductions in casualties would depend on a rapid, coordinated response from multiple entities at the federal, state, and local levels; whether such a response would materialize is uncertain and partially outside DHS’s control. Further, an autonomous detection system would have to address several likely challenges, including minimizing possible false positives, securing a networked detection and communication system, and operating under various environmental conditions. The most mature analysis technology for an autonomous detection system is currently PCR, according to a National Academies report on promising technologies for autonomous detection for BioWatch and interviews with stakeholder officials, including CDC, the DHS BioWatch program manager, and other experts. As mentioned earlier, while DHS canceled the Gen-3 acquisition of an autonomous detection system for BioWatch, OHA and S&T are collaborating to address the capability gap that Gen-3 intended to fill by evaluating upgrades or enhancements to the current Gen-2 system. According to DHS officials, autonomous detection is among the technologies being considered. 
The National Academies report presented perspectives from local officials and technological assessments, gathered from a workshop that DHS requested, to explore alternative cost-effective systems that would meet the needs for a next-generation BioWatch system. This proposed next-generation system was intended to operate autonomously to detect BioWatch threat agents from aerosol samples. The National Academies report described the state of autonomous detection technology in 2013 and evaluated four broad classes of technologies (see table 3). Those technologies were PCR, immunoassays and protein signatures, genomic sequencing, and mass spectrometry.

Three Key Terms for Understanding Autonomous Detection Technologies

Genes: Genes are sections of nucleic acids that determine how proteins are made. Nucleic acids, such as deoxyribonucleic acid (DNA), are long chains of molecules made up of bases, of which there are four possible kinds. The order of the bases represents the sequence of the DNA. Because proteins determine much of the function of an organism, sequencing the genes can provide information about its identity. The set of all genes of an organism is known as its genome.

Protein signatures: Proteins are long chains of building blocks called amino acids. Because there are many types of amino acids, the way each protein interacts with different stimuli, such as light, can be unique. For example, shining one color of light on a protein can result in its emitting different colors. The set of colors the protein gives off can be considered an “optical fingerprint” of the protein, commonly referred to as a type of protein signature.

Antibodies: Antibodies are proteins created by the immune system that attach to specific chemicals on the surface of disease-causing organisms, resulting in the organisms’ being rendered harmless. Antibodies can be designed to attach to a given target and modified to be detectable under certain conditions. 
Thus it is possible to use antibodies to see whether a target is present by soaking a sample with the antibodies, washing off unattached antibodies, and then measuring those that remain.

PCR, which detects nucleic acid signatures, amplifies and detects the genetic material, or nucleic acids, of organisms. By amplifying (i.e., repeatedly duplicating) those sections of genes associated with certain biological agents, it is possible to distinguish the agents among various organisms. Because of the amplifying capability of PCR, small amounts of genetic material are sufficient for detection, resulting in high sensitivity for this technology. Specificity can be high if the sections of the genes being amplified are unique to the agent. However, related organisms, called genetic near-neighbors, may contain similar gene sections and lead to a PCR detection when the agent itself is not present. PCR is the method used in the current (Gen-2) BioWatch system. Immunoassays and protein signatures use antibodies or light to identify organisms. Immunoassays use antibodies that attach to chemicals that primarily appear on certain biological agents; thus immunoassays can be tailored for high specificity. Protein signature methods analyze how light interacts with different chemicals (such as proteins) on target agents, using the light “signatures” emitted by specific proteins to identify them. However, neither of these methods is as specific as PCR. Also, because there is no amplification, the sensitivity of these methods is not as high as that of PCR. Genomic sequencing provides a genetic sequence for all or part of a detected organism’s genes. Because each agent contains unique genetic sequences, this method is very specific and could eventually provide information regarding antibiotic susceptibility. However, the method is not considered standalone since it depends on another method, such as PCR, to work. If used with PCR, then this method is also very sensitive. 
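The sensitivity advantage that amplification gives PCR can be sketched numerically. The figures below (a detection threshold of 10^9 copies, near-doubling per thermal cycle) are illustrative assumptions for this sketch, not BioWatch specifications.

```python
# Illustrative sketch of why PCR amplification yields high sensitivity:
# each thermal cycle roughly doubles the copies of the targeted gene
# section, so even ~10 starting copies cross a (hypothetical) detection
# threshold within a typical run of ~40 cycles.

def cycles_to_detect(starting_copies, threshold=1e9, efficiency=1.0):
    """Number of amplification cycles needed to reach `threshold`
    copies; `efficiency` is the fraction of copies duplicated per
    cycle (1.0 = perfect doubling)."""
    copies = float(starting_copies)
    cycles = 0
    while copies < threshold:
        copies *= (1.0 + efficiency)  # near-doubling per cycle
        cycles += 1
    return cycles

print(cycles_to_detect(10))                  # 27 cycles at perfect doubling
print(cycles_to_detect(10, efficiency=0.9))  # 29 cycles at 90% efficiency
```

Because near-neighbor organisms share similar gene sections, the same amplification that makes tiny samples detectable also underlies the near-neighbor false positive risk noted above.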
Of the four broad technologies examined by the National Academies, genomic sequencing is also the least mature because of issues with systems integration—for example, developing the software to perform the analysis locally (within the device itself). Mass spectrometry breaks apart a sample (for example, by directing a laser onto the sample) and analyzes the resulting fragments. Different chemicals yield different types and amounts of fragments, so it is possible to reconstruct the chemical composition of the original sample. Because the chemical make-up of agents is unique, it is possible to identify their presence in the sample. Mass spectrometry is not as sensitive or as specific as PCR. We identified key potential benefits of an autonomous detection system from discussion with agency officials, a review of agency and national laboratory documentation, and a literature review. Most of these potential benefits stemmed from faster detection; however, we determined that the extent to which faster detection confers benefits depends on specific assumptions, some of which are uncertain and some of which are outside of DHS’s control. Additionally, from our review of literature, we identified potential benefits that included decreasing user errors, such as dropped collection filters. However, since these benefits depend on the actual design and implementation of the system, it is difficult to predict the extent to which they would be realized. The benefits and challenges discussed in this report apply broadly to autonomous detection; that is, they do not depend on which of the four broad classes of technology is deployed. According to a 2011 National Academies report, an autonomous detection system could detect agents more quickly than the Gen-2 system because of a shorter sample collection period, elimination of sample transport, and completion of the detection step within the system itself (see fig. 3). 
In particular, the report stated that while the current Gen-2 BioWatch system could detect agents in 10-36 hours, an autonomous detection system could detect agents in as little as 4-6 hours. Further, DHS officials and Sandia modeling studies state that faster detection enabled by automation can provide:

1. reduced casualties and/or fatalities because of faster detection and faster situational assessment;
2. lower costs, including clean-up costs, by halting the entrance of transport vehicles, such as trains or airplanes, into contaminated areas; and
3. a lower total annual cost of the detection system, per detection cycle.

DHS officials told us that an autonomous detection system would offer many of these benefits but did not provide evidence to support them, saying that their assertion is “common sense.” DHS officials also referred to Sandia’s modeling studies. However, we determined that these benefits and the conclusions of the Sandia modeling studies depend on specific assumptions, some of which are uncertain and some of which are outside of DHS’s control, although DHS officials stated that they believe the modeling assumptions are reasonable for the intended purpose. However, a CDC official cautioned against relying on models to determine program effectiveness. The number of lives saved from faster detection could not be determined, because some key factors affecting response time are uncertain. For example, the time it takes decision makers to determine that a detection represents a threat to public health and warrants dissemination of medical countermeasures is variable. 
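How strongly such modeled benefits depend on response-time assumptions can be illustrated with a deliberately simple toy model of our own; every parameter below (exposure size, incubation period, daily hazard) is hypothetical and not drawn from the Sandia studies.

```python
# Hedged toy model (our own illustration; all parameters hypothetical,
# not taken from the Sandia studies): what drives modeled fatalities is
# the total time from attack to full distribution of medication, of
# which detection is only one component.

def estimated_deaths(exposed, detect_days, distribute_days,
                     incubation_days=2, daily_hazard=0.1):
    """Crude model: after a fixed incubation period, each unmedicated
    person faces `daily_hazard` of dying per day until medication is
    fully distributed."""
    days_at_risk = max(0, detect_days + distribute_days - incubation_days)
    survival = (1 - daily_hazard) ** days_at_risk
    return exposed * (1 - survival)

# Faster detection paired with slow distribution can still produce
# more modeled deaths than slower detection with fast distribution:
fast_detect = estimated_deaths(10_000, detect_days=1, distribute_days=9)
fast_distribute = estimated_deaths(10_000, detect_days=4, distribute_days=2)
print(round(fast_detect), round(fast_distribute))
```

Under these hypothetical parameters the faster-detecting scenario yields more modeled deaths, which mirrors the trade-off the modeling study described below makes concrete.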
The time between a BAR and dissemination of medical countermeasures may include the time needed to characterize the incident, determine who was exposed, make decisions regarding evacuation of contaminated regions and relocation of individuals, determine where to set up medication “points of dispensing” and to actually mobilize the medication stockpile, and distribute medication to potentially exposed people and keep track of who received medication. According to the National Academies report and current DHS guidelines, steps taken after detection, to instill confidence for requesting medication, include assessing known threats, conducting additional local lab work, and initiating a national conference call (see fig. 4). There may be additional tasks such as culturing of the agents to determine their viability and antibiotic resistance. If local stakeholders follow the guidelines, there could be considerable delay prior to mobilization of stockpiled medication. For example, an official at the National Academies workshop reported that he takes an additional hour to perform an assessment prior to any national conference call. According to another National Academies report, the BioWatch national conference call usually occurs 1-2 hours after a local call of the BioWatch Advisory Committee. Thus, the time between an attack and when medication is fully distributed—and the number of lives potentially saved by minimizing this time—could vary from jurisdiction to jurisdiction. DHS officials agreed that the jurisdictional response can vary, and DHS conducts exercises and training to help plan for a response. However, it is not clear what effect such exercises have on response time variability. In addition, faster detection may not be the most effective way to save lives. 
For example, a modeling study showed results indicating that an attack detected in 2 days, but requiring 10 days to distribute medication, would result in more deaths than an attack detected in 5 days, but requiring only 2 days to distribute medication. Thus, according to this model, a jurisdiction that shifts resources from medical distribution capacity to faster detection may end up with more deaths. Decisions about resource prioritization are not under the BioWatch program’s control, nor is the part of the response involving medication or other intervention. However, the benefits from early detection depend on such resource prioritization and effective overall responses. DHS officials agree, noting that biosurveillance is a coordinated, holistic endeavor. Sandia ran a response model to estimate, among other things, the number of casualties and fatalities given the time that passes between the attack and detection (time to detection). Sandia’s modeling studies showed that a faster detection system would reduce the number of casualties and fatalities, but that the extent of these reductions would depend on assumptions in the model. One such assumption was the probability that BioWatch correctly detected the attack. As discussed earlier in this report, the probability that BioWatch correctly detects an attack depends on many factors, including the performance characteristics of the technology and the characteristics of the attack itself. Many of Sandia’s estimates of the life-saving benefits of a detection system—automated or not—are downstream analyses presuming that an attack was detected. If the attack was not detected, the faster response enabled by a detection system would not occur, and there would be no life-saving benefits from operating such a detection system. Therefore, if those results are read out of context of this presumption, the expected reductions in casualties and fatalities may be overestimated. 
This limitation applies to autonomous detection as well as the Gen-2 system. Estimated reductions in casualties and fatalities from faster detection also depend on assumptions about the infectivity of the BioWatch threat agents. In the Sandia modeling studies, infectivity was represented by infectious dose estimates—that is, estimates of the doses that would lead to illness; however, we found uncertainty in these estimates. As described in our earlier discussion of the current (Gen-2) system, Sandia researchers and other experts told us there is considerable uncertainty in even the best available infectious dose estimates for anthrax, as these estimates are based on data from nonhuman primates. Finally, the life-saving benefits of faster detection that Sandia reported varied significantly, depending on the properties of the illness that the agent caused. These properties included how long it takes for symptoms to appear in a patient after exposure and how effectively medication can prevent death in ill people. According to Sandia, some agents act very slowly—that is, they have long incubation periods—which diminishes the effect of faster detection. For example, an agent that takes over 7 days to cause symptoms will be detected by a 36-hour and a 4-hour detection system with similar outcomes. Another factor is how effectively a developed illness can be treated. According to Sandia, this factor is also variable, so that for some agents, the numbers of lives saved in shifting from 36-hour to 4-hour detection change little. Thus, reducing fatalities by faster detection depends largely on the agent used in an attack. Sandia reported that faster detection improves the ability to divert transport vehicles—such as trains and airplanes—from contaminated areas so that they do not have to be subsequently cleaned up. The extent of this benefit is uncertain because it depends on the amount of traffic entering a given location. 
For example, according to Sandia, the number of subway cars entering New York City’s Grand Central Terminal over a period of 5 hours can range from as few as 250 to as many as 1,750, depending on the time of day and the day of the year. Thus, the reduction in clean-up effort is uncertain because of an attack’s unpredictability. A similar analysis can be made for people entering a contaminated area—while early detection could lead to exposure prevention and mitigating the need for additional medication, the actual number of people affected is similarly variable. We recognize that much of the uncertainty described regarding lives saved or reduction in clean-up costs is out of the control of the BioWatch program. However, when describing benefits of faster detection, particularly concerning the number of lives possibly saved by an autonomous system or any early-warning system, it is important to understand the uncertainties in these assumptions, such as response time or infectious dose of the agent. Without a comprehensive enumeration of these assumptions and their effects on the modeling results, assertions regarding the value of autonomous detection systems are questionable. In 2012, we found that DHS performed a limited cost trade-off assessment of switching to an autonomous detection system that focused on cost per detection cycle—that is, the cost each time an autonomous detector tests the air for agents versus the cost each time a Gen-2 filter is manually collected and tested in a laboratory. We reported in 2012 that cost per detection cycle was lower with an autonomous detection system, but that overall annual program costs would increase from the current Gen-2 system program costs. From figures DHS provided in 2015 regarding the cost of switching to an autonomous detection system with coverage comparable to that of the current Gen-2 system, our analysis yielded results similar to our 2012 findings. 
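The distinction between per-cycle and annual costs can be made concrete with a small arithmetic sketch. The dollar figures below are purely hypothetical stand-ins (the actual figures appear in table 4), chosen only to show how a lower per-cycle cost can coexist with a higher annual cost.

```python
# Hedged arithmetic sketch with hypothetical dollar figures (the real
# figures are in table 4): an autonomous system can cost less per
# detection cycle yet more per year than Gen-2 as currently operated,
# because Gen-2 runs only one detection cycle per day.

DAYS_PER_YEAR = 365

def annual_cost(cost_per_cycle, cycles_per_day):
    return cost_per_cycle * cycles_per_day * DAYS_PER_YEAR

gen2_per_cycle = 300.0  # hypothetical $ per manual collection and lab test
auto_per_cycle = 150.0  # hypothetical $ per autonomous cycle (lower per cycle)

gen2_one_cycle = annual_cost(gen2_per_cycle, cycles_per_day=1)    # 109,500
gen2_three_cycle = annual_cost(gen2_per_cycle, cycles_per_day=3)  # 328,500
autonomous = annual_cost(auto_per_cycle, cycles_per_day=3)        # 164,250

assert auto_per_cycle < gen2_per_cycle  # cheaper per cycle...
assert autonomous > gen2_one_cycle      # ...but costlier per year than
                                        # Gen-2 as operated today
assert autonomous < gen2_three_cycle    # savings appear only against a
                                        # hypothetical 3-cycle Gen-2
```

The sketch shows why the comparison baseline matters: savings emerge only when the autonomous system is compared against a Gen-2 configuration running the same number of cycles.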
To determine potential cost savings, DHS compared an autonomous system with a modified Gen-2 BioWatch system: although Gen-2 generally runs one detection cycle daily, the modified Gen-2 system in DHS’s analysis would run three daily detection cycles. We determined that total annual program costs would increase if current operations for Gen-2, which involve one detection cycle per day, were replaced with an automated detection system. Only by comparing the total annual program costs of operating Gen-2 with three detection cycles per day with the total program costs of operating an automated system were cost savings realized (see table 4). As we reported in 2012, conducting a more complete analysis of costs and benefits would help DHS develop the kind of information that would inform trade-off decisions regarding changes to BioWatch technology. Automation may lead to additional potential benefits including fewer user errors and greater efficiency—for example, using fewer resources to accomplish the same amount of work—and greater worker safety by facilitating the handling of dangerous materials, according to literature on automation. However, because automated detection systems have not been deployed, assessing these benefits is difficult. Additionally, uncertainty about how the technology will work means that its benefits might be countered by new problems. For example, according to a 2007 DHS Inspector General report, BioWatch system filters were transferred improperly several times in 2004. By eliminating the need for transporting filters, automation could avoid this problem, but new problems could arise, such as system crashes. For example, repeated system crashes occurred when the BioWatch Program Office conducted a trial deployment of an autonomous detection system in New York in 2008. 
Thus, it is not clear that an autonomous system would realize these benefits. The challenges an autonomous detection system must overcome include ensuring its detection sensitivity and protecting against threats to networked communications. From a National Academies workshop and interviews with agency officials, we identified five likely challenges (shown in fig. 5). According to DHS, ensuring that the autonomous detection system meets BioWatch sensitivity requirements represents a major technical hurdle. As discussed earlier, the original sensitivity requirement for the Gen-3 system was based on a technology push because DHS lacked the analytical tools needed to generate a mission-based sensitivity requirement. DHS later revised the sensitivity requirement based on a Sandia-led modeling study. As we described earlier, the Sandia model is subject to limitations and assumptions, and how the sensitivity requirement may be linked to mission outcomes, such as detecting attacks that lead to 10,000 casualties, remains uncertain. Additionally, challenges may be associated with designing a technology to meet a given sensitivity requirement. One way to manage sensitivity requirements is to assess whether the technology in a detection system conforms to performance standards. The standards may be subject to validation by independent groups or agencies, and constitute guidelines for the technology. For example, the number of times a test should be run and the verification of reagents are standardized so that results can be interpreted meaningfully. With the development of newer technologies for detection, a method known as multiplexing is being increasingly used. However, there are no performance standards for validating the use of multiplexing in detection systems. DHS commissioned the National Academies to examine performance standards for PCR, including multiplexed PCR. 
However, the report recently released by the National Academies does not provide clear standards for multiplexing, instead noting that combining Food and Drug Administration (FDA) multiplexing guidance with certain standards, such as SPADA, which discuss multiplexing, should form a starting point for validation testing. The report also notes that changing to multiplexing (from singleplexing) reduces the sensitivity of the assay, although the effect of this reduction is unclear. Given that PCR is the most mature technology for autonomous detection systems, implementing other technologies may require similar, or greater, effort in establishing performance standards. A challenge for deploying autonomous detection systems identified by participants at the National Academies workshop is the avoidance of false positive readings—readings that indicate the presence of an agent that is not present. False positive readings can lead to major disruption from shutting down crucial transportation and economic facilities, such as airports and shopping centers (actions referred to as high-consequence actions), and to the unnecessary medication of an uninfected population, which can lead to adverse effects and medical stockpile waste. Local public health officials stated that false positives are likely to be a bigger issue with autonomous detection systems, because operating more detection cycles could increase their frequency. According to the National Academies report, another common concern among public health officials is their credibility when making high-consequence decisions. At the workshop, officials stated that the integrity of public health is critically important, and thus they needed complete confidence in an autonomous system, which is intended to provide results without human interaction or interpretation. Similarly, according to a Sandia workshop in 2009, public health officials largely felt that wrongly taken high-consequence actions would result in loss of credibility. 
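The officials’ concern that more detection cycles could mean more false alarms follows from simple scaling: with any fixed per-cycle false positive probability, expected false alarms grow linearly with cycle count. The per-cycle rate and detector count below are hypothetical illustrations, not BioWatch figures.

```python
# Hedged sketch: expected false alarms scale with the number of
# detection cycles. The per-cycle false positive rate (1e-4) and the
# detector count (600) are hypothetical, not actual BioWatch figures.

def expected_false_alarms(per_cycle_rate, cycles_per_day, detectors,
                          days=365):
    """Expected false positives per year across all detectors,
    assuming independent cycles with a fixed per-cycle rate."""
    return per_cycle_rate * cycles_per_day * detectors * days

one_cycle = expected_false_alarms(1e-4, cycles_per_day=1, detectors=600)
six_cycles = expected_false_alarms(1e-4, cycles_per_day=6, detectors=600)
print(one_cycle, six_cycles)  # a sixfold increase in expected false alarms
```

In this illustration, moving from one to six cycles per day multiplies expected annual false alarms sixfold unless the per-cycle rate is driven down correspondingly.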
Finally, an LLNL scientist stated that debugging a complex system to determine whether a potential false positive occurred may be an issue with some autonomous systems. He noted that false positives from naturally occurring genetic near-neighbors of the BioWatch threat agents might be a particular challenge and that DHS has made limited investments in determining background DNA signatures to address this issue. DHS officials stated they use data gathered from current operations to assess such background signatures, but it is unclear whether this approach would be effective for an autonomous detection system. According to DHS and CDC officials, another challenge autonomous detection systems face is securing the networked communication system against interference, such as from hackers. DHS officials stated that the security of network communications for transmitting results to local officials was an important issue for autonomous detection systems. For example, during the Gen-3 effort, DHS officials specifically planned for testing of network security as described in the TEMP. DHS officials stated that an insecure system would be vulnerable to hackers’ planting results or shutting systems down. In 2012, we reported that the 2011 Operational Assessment stated that failure to demonstrate network security may seriously inhibit user confidence in the system. A CDC official also noted that network communication is an area of concern, citing previous issues with the deployment of related technologies. DHS identified data management challenges for autonomous systems, including reviewing the reported data and interpreting the data to determine appropriate follow-up actions. A participant at the National Academies workshop expressed concern over a system that would provide data every few hours, leading to strain on limited and diminishing public health resources. 
DHS describes system data as containing information on how the detector was functioning as well as laboratory analysis data. According to DHS, data from an autonomous detection system would need to be reviewed by local or state staff across the 24/7 reporting period. Further, those staff would need to be trained in appropriate data interpretation. According to DHS officials, the cost for these local public health resources is not included in their cost projections for an autonomous BioWatch system. Finally, DHS officials stated that keeping the autonomous detection system continuously functioning in a dirty environment is challenging. Additionally, an LLNL official stated that a dirty environment can contain chemicals that interfere with the technology used to detect the agent. As we reported in 2012, and according to the 2011 Operational Assessment on Gen-3, autonomous detection systems during the Gen-3 acquisition experienced malfunctions, exhibited issues with the positive controls, and required unscheduled maintenance, attributed either to traffic emissions from proximity to an interstate or to metallic dust generated by train brakes. This underscores the challenge of operating an autonomous detection system in varied operational environments. In addition, according to the Analysis of Alternatives conducted by the Institute for Defense Analyses in 2013, detection systems are vulnerable to vandalism and accidents. BioWatch’s rapid deployment in 2003—to provide early detection of potentially catastrophic aerosolized biological attacks—did not allow for sufficient testing and evaluation against defined performance requirements to understand the system’s capabilities. Since that time, DHS has commissioned tests of the system, but has not defined technical performance requirements that would link test results to conclusions about the types and sizes of attack that the Gen-2 system could reliably detect. 
DHS has also commissioned modeling and simulation studies, but none of these studies was designed to directly and comprehensively assess what is known about the capabilities of the currently deployed system, using specific test results and accounting for statistical and other uncertainties. Finally, while DHS has addressed certain limitations in testing, it has not systematically tested the system against realistic conditions, and there remains potential to reduce risk and uncertainty in what is known about the system’s capabilities when deployed in a real-world environment. As a result of these gaps and limitations, considerable uncertainty remains as to the types and sizes of attack that the Gen-2 system could reliably detect. DHS officials have stated that the system’s operational objective is to detect attacks large enough to cause 10,000 casualties, but DHS cannot conclude with any defined level of statistical certainty that the system can reliably achieve this objective. In the wake of the cancellation of the Gen-3 acquisition, DHS is planning for technology upgrades or improvements to the Gen-2 system, and some Gen-2 equipment is nearing the end of its life-cycle and will need to be replaced if the program is to continue. However, effective and cost-efficient decisions cannot be made regarding upgrades and reinvestments if the operational capabilities of the Gen-2 system are uncertain. Assessing the operational capabilities of the Gen-2 system against technical performance requirements directly linked to an operational objective, incorporating specific test results, and explicitly accounting for statistical and other uncertainties would help ensure that decisions about future investments address a capability gap not met by the current system and a clear mission need. In recent years, DHS has canceled major acquisitions that we previously found could have been more rigorous in their test design or execution, including Gen-3. 
The nation’s ability to detect threats against its security requires judicious use of resources directed toward systems whose capabilities can be demonstrated. Applying lessons learned from the Phase I testing of Gen-3 candidate technologies, as well as incorporating the best practices we identified, may help DHS mitigate risk in future acquisitions for these types of threat detection technologies. Specifically, DHS could apply them to the BioWatch program once informed decisions have been made regarding upgrades or enhancements to Gen-2. Furthermore, DHS officials have continued to express interest in an autonomous detection capability as a possible upgrade or enhancement to Gen-2. If DHS were to pursue an autonomous detection system in the near future, PCR would be the most mature technology available. However, the extent to which the potential benefits of such a system would materialize is uncertain, because of uncertainty in the assumptions upon which these benefits depend. Additionally, pursuit of such a system faces several likely challenges. Understanding the inherent challenges to faster detection and contextualizing the benefits of autonomous detection technologies will help decision makers make informed decisions regarding use of limited resources. BioWatch is just one biosurveillance activity used to detect potential biological threats to our national security, and as we have previously reported, because the nation does not have unlimited resources to protect the nation from every conceivable threat, it must make risk-informed decisions regarding its homeland security approaches and strategies. In July 2012, the White House released the National Strategy for Biosurveillance to describe the U.S. government’s approach to strengthening biosurveillance, but it is too soon to tell what effect the strategy and corresponding implementation plan may have on determining resource allocation priorities across the interagency. 
As some Gen-2 equipment reaches the end of its lifecycle, DHS will need to make decisions about investing in the future of the BioWatch program. DHS initiated the BioWatch program in 2003 to address the perceived threat at the time. Since then, numerous Bioterrorism Risk Assessments have been issued, but these have been criticized for the methodology, and none has been issued in the last 5 years. Consequently, as the National Academies has noted, there is considerable uncertainty about the likelihood and magnitude of a biological attack. Investment decisions about the future of BioWatch should be guided by the agreed-upon priorities of the various stakeholders within the biosurveillance community to help ensure investments address the current threats posed by biological hazards. To help ensure that biosurveillance-related funding is directed to programs that can demonstrate their intended capabilities, and to help ensure sufficient information is known about the current Gen-2 system to make informed cost-benefit decisions about possible upgrades and enhancements to the system, the Secretary of Homeland Security should direct the Assistant Secretary for Health Affairs and other relevant officials within the Department to not pursue upgrades or enhancements to the current BioWatch system until OHA: establishes technical performance requirements, including limits of detection, necessary for a biodetection system to meet a clearly defined operational objective for the BioWatch program by detecting attacks of defined types and sizes with specified probabilities; assesses the Gen-2 system against these performance requirements to reliably establish its capabilities; and produces a full accounting of statistical and other uncertainties and limitations in what is known about the system’s capability to meet its operational objectives. 
To help reduce the risk of acquiring immature detection technologies, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for Health Affairs, in coordination with the Under Secretary for Science and Technology, to use the best practices outlined in this report to inform test and evaluation actions for any future upgrades or changes to technology for BioWatch. In written comments provided in response to our draft report, DHS concurred with our recommendations and described actions the agency is taking to address them. DHS also provided technical comments, which we incorporated as appropriate. DOE provided technical comments, which we incorporated as appropriate. CDC and DOD reviewed the draft report and provided no comments. DHS’s written comments are reproduced in full in appendix IV of this report. DHS concurred with our recommendation to establish technical performance requirements, including limits of detection, necessary for a biodetection system to meet a clearly defined operational objective for the BioWatch program by detecting attacks of defined types and sizes with specified probabilities. DHS stated the BioWatch program has already completed a series of tests that establish the performance and capabilities of currently deployed technologies and provide baseline performance requirements for any future technological improvements. DHS also stated the BioWatch program will consider including additional measures of system performance, such as probability of detection, to augment and validate the system’s ability to detect attacks, pending available resources and at a time yet to be determined. However, using existing test results as a baseline for future technological improvements provides no information about the current—or any future— system’s ability to meet a clearly defined operational objective. 
DHS should first establish requirements for the current system, which will enable DHS to assess its system performance measures, such as sensitivity, against its stated mission goal: to detect attacks causing 10,000 casualties. Without establishing such performance requirements, the agency does not know what the existing test results mean for the system’s ability to detect attacks, and thus cannot establish the benefits of any future improvements. Further, DHS mentioned using additional system performance measures to augment and validate the system’s capability. We emphasize that the program’s current preferred measure of system performance, fraction of population protected (Fp), does not have a clear linkage to the system’s operational objective. What is needed is a measure that directly supports conclusions about the system’s ability to meet its objective by detecting attacks of defined types and sizes. DHS concurred with our recommendation to assess the Gen-2 system against the performance requirements described above to reliably establish its capabilities, and stated that the results of the testing and evaluation events referred to above have already been incorporated into existing modeling and simulation studies. DHS also stated, however, that should a significant difference between two performance measures—Fp and probability of detection (Pd)—be observed, the BioWatch program would consider additional modeling and simulation studies to determine the performance capabilities of the deployed Gen-2 BioWatch system. While DHS’s response suggests that it has largely met this recommendation already by having tested the system and incorporating the test results into modeling and simulation studies, this conflicts with what we have been told by BioWatch program officials. 
These officials told us that they have not commissioned or produced an analysis in which the best available test results have been combined with modeling and simulation to reach specific conclusions about the system’s ability to detect attacks of defined types and sizes. As we detailed in the report, modeling and simulation studies commissioned by DHS either considered ranges of hypothetical values for the system’s sensitivity or else involved old test results, based on an earlier version of the system, for just one of the BioWatch threat agents. Furthermore, we identified important limitations in the tests DHS has conducted that could be addressed through a more systematic approach to reducing risk and uncertainty in what is known about the system’s capabilities when deployed in a real-world environment. While it is true that the system cannot be tested directly by releasing live biothreat agents in the environments where BioWatch is deployed, both the National Research Council and subject matter experts with whom we spoke identified methods by which the system can be tested and its performance characteristics estimated in more realistic environments. DHS concurred with our recommendation to produce a full accounting of statistical and other uncertainties and limitations in what is known about the system’s capability to meet its operational objectives. DHS stated it already has sufficient understanding of the statistical uncertainties and limitations associated with testing and modeling of the BioWatch system. However, DHS also agreed that there is value in consolidating this information into a single, comprehensive document and plans to do so by April 30, 2016. As described in the report, DHS does not have a sufficiently comprehensive accounting of the uncertainties and limitations in what is known about the system’s capabilities.
A comprehensive analysis of uncertainties and limitations should account for how such uncertainties and limitations affect the key outcome: the system’s ability to meet its operational objective by detecting attacks of defined types and sizes. Statistical uncertainties should be represented with clearly defined confidence intervals; for uncertainties that are difficult to quantify, such as uncertainties associated with testing in chambers rather than operational environments, the judgments of subject matter experts may be useful. This full accounting of uncertainties and limitations should be provided to administration and congressional decision makers so they better understand the precision in what is known about the system’s capabilities in an operational context. Decision makers should be able to use this information not only when comparing the costs of the current system to the benefits it may provide, but also when weighing decisions about proposed upgrades or enhancements to the system. DHS concurred with our recommendation to use the best practices we outline in the report to inform test and evaluation actions for any future upgrades or changes to technology for BioWatch. DHS stated that changes to BioWatch will adhere to new DHS acquisition guidance that incorporates the best practices outlined in our report. DHS’s reference to new acquisition guidance is to DHS-wide guidance that was issued in 2010, after DHS began testing the Gen-3 technology. This guidance includes additional detail on factors to consider when planning and testing new acquisitions and addresses many of the practices described within this report. However, when it comes to ensuring the acquisition will not only meet technical requirements but also perform as intended in an operational environment, we believe more robust testing earlier in the acquisition to test resilience can help reduce the risk of acquiring immature technologies. 
This is especially important for a system like BioWatch, which cannot be fully tested in an operational environment. While we see proper implementation of DHS’s updated acquisition guidance as a positive step towards addressing our recommendation, as we reported in April 2015, DHS’s Director of Operational Test and Evaluation expressed interest in becoming more involved in testing earlier in the development process. Therefore, we believe the lessons learned on Gen-3 testing and full adoption of testing practices aimed at establishing the operational performance of a system earlier in the acquisition should also be considered to help inform future DHS decisions. While DHS concurred with the three parts of our first recommendation, the agency did not agree with key findings that led to these recommendations; therefore, it is important to address parts of their response for clarification. DHS took exception to our conclusion that it has not defined technical performance requirements that would link test results to conclusions about the types and sizes of attack that the Gen-2 system could reliably detect. DHS stated it uses the metric called fraction of population covered (Fp) to make this linkage. However, when asked about this directly, agency officials declined to explain how specific values of Fp would enable DHS to conclude what types and sizes of attack the system can detect. Furthermore, officials said they have not commissioned or produced an analysis in which the best available test results are used to calculate Fp values and draw conclusions about the system’s ability to detect attacks of defined types and sizes. How a given value of Fp would provide information about the types and sizes of attacks BioWatch Gen-2 can detect remains uncertain, and how Fp relates to the probability of detecting attacks large enough to cause 10,000 casualties—DHS’s stated objective for the BioWatch program—remains unclear. 
As we note in this report, we recognize that Fp is a useful metric for certain purposes, but it does not directly support conclusions that align with the BioWatch operational objective. DHS stated that it disagreed with the conclusion that the BioWatch Program does not incorporate empirical data gathered on the current Gen-2 system to inform modeling and simulation studies. However, DHS incorrectly attributed this conclusion to us. We did not state that DHS did not use any empirical data to inform their modeling and simulation studies. We stated that (1) the modeling and simulation studies did not incorporate specific, best available test results (for example, particular estimates of the system’s limits of detection) to draw specific conclusions about the BioWatch Gen-2 system’s capability to detect attacks of defined types and sizes, and (2) the modeling and simulation studies did not incorporate uncertainties in the empirical test results that are important for understanding the precision or confidence in the modeling and simulation results. Finally, DHS acknowledged the evolving threat of bioterrorism and its continued commitment to following DHS-wide acquisition policy for any future upgrade or enhancement to the current BioWatch system. Analogous to what we reported in 2012 regarding the Gen-3 acquisition, by ensuring any future upgrades or enhancements to the BioWatch system align with the earliest steps in DHS’s acquisition process, such as being grounded in a justified mission need, and reflect a systematic analysis of costs, benefits, and risks, DHS can gain assurance that it is pursuing an optimal solution. 
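To make concrete what a measure aligned with the operational objective could look like, the following sketch links per-sensor detection probabilities to a network-level probability of detecting a release. Everything here is an illustrative assumption on our part (the shape of the response curve, its steepness parameter, the independence of sensors, and all numbers); it is not DHS's model and is not the Fp metric:

```python
# Illustrative sketch only: linking per-sensor detection probabilities to a
# network-level probability of detecting an attack. The response curve,
# its steepness, sensor independence, and all numbers are assumptions made
# for illustration; this is not DHS's model.

def sensor_detection_prob(concentration, limit_of_detection, steepness=2.0):
    """Hypothetical probability that one sensor detects the agent, as a
    function of concentration relative to the sensor's limit of detection."""
    ratio = concentration / limit_of_detection
    return ratio**steepness / (1.0 + ratio**steepness)

def network_detection_prob(concentrations, limit_of_detection):
    """Probability that at least one exposed sensor detects the release,
    assuming sensors respond independently."""
    p_miss = 1.0
    for c in concentrations:
        p_miss *= 1.0 - sensor_detection_prob(c, limit_of_detection)
    return 1.0 - p_miss

# Hypothetical attack: agent concentrations at five sensors, expressed in
# multiples of the limit of detection
exposures = [0.5, 1.0, 2.0, 0.2, 3.0]
pd = network_detection_prob(exposures, limit_of_detection=1.0)
print(f"Network probability of detection: {pd:.3f}")
```

A measure of this form supports direct statements such as "an attack producing these exposures would be detected with probability X," which is the kind of linkage between test results and attacks of defined types and sizes discussed above.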
Because, as DHS stated, the threat of bioterrorism continues to evolve, and because the last full Bioterrorism Risk Assessment (BTRA) was issued in 2010, it will be important for DHS to demonstrate that any proposed upgrades or enhancements address the threat posed by the intentional release of select aerosolized biological agents at the time upgrades are considered. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Homeland Security, Health and Human Services, Defense, and Energy; and interested congressional committees. The report is also available at no charge on GAO’s website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Tim Persons at (202) 512-6412 or personst@gao.gov or Chris Currie at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The objectives of this report were to discuss: (1) the extent to which the Department of Homeland Security (DHS) has assessed the technical capability of the currently deployed system (Gen-2) to detect a biological attack; (2) the extent to which DHS adhered to best practices for developmental testing during Gen-3 Phase I, and what lessons can be learned; and (3) the most mature technology for an autonomous detection system, as well as what the potential benefits and likely challenges would be if DHS were to pursue an autonomous detection system for the BioWatch program in the near future.
To determine the extent to which DHS has assessed the technical capability of the Gen-2 system to detect an attack, we reviewed and analyzed test reports and other agency and agency-commissioned documents containing information on the design, development, deployment, and technical performance characteristics of the system. We also reviewed reports of modeling and simulation studies, conducted by Department of Energy (DOE) national laboratories for DHS, that analyzed the performance and capabilities of the system. We interviewed DHS officials from the BioWatch Program Office and from the Science and Technology Directorate (S&T) who had knowledge of the history of the program, the Gen-2 technology and changes that had been made to the technology over time, and the tests and studies that had been conducted on the Gen-2 system’s technical capabilities. We also interviewed officials and researchers who conducted or were familiar with the tests and the modeling and simulation studies; these included officials and researchers at Dugway Proving Ground, Sandia National Laboratories, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory. In interviews with researchers who had conducted tests and studies, we questioned them about the scope and purposes of their work; the methods they had used; conclusions drawn, as well as any caveats on those conclusions; and the strengths and limitations of the tests and studies. We conducted a site visit to Dugway Proving Ground and saw facilities and equipment that had been used to test the Gen-2 system, as well as facilities under construction that could potentially be used for future testing of the BioWatch system. 
To assess the strengths and limitations of tests and studies of the Gen-2 system, we used (1) a framework for testing and evaluation of biodetection systems developed by the National Research Council, (2) leading practices in risk analysis and cost-benefit analysis, and (3) judgment of internal (GAO) and selected external experts in the fields of engineering, aerobiology, microbiology, and testing and evaluation of biodetection systems. To gather information on the field of biodetection and the strengths and limitations of alternative technologies, we attended two conferences on biodetection technologies. To determine whether DHS’s actions during Gen-3 Phase I adhered to best practices for developmental testing and to identify lessons that could be learned, we reviewed the best practices previously developed in conjunction with the National Academies to assess their appropriateness to our review. We consulted with GAO specialists familiar with the best practices for developmental testing to discuss their proper application. We determined the practices that could be applied to Gen-3 Phase I testing, as the testing was developmental in nature and presented opportunities for DHS to de-risk the Gen-3 acquisition, which is the intent of the practices—to de-risk acquisitions of binary threat detection technologies by the government. We analyzed Gen-3 Phase I acquisition and testing documents, such as the test and evaluation master plans, individual test plans and results, and the operational requirements documents. We analyzed other DHS documentation on lessons learned, including the Post Implementation Review assessment, in which DHS identified its own lessons learned on the Gen-3 acquisition. We reviewed the acquisition decision memorandum on the cancellation of the Gen-3 acquisition.
We interviewed DHS officials in the BioWatch Program Office, the Office of the Director of Test and Evaluation at the Science and Technology Directorate, and officials at the national laboratories and Department of Defense (DOD) test agencies who were familiar with the testing performed during Gen-3 Phase I. We collected information from these officials on DHS’s actions and decisions during Phase I testing and compared that with the recommended actions outlined in the best practices for developmental testing. We also compared the steps outlined in the test planning documents with the recommended steps described in the best practices. We consulted with internal and external experts on our assessment of DHS’s actions and decisions compared with the best practices and acquisitions more broadly. We reviewed prior GAO reports on the Gen-3 acquisition and the biosurveillance enterprise. We also reviewed prior GAO work on other DHS acquisitions that met challenges during early phases of testing to draw comparisons with other DHS acquisitions that may have benefited from more robust testing guidance. To develop the best practices for developmental testing of binary threat detection systems, we conducted a 1-day meeting on June 4, 2013, with 12 experts we selected with assistance from the National Academies. These experts were from academia, industry, and the federal government and had experience in developmental testing methodologies, binary threat detection systems, automatic target recognition, and advanced imaging technologies, from fields that included homeland security, defense, and standards development. 
To identify the experts, the National Academies considered experts with previous experience on appropriate National Academy studies, requested suggestions from the members of the National Academies’ National Materials and Manufacturing Board and the Computer Sciences and Telecommunications Board, searched internal databases and the Web, and contacted other relevant individuals for recommendations. We facilitated the experts’ identification of best practices with pre-meeting interviews, structured questioning during the meeting, and post-meeting expert voting and ranking procedures. According to the experts, the best practices apply to the process of developmental testing of binary threat detection systems; they also apply if the system is commercial-off-the-shelf (COTS), modified COTS, or newly developed for a specific threat detection purpose being created by a vendor or the government. To identify the most mature technology for autonomous detection, we reviewed a report of a 2013 workshop conducted by the National Academies that assessed the state of technologies that are potentially suitable for autonomous detection for the BioWatch program. We also interviewed officials at the Centers for Disease Control and Prevention, the Department of Homeland Security’s Office of Health Affairs, Lawrence Livermore National Laboratory, and the Department of Defense who were familiar with BioWatch and biodetection technologies to gather their views on the state of autonomous detection technology. A conclusion of the National Academies workshop held in 2013 was that the polymerase chain reaction (PCR) was the most mature technology suitable for autonomous detection for BioWatch. As a check for any more recent developments that might affect this conclusion, we performed a literature review of journals and conference proceedings published since 2012 to identify any technologies potentially more mature than PCR based on the following criteria:

1. whether the detection technology is specified, meaning the technology is defined and not just referred to as biodetection or detection technology;
2. capacity to detect at least bacteria and viruses;
6. having both indoor and outdoor performance capabilities in realistic environments;
7. ability to detect independently (standalone);
8. technology readiness level (TRL) of 6 or higher, if reported;
9. sampled from aerosols/air;
10. whether the technology is used for disease surveillance or modeling instead of pathogen detection; and
11. whether the technology depends on, or is a variant of, PCR.

We excluded press releases and news articles, studies that did not include sufficient detail for evaluating technological detection capability, technologies that were intended for non-aerosol detection (such as for food or clinical specimen testing), or technologies that were intended to be used alongside other technologies for detection (for example, used to supplement or verify a finding, or used as a trigger warning system). Our literature review was not intended to be a comprehensive examination of all technologies that might possibly be applied to BioWatch, but rather a supplement to the National Academies workshop report and a check to help ensure that the conclusions of that workshop were not affected by more recent developments in the field. To assess the potential benefits and likely challenges of autonomous detection, we analyzed reports published by the Sandia National Laboratories, as well as our prior work on the Gen-3 BioWatch system. We performed a literature review for models of how response timing to a positive detection of agent release may affect response effectiveness, in terms of lives saved. We searched for models published in the last 12 years, a range that was designed to cover work done following the anthrax attacks of 2001.
We interviewed officials at the Centers for Disease Control and Prevention, the Department of Homeland Security’s Office of Health Affairs, and Lawrence Livermore National Laboratory to gather their views on the potential benefits and likely challenges of autonomous detection in the near future, which we defined as the next 5 years. Additionally, we reviewed Gen-3 BioWatch testing reports to identify likely challenges to autonomous detection systems. To determine the potential cost-saving benefits of an autonomous detection system, we analyzed cost data provided by DHS. The agency provided annual operation and maintenance costs and total annual program costs for the current BioWatch system under the assumption that detection cycles would be increased to three per day (up from once per day, which is the current practice in most jurisdictions), as well as total annual costs for running a hypothetical autonomous detection system with six to eight detection cycles per day and comparable coverage. The total annual cost for operating the current BioWatch system was calculated by dividing the annual operation and maintenance costs by three while holding the remaining, non-operation and maintenance costs constant. Our analysis of potential benefits and likely challenges represents key ones identified by the sources listed above and is not intended to be comprehensive. In particular, we did not assess or mention characteristics that were difficult or impossible to meaningfully discuss within the context of this report (for example, deterrent effects of a biodetection system, or finding qualified personnel to hire). For benefits, we focused primarily on reports published by Sandia National Laboratories because they focused most directly on the BioWatch program.
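The cost comparison just described can be expressed as a small calculation. All dollar figures below are hypothetical placeholders (we do not reproduce DHS's actual cost data); only the method, dividing the operation and maintenance costs by three and holding other costs constant, follows the description above:

```python
# Sketch of the cost comparison described above. Dollar figures are
# hypothetical placeholders, not DHS's data; only the method (divide O&M
# by three, hold other costs constant) follows the report.

def annual_cost_one_cycle(om_cost_three_cycles, other_costs):
    """Annual cost of the current system at one detection cycle per day,
    derived from the O&M cost quoted for three cycles per day."""
    return om_cost_three_cycles / 3 + other_costs

om_three_cycles = 90.0   # hypothetical O&M at three cycles per day ($M)
other = 30.0             # hypothetical non-O&M program costs ($M)
autonomous_total = 85.0  # hypothetical autonomous system, 6-8 cycles/day ($M)

current = annual_cost_one_cycle(om_three_cycles, other)
print(f"Current system, 1 cycle/day: ${current:.1f}M per year")
print(f"Autonomous system, 6-8 cycles/day: ${autonomous_total:.1f}M per year")
```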
To help collect and analyze information for all three of our research objectives—and to help ensure the technical accuracy of our work—we consulted with subject matter experts under contract with GAO in the fields of aerobiology, microbiology, and biodetection. We conducted this performance audit from December 2013 to October 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In the Gen-2 system, if the polymerase chain reaction (PCR) assays used in both the screening step and the verification step yield positive results, suggesting the presence of a BioWatch threat agent, then a BioWatch Actionable Result (BAR) is declared. From the program’s inception in 2003 through 2014, there were 149 BARs. None was found to be associated with the release of a biothreat agent, and these BARs have been termed false positives by Centers for Disease Control and Prevention (CDC) officials and others. We found that all of the BARs from 2003 through 2014 were associated with PCR assays for two biothreat agents: Brucella and Francisella tularensis. The majority were associated with the assays for Francisella tularensis, and these have been attributed to detections of a non-disease-causing relative, or near-neighbor, of the Francisella tularensis bacterium that occurs naturally in the environment. Expert stakeholders told us that, before BioWatch, scientists had no reason or occasion to assess the presence or prevalence of these naturally occurring, non-disease-causing near-neighbors.
Department of Homeland Security (DHS) officials and other stakeholders said several adjustments were made to the Gen-2 system in 2011 and 2012 to reduce the number of false positives. In August of 2011, a stricter criterion for deciding that PCR assays revealed the presence of biothreat agents was adopted. Between November of 2011 and December of 2012, BioWatch adopted new PCR assays for the screening step of analysis. Previously, assays developed by CDC for use in its Laboratory Response Network (LRN) had been used for both screening and verification. The new assays were from the Department of Defense’s (DOD) Critical Reagents Program (CRP) and were designed to look for different genetic signatures of the BioWatch threat agents. In general, assays designed to detect greater numbers of unique genetic signatures will provide greater specificity—that is, greater ability to distinguish between the agents of interest and other, genetically similar agents. In December of 2012, BioWatch adopted new PCR assays specifically intended to distinguish between disease-causing and non-disease-causing species of Francisella; under the new analysis protocol, these new assays are run in the verification step if the screening step returned a positive result for Francisella. Another adjustment to the system was not made to reduce the number of false positives but likely had this effect. In March of 2008, the PCR assays for Brucella were discontinued. According to BioWatch officials, this was because CDC had reclassified Brucella into a lower-threat category. Some of the BARs prior to March of 2008 were associated with the PCR assays for Brucella, and such BARs were no longer possible after this agent was discontinued as a BioWatch threat agent.
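The relationship between the number of unique signatures and specificity can be illustrated with a simple calculation. This sketch assumes, purely for illustration, that a near-neighbor organism matches each signature independently and with equal probability; real assays and organisms need not behave this way, and the probability used is hypothetical:

```python
# Illustration of why requiring more unique genetic signatures improves
# specificity. Assumes, hypothetically, that a near-neighbor organism
# matches each signature independently with the same probability.

def false_match_prob(per_signature_match_prob, num_signatures):
    """Probability that a near-neighbor matches every required signature,
    which is what would produce a false positive."""
    return per_signature_match_prob ** num_signatures

p = 0.10  # hypothetical chance of matching any single signature
for k in (1, 2, 3):
    print(f"{k} signature(s): false-match probability {false_match_prob(p, k):.4f}")
```

Under these assumptions, each additional required signature shrinks the false-match probability geometrically, which is the intuition behind the greater specificity of the CRP assays described above.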
The annual number of BARs decreased during the years when the adjustments designed to reduce false positives were made, and after the final adjustment (the adoption of the new Francisella assays) there were no BARs through 2014 (see fig. 6). This decrease is consistent with the possibility that the adjustments have provided greater specificity, as intended; however, there was large, unexplained variability in the annual numbers of BARs in earlier years, and we did not conduct an independent analysis to assess the extent to which the decrease since 2010 might be associated with the specific adjustments DHS made to the system. According to a recent report by the National Academies, there is no once-and-for-all solution to the problem of false positives for a biodetection system based on PCR assays. This is because biological agents continue to evolve, and new strains and near-neighbors continue to arise. Consequently, a biodetection system based on PCR assays, such as BioWatch, will likely require ongoing adjustments to manage or prevent false positives, and the National Academies recommended that the BioWatch program continue to test its assays against panels of near-neighbors as these panels are reviewed and updated over time. In 2013, in collaboration with the National Academies, we identified eight best practices for developmental testing of threat detection systems. According to the experts who deliberated on the best practices, the practices apply to the process of developmental testing of binary threat detection systems; they also apply if the system is commercial-off-the-shelf (COTS), modified COTS, or newly developed for a specific threat detection purpose being created by a vendor or the government. For additional information on the methods used to determine the best practices, see appendix I.
The eight identified best practices for developmental testing of binary threat detection systems are described below, often using the context of the BioWatch program. According to experts, the level of government involvement in the development of a given system should be commensurate with the level of risk it is accepting. Risk needs to be assessed whether the system is COTS, modified COTS, or a newly developed system, because even with commercial items, significant modifications may be needed. Experts also told us that relying solely on the vendor and holding the vendor responsible for any problem that arises is not consistent with the accountability and engagement required for acquisitions where the government is accepting significant risk. Further, it is important to understand the technical risk associated with the development of a given system or, in the case of the purchase of a commercial item, the modifications needed to an existing system to accommodate government-specific needs. Design and developmental testing teams need to understand the needs, concerns, and capabilities of the user community or they run the risk of designing and testing a system that, in the case of autonomous biological detection systems, operators may have difficulty operating or whose results decision makers may have difficulty interpreting. According to experts, the user community may have suggestions that could improve the system or make the developmental tests more realistic. Distinct from the subject matter experts who monitor developmental testing, these representatives are integral parts of the design and developmental testing teams. The role of these team members is to make sure that the needs, concerns, and capabilities of the user community are considered throughout design and developmental testing efforts.
According to experts, to take a systems engineering view of a system, the tester must understand the boundaries of what is being tested prior to developmental testing. For example, would DHS plan to test just the assays or analytical components of the Gen-3 candidate systems or would it plan to test the whole end-to-end system (i.e., collection, extraction, analysis, and communication of result), and was that plan communicated in the test and evaluation master plan (TEMP)? This is critical, since different system boundaries impose different testing methods and constraints. According to experts, use of statistical experimental design methodology ensures that a test has been designed with a clear understanding of goals and acceptable limitations, that the test is clearly documented, and that the test results are rigorously analyzed. Experts said statistical experimental design is the tool used to define the test goals, limitations, and procedures, and establishes a detailed plan for conducting the experiment. Further, experts stated that the creation and use of an appropriate model against which system performance can be evaluated is fundamentally important when establishing the statistical experimental design. According to experts, the system’s operational objectives and user’s needs should be identified before designing the experiment, and uncertainties should be characterized and reported with all system performance estimates. Experts told us that well-chosen statistical experimental designs maximize the amount of information that can be obtained for a given amount of experimental effort. According to experts, binary threat detection systems have an established body of statistically based methods and procedures used to evaluate and characterize them. Further, experts stated it is important to use certain objective metrics to characterize system performance. 
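As a concrete sketch of the kind of objective metrics and uncertainty reporting the experts describe, the following computes sensitivity and specificity for a binary detection system from test tallies, with 95% Wilson score intervals. The counts here are invented for illustration, not drawn from any BioWatch test:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    if trials <= 0:
        raise ValueError("trials must be positive")
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))) / denom
    return center - half, center + half

def characterize(detections: int, positive_trials: int,
                 false_alarms: int, negative_trials: int):
    """Point estimates and 95% intervals for sensitivity and specificity
    of a binary threat detection system."""
    sensitivity = detections / positive_trials
    true_negatives = negative_trials - false_alarms
    specificity = true_negatives / negative_trials
    return {
        "sensitivity": (sensitivity,
                        wilson_interval(detections, positive_trials)),
        "specificity": (specificity,
                        wilson_interval(true_negatives, negative_trials)),
    }

# Hypothetical chamber-test tallies: 50 releases of simulant, 200 blanks.
result = characterize(detections=47, positive_trials=50,
                      false_alarms=1, negative_trials=200)
```

Reporting the interval alongside the point estimate matters: 47 detections in 50 trials gives a point sensitivity of 0.94, but the 95% interval spans roughly 0.84 to 0.98, a spread a decision maker needs to see before drawing conclusions about detection capability.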
According to experts, one way to improve resilience is to uncover vulnerabilities as early as possible through rigorous and comprehensive testing of the system against various scenarios. For example, the BioWatch system operates in a number of different environmental settings with varying contaminants. Settings vary from warm to cold climates, dry to wet climates, and indoor and outdoor settings. The BioWatch sensors might be exposed to dust, metallic dust, smoke, and diesel exhaust in indoor environments, as well as to rain, fog, snow, ice, wind, salt spray, sand, and pollen in outdoor environments. To the extent possible, these potential contaminants should be part of the testing of a detection system to help identify vulnerabilities to performance in these environments. According to experts, the further the system moves down the development path, the more fixed the design becomes. Thus, when the developmental testing team uncovers an error (i.e., the system failed a test), it is increasingly expensive to fix. Any time there is a change in the design, everything that worked before needs to be re-tested to make sure the change did not undo something that already has been shown to work. Therefore, according to experts, agencies should focus on building in resilience during early and intermediate developmental testing so as to minimize the number of hidden failures found in the later stages of testing. According to experts, developmental testing should be viewed as a critical tool in helping to refine performance requirements. Experts told us that a meaningful performance requirement is one that not only is achievable but also strives to maximize the fulfillment of a mission need—which in the case of BioWatch might be the number of lives saved. While the minimum required performance thresholds may be achievable, they may fall short of the maximum achievable performance. 
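One way to make the environmental scenario coverage described above systematic is to enumerate an explicit test matrix before testing begins. The factor levels below are drawn loosely from the climates, settings, and contaminants mentioned in this section, but the matrix itself is a hypothetical sketch, not an actual BioWatch test plan:

```python
from itertools import product

# Hypothetical factor levels for resilience testing of a biodetector.
factors = {
    "temperature": ["cold", "warm"],
    "humidity": ["dry", "wet"],
    "setting": ["indoor", "outdoor"],
    "contaminant": ["none", "dust", "diesel exhaust", "pollen"],
}

# Full-factorial design: every combination of factor levels becomes
# one test scenario.
scenarios = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(scenarios))  # 2 * 2 * 2 * 4 = 32 scenarios
```

A full-factorial matrix grows multiplicatively with the number of factors, so in practice statistical experimental design offers fractional-factorial alternatives that keep the number of runs affordable while still exposing vulnerabilities early, when they are cheapest to fix.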
Experts informed us that the maximum achievable performance can be uncovered only by understanding what the system is actually capable of doing through comprehensive developmental testing that unrestrainedly explores the performance boundaries of the system. Experts told us that it is important to consider developmental testing and operational testing as a continuum by defining developmental testing broadly to cover, for example, operational test activities that traditionally have been viewed as post-development, rather than artificially limiting the development of a system to a fixed stage. Experts also said it is important to use lessons learned on preceding tests to improve the probability of success (proper system performance) on following tests and to use lessons learned from test failures as feedback into the design process to continuously improve system performance. In addition to the individuals named above, Edward George (Assistant Director), Sushil Sharma (Assistant Director), Russ Burnett, Kendall Childers, Eric Hauswirth, Hayden Huang, Susanna Kuebler, Jack Melling, Jeff Mohr, Rebecca Shea, and Katherine Trimble made key contributions to this report.
DHS's BioWatch program aims to provide early indication of an aerosolized biological weapon attack. Until April 2014, DHS pursued a next-generation autonomous detection technology (Gen-3), which aimed to enable collection and analysis of air samples in less than 6 hours, unlike the current system (Gen-2), which requires manual intervention and can take up to 36 hours to detect the presence of biological pathogens. DHS is taking steps to address the capability gap that resulted from the cancellation of Gen-3 by exploring other technology upgrades and improvements to the Gen-2 system. GAO was asked to review (1) the technical capabilities of the currently deployed BioWatch system, (2) the Gen-3 testing effort, and (3) characteristics of autonomous detection as a possible option to replace the current BioWatch system. GAO analyzed key program documents, including test plans, test results, and modeling studies. GAO assessed Gen-3 testing against best practices, reviewed relevant literature, and discussed the BioWatch program and testing efforts with key agency officials and national laboratories staff. The Department of Homeland Security (DHS) lacks reliable information about BioWatch Gen-2's technical capabilities to detect a biological attack and therefore lacks the basis for informed cost-benefit decisions about upgrades to the system. DHS commissioned several tests of the technical performance characteristics of the current BioWatch Gen-2 system, but has not developed performance requirements that would enable it to interpret the test results and draw conclusions about the system's ability to detect attacks. Although DHS officials said that the system can detect catastrophic attacks, which they define as attacks large enough to cause 10,000 casualties, they have not specified the performance requirements necessary to reliably meet this operational objective. 
In the absence of performance requirements, DHS officials said computer modeling and simulation studies support their assertion. However, none of these studies were designed to incorporate test results from the Gen-2 system and comprehensively assess the system against the stated operational objective. Additionally, DHS has not prepared an analysis that combines the modeling and simulation studies with the specific Gen-2 test results to assess the system's capabilities to detect attacks. Finally, we found limitations and uncertainties in the four key tests of the Gen-2 system's performance. Because it is not possible to test the BioWatch system directly by releasing live biothreat agents into the air in operational environments, DHS relied on chamber testing and the use of simulated biothreat agents, which limit the applicability of the results. These limitations underscore the need for a full accounting of statistical and other uncertainties, without which decision makers lack a full understanding of the Gen-2 system's capability to detect attacks of defined types and sizes and cannot make informed decisions about the value of proposed upgrades. The actions and decisions DHS made regarding the acquisition and testing of a proposed next generation of BioWatch (Gen-3) partially aligned with best practices GAO previously identified for developmental testing of threat detection systems. For example, best practices indicate that resilience testing, or testing for vulnerabilities, can help uncover problems early. While DHS took steps to help build resilience into the Gen-3 testing, future testing could be improved by using more rigorous methods to help predict performance in different operational environments. DHS canceled the Gen-3 acquisition in April 2014, but GAO identified lessons DHS could learn by applying these best practices to the proposed Gen-2 upgrades. 
According to experts and practitioners, the polymerase chain reaction (PCR), which detects genetic signatures of biothreat agents, is the most mature technology to use for an autonomous detection system. DHS is considering autonomous detection as an upgrade to Gen-2, because according to DHS, it may provide benefits such as reduction in casualties or clean-up costs. But the extent of these benefits is uncertain because of several assumptions, such as the speed of response after a detection, that are largely outside of DHS's control. As a result, the effectiveness of the response—and the number of lives that could be saved—is uncertain. Further, an autonomous detection system must address several likely challenges, including minimizing possible false positive readings, meeting sensitivity requirements, and securing information technology networks. GAO recommends DHS not pursue upgrades or enhancements for Gen-2 until it reliably establishes the system's current capabilities. GAO also recommends DHS incorporate best practices for testing in conducting any system upgrades. DHS generally concurred with GAO's recommendations. For more information, contact Chris Currie at (404) 679-1875 or curriec@gao.gov.
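The false positive challenge noted above has a statistical root: when true attacks are extremely rare, even a highly specific detector produces mostly false alarms. A minimal Bayes'-rule sketch, with all numbers hypothetical:

```python
def positive_predictive_value(prevalence: float, sensitivity: float,
                              specificity: float) -> float:
    """P(true attack | positive reading) via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Hypothetical: a true attack present in 1 of 10 million samples,
# with sensitivity 0.95 and specificity 0.9999.
ppv = positive_predictive_value(1e-7, 0.95, 0.9999)
# ppv is well under 1 percent under these assumptions.
```

Under these assumed numbers, fewer than 1 in 1,000 positive readings would correspond to a real attack, which is one reason confirmatory analysis of initial positive results matters for any autonomous detection concept.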
In recent years, DOD has taken steps to improve its processes for acquiring and sustaining weapon systems. As part of these improvements, program managers are now responsible for the total life-cycle management of a weapon system, to include the sustainment of the system. In addition, DOD has directed weapon system program managers to develop acquisition strategies that maximize competition, innovation, and interoperability and the use of commercial, rather than military-unique, items to reduce costs. Within the area of weapon system sustainment, DOD is pursuing the use of performance-based logistics as the preferred support strategy for its weapon systems. Performance-based logistics, a variation on other contractor logistics support strategies calling for the long-term support of weapon systems, involves defining a level of performance that the weapon system is to achieve over a period of time at a fixed cost to the government. Technical data rights can affect DOD’s plans to sustain weapon systems throughout their life cycle. For example, DOD would need technical data rights if it opts to reduce spare parts costs by developing new sources of supply, to meet wartime surge requirements by contracting with additional equipment suppliers, or to reduce system acquisition costs by recompeting follow-on procurements of the equipment. In addition, DOD may need to develop depot-level maintenance capabilities for its weapon systems at government depots in order to meet legislative requirements. DOD is required under 10 U.S.C. 2464 to identify and maintain within government- owned and government-operated facilities a core logistics capability, including the equipment, personnel, and technical competence identified as necessary for national defense emergencies and contingencies. Under 10 U.S.C. 
2466, not more than 50 percent of the funds made available in a fiscal year to a military department or defense agency for depot-level repair and maintenance can be used to contract for performance by nonfederal personnel. These provisions can limit the amount of depot-level maintenance that can be performed by contractors. Finally, DOD would also need technical data rights for a weapon system should a contractor fail to perform, including a contractor working under a performance-based logistics arrangement. DOD has referred to this contingency as an “exit strategy.” The Army and the Air Force have encountered limitations in their sustainment plans for some fielded weapon systems because they lacked needed technical data rights. The lack of technical data rights has limited the services’ flexibility to make changes to sustainment plans that are aimed at achieving cost savings and meeting legislative requirements regarding depot maintenance capabilities. During our review we identified seven weapon system programs where these military services encountered limitations in their sustainment plans. Although the circumstances surrounding each case were unique, earlier decisions made on technical data rights during system acquisition were cited as a primary reason for the limitations subsequently encountered. As a result, the services had to alter their plans for developing maintenance capability at public depots, new sources of supply to increase production, or competitive offers for the acquisition of spare parts and components to reduce sustainment costs. In at least three of the cases, the military service made attempts to obtain needed technical data subsequent to the system acquisition but found that the equipment manufacturer declined to provide the data or that acquiring the data would be too expensive. We did not assess the rationale for the decisions made on technical data rights during the acquisition of these systems. 
The seven weapon system programs we identified where a lack of technical data rights affected the implementation of sustainment plans are summarized below: C-17 aircraft: When the Air Force began acquisition of the C-17 aircraft, it did not acquire technical data rights needed to support maintenance of the aircraft at public depots. According to a program official, the Air Force did not consider the aircraft’s depot maintenance workload as necessary to support DOD’s depot maintenance core capability. Subsequently, however, the Air Force’s 2001 depot maintenance core assessment identified the C-17 aircraft workload as necessary to support core capability. The Air Force determined that it did not have the technical data rights needed to perform the required maintenance. According to C-17 program officials, the C-17 prime contractor did not acquire data rights for C-17 components provided by sub-vendors and consequently was not able to provide the needed data rights to the Air Force. The prime contractor has encouraged its sub-vendors to cooperate with the Air Force in establishing partnerships to accomplish the needed core depot maintenance. Under these partnerships, Air Force depots would provide the facilities and labor needed to perform the core depot maintenance work, and the sub-vendor would provide the required technical data. According to Air Force officials, there are some instances where the sub-vendor is unwilling to provide the needed technical data. For example, in the case of the C-17’s inertial navigation unit, the sub-vendor maintains that the inertial navigation unit is a commercial derivative item and the technical data needed to repair the item are proprietary. As of April 18, 2006, the sub-vendor was declining to provide the technical data needed to the Air Force. Without the rights to the technical data or a partnership with the sub-vendor, the Air Force cannot develop a core maintenance capability for this equipment item. 
F-22 aircraft: The acquisition of the Air Force’s F-22 aircraft did not include all of the technical data needed for establishing required core capability workload at Air Force depots. Early in the F-22 aircraft’s acquisition, the Air Force planned to use contractors to provide needed depot-level maintenance and therefore decided not to acquire some technical data rights from sub-vendors in order to reduce the aircraft’s acquisition cost. Subsequently, however, the Air Force determined that portions of the F-22 workload were needed to satisfy core depot maintenance requirements. The Air Force is currently negotiating contracts for the technical data rights needed to develop depot-level maintenance capability. While the Air Force has negotiated contracts to acquire technical data for four F-22 aircraft components, F-22 program officials expressed concern that it may become difficult to successfully negotiate rights to all components. C-130J aircraft: The Air Force purchased the C-130J aircraft as a commercial item and, as such, did not obtain technical data rights needed to competitively purchase C-130J-unique spare parts and components or to perform depot-level maintenance core workload. The C-130J shares many common components with earlier versions of the C-130 for which the government has established DOD and contractor repair sources. In 2004, the DOD Inspector General reported that the Air Force’s use of a commercial item acquisition strategy to acquire the C-130J aircraft was unjustified. In response to the Inspector General’s findings and added congressional interest, the Air Force is converting its C-130J acquisition to traditional defense system acquisition and sustainment contracts. However, because the Air Force did not acquire the necessary technical data during the acquisition process, it has less leverage to negotiate rights to data. 
In 2005, the Air Force approached the aircraft manufacturer to purchase technical data rights for C-130J-unique components, but the aircraft manufacturer declined to sell the data rights. Because of its lack of the needed technical data, the Air Force is planning to establish partnerships with C-130J sub-vendors that have technical data rights to components of the C-130J. Under these partnerships, Air Force depots would provide the facilities and labor needed to perform the core depot maintenance work, and the sub-vendors would provide the required technical data. C-130J program officials expressed concerns that in some instances sub-vendors may not be willing to partner. The Air Force currently expects to develop approximately 90 partnerships with as many different vendors on approximately 300 C-130J core candidate components. Program officials expressed concern about the proliferation of partnerships and said the Air Force will incur additional costs to develop, manage, and monitor these partnerships, but they had not determined what these costs will be. Up-armored High-Mobility Multipurpose Wheeled Vehicles: When the Army first developed the up-armored HMMWV in 1993, it did not purchase the technical data necessary to develop new sources of supply to increase production. Army officials anticipated fielding these vehicles to a limited number of Army units for reconnaissance and peacekeeping purposes. At that time, the Army did not obtain technical data required for the manufacture of up-armored HMMWVs. With the increasing threat of improvised explosive devices during operations in Iraq, demand for up-armored HMMWVs increased substantially, from 1,407 vehicles in August 2003 to 8,105 vehicles by September 2004. According to Army officials, the manufacturer declined to sell the rights to the technical data package. 
Because of the lack of technical data rights to produce up-armored HMMWVs, program officials explained they were unable to rapidly contract with alternate suppliers to meet the wartime surge requirement. Stryker family of vehicles: When acquiring the Stryker, the Army did not obtain technical data rights needed to develop competitive offers for the acquisition of spare parts and components. Following the initial acquisition, the program office analyzed alternatives to the interim contractor support strategy for the weapon system and attempted to acquire rights to the manufacturer’s technical data package. The technical data package describes the parts and equipment in sufficient technical detail to allow the Army to use competition to lower the cost of parts. The contractor declined to sell the Stryker’s technical data package to the Army. Further, according to an Army Audit Agency report, the project office stated that the cost of the technical data, even if available, would most likely be prohibitively expensive at this point in the Stryker’s fielding and would likely offset any cost savings resulting from competition. Airborne Warning and Control System aircraft: The Air Force lacked technical data for the AWACS needed to develop competitive offers for the purchase of certain spare parts. When the Air Force recently purchased cowlings (metal engine coverings) for the AWACS, it did so on a noncompetitive, sole-source basis. The Defense Contract Management Agency recommended that the cowlings be competed because the original equipment manufacturer’s proposed price was not fair and reasonable and because another potential source for the part was available. Despite the recommendation, however, the Air Force said it lacked the technical data to compete the purchase. 
We noted that while the Air Force and the original equipment supplier have a contract that could allow the Air Force to order technical drawings for the purpose of purchasing replenishment spare parts, the contractor had not always delivered such data based on uncertainties concerning the Air Force’s rights to the data. M4 carbine: When the Army purchased its new M4 carbine, it did not acquire the technical data rights necessary to recompete follow-on purchases of the carbine. The M4 carbine is a derivative of the M16 rifle and shares 80 percent of its parts with the M16. However, because the remaining 20 percent of parts were funded by the developer, the Army did not have all the rights needed to compete subsequent manufacture of the M4. The Army estimated that the unit cost is about twice as much for the M4 compared with the M16, despite increases in procurement quantities for the M4 and the large commonality of parts. According to Army officials, having the technical data rights for the M16 allowed the Army to recompete the procurement of the rifle, resulting in a significantly decreased unit procurement cost. Although we did not assess the rationale for the decisions made on technical data rights during the acquisition of these systems, several factors may complicate program managers’ decisions on long-term technical data rights for weapon systems. These factors include the following: The contractor’s interests in protecting its intellectual property rights. Because contractors need to protect their intellectual property from uncompensated use, they often resist including contract clauses that provide technical data rights to the government. The extent to which the system being acquired incorporates technology that was not developed with government funding. According to DOD’s acquisition guidance, the government’s funding of weapon system development determines the government’s rights to technical data. 
Weapon systems are frequently developed with some mix of contractor and government funding, which may present challenges to DOD in negotiating technical data rights with the contractor. The potential for changes in the technical data over the weapon system’s life cycle. The technical data for a weapon system may change over its life cycle, first as the system’s technology matures and later as the system undergoes modifications and upgrades to incorporate new technologies and capabilities. The potential for changes in technical data present challenges concerning when the government should take delivery of technical data, the format used to maintain technical data, and whether the data should be retained in a government or contractor repository. The extent to which the long-term sustainment strategy may require rights to technical data versus access to the data. According to Army officials, access to contractor technical data is sometimes presented as an alternative to the government taking delivery of the data. These officials noted that while access to technical data may allow for oversight of the contractor and may reduce the program manager’s data management costs, it may not provide the government with rights to use the technical data should a change in the sustainment plan become necessary. The numerous funding and capability trade-offs program managers face during the acquisition of a weapon system. Program managers are frequently under pressure to spend limited acquisition dollars on increased weapon system capability or increased numbers of systems, rather than pursuing technical data rights. The long life cycle of many weapon systems. With weapon systems staying in DOD’s inventory for longer periods (up to 40 years), it may be difficult for the program manager to plan for future contingencies such as modifications and upgrades, spare parts obsolescence, diminishing manufacturing support, and diminishing maintenance support. 
DOD’s acquisition policies do not specifically address long-term needs for technical data rights to sustain weapon systems over their life cycle, and in the absence of a DOD-wide policy, the Army and the Air Force are working independently to develop structured approaches for defining technical data requirements and securing rights to those data. DOD’s current acquisition policies do not specifically require program managers to assess long-term needs for technical data rights to support weapon systems and, correspondingly, to develop acquisition strategies that address those needs. DOD guidance and policy changes, as part of the department’s acquisition reforms and performance-based strategies, have deemphasized the acquisition of technical data rights. DOD concurred with but has not implemented our August 2004 recommendation for developing technical data acquisition strategies, although it has recently reiterated its intent to do so. Army and Air Force logistics officials are working independently to develop structured approaches for determining technical data rights requirements and securing long-term rights for use of those data. Logistics officials told us that their efforts would benefit from having a DOD policy that specifically addresses long-term technical data needs for weapon system sustainment. Current DOD acquisition policies do not specifically require program managers to assess long-term needs for technical data rights to sustain weapon systems, and, correspondingly, to develop acquisition strategies that address those needs. DOD Directive 5000.1, the agency’s policy underlying the defense acquisition framework, designates program managers as the persons with responsibility and authority for accomplishing acquisition program objectives for development, production, and sustainment to meet the users’ operational needs. 
The directive, however, does not provide specific guidance as to what factors program managers should consider in developing a strategy to sustain the weapon system, including considerations regarding technical data. DOD Instruction 5000.2, the agency’s policy for implementing DOD Directive 5000.1, requires program managers to ensure the development of a flexible strategy to sustain a program so that the strategy may evolve throughout the weapon system’s life cycle. In addition, DOD provides non-mandatory guidebooks to assist program managers with acquisition and product support. However, DOD acquisition policy does not specifically direct the program manager, when acquiring a weapon system, to define the government’s requirements for technical data rights, an important aspect of a flexible sustainment strategy. DOD guidance and policy changes, as part of the department’s acquisition reforms and performance-based strategies, have deemphasized the acquisition of technical data rights. For example, a 2001 memorandum signed by DOD’s senior acquisition official stated that the use of performance-based acquisition strategies may obviate the need for data or rights. Also in 2001, DOD issued guidance on negotiating intellectual property rights and stated that program officials should seek to establish performance-based requirements that enhance long-term competitive interests, in lieu of acquiring detailed design data and data rights. In a May 2003 revision of its acquisition policy, DOD eliminated a requirement for program managers to provide for long-term access to technical data and required them to develop performance-based logistics strategies. Even prior to the May 2003 revision of DOD’s acquisition policy, we had raised concerns about whether DOD placed sufficient emphasis on obtaining technical data during the acquisition process. 
We reported in 2002 that DOD program offices had often failed to place adequate emphasis on obtaining needed technical data during the acquisition process. We recommended that DOD emphasize the importance of obtaining technical data and consider including a priced option for the purchase of technical data when considering proposals for new weapon systems or modifications to existing systems. While DOD concurred with the recommendation, it subsequently made revisions to its acquisition policies in May 2003, as noted above, that eliminated the prior requirement for the program manager to provide for long-term access to data. DOD also has not implemented a prior recommendation we made for developing technical data acquisition strategies, although it has recently reiterated its intent to do so. In August 2004, we reported that adoption of performance-based logistics at the weapon system platform level may be influencing program managers to provide for access only to technical data necessary to manage the performance-based contract during the acquisition phase—and not to provide a strategy for the future delivery of technical data in case the performance-based arrangement failed. We recommended that DOD consider requiring program offices to develop acquisition strategies that provide for a future delivery of sufficient technical data should the need arise to select an alternative source or to offer the work out for competition. In response to our recommendation, DOD concurred that technical, product, and logistics data should be acquired by the program manager to support the development, production, operation, sustainment, improvement, demilitarization, and disposal of a weapon system. 
Furthermore, the department recognized the need to take steps to stress the importance of technical data by its stated intent to include a requirement in DOD’s acquisition policies (DOD Directive 5000.1 and DOD Instruction 5000.2) for the program managers to establish a data management strategy that requires access to the minimum data necessary to sustain the fielded system; to recompete or reconstitute sustainment, if necessary; to promote real-time access to data; and to provide for the availability of high-quality data at the point of need for the intended user. In the case of performance-based arrangements, that would include acquiring the appropriate technical data needed to support an exit strategy should the arrangement fail or become too expensive. Despite DOD’s concurrence with our recommendation, however, efforts to implement these changes have been delayed. Army and Air Force logistics officials are independently developing structured approaches for determining when and how in the acquisition process the service should assess its requirements for technical data and secure its long-term rights for use of those data. The aim of these efforts is to ensure future sustainment needs of weapon systems are adequately considered and supported early during the acquisition process. Logistics officials from each service told us that their efforts would benefit from having a DOD policy that specifically addresses long-term technical data needs for weapon system sustainment. In the absence of a mandatory DOD requirement to address technical data, service officials said, program managers may not fully consider and incorporate long-term requirements for technical data rights during system acquisition. 
According to Army and Air Force officials, their reviews of current policies and practices indicate that it is during the development of the solicitation and the subsequent negotiation of a proposed contract that the government is in the best position to secure required technical data rights. This point in the acquisition process is likely to present the greatest degree of competitive pressure, and the weapon system program office can consider technical data as a criterion for evaluating proposals and selecting a contractor. In addition, the Air Force is pursuing the use of priced options negotiated in contracts for new weapon systems or modifications to existing systems. A priced option preserves the government’s ability to acquire technical data rights at some point later in the weapon system’s life cycle. According to Air Force officials, priced options for technical data may ensure the government’s rights to the data and control the cost of technical data in the future. The Air Force is attempting to incorporate priced options for technical data in two new weapon system acquisitions. We have previously recommended that DOD require the military services to consider the merits of including a priced option for the purchase of technical data when proposals for new weapon systems or modifications to existing systems are being considered. The Army established a working group in March 2005 to serve as a forum for determining requirements for and resolving issues associated with the management and use of technical data. One task of the working group is to develop a structured process for determining what technical data are needed for any given system. Another task is to clarify technical data policy and reconcile the best practices of acquisition reform with the need for technical data rights in support of weapon system acquisition and sustainment.
The group is also reviewing pertinent federal and DOD policy and guidance, as well as instruction materials used by the Defense Acquisition University for acquisition career training, with the aim of identifying ambiguities or inconsistencies. This effort focuses on areas of the acquisition process where technical data and acquisition intersect, such as systems engineering, configuration management, data management, contracting, logistics, and financial management. Some anticipated products from the group include proposed changes to integrate and clarify policy on technical data and weapon systems; draft instruction material to better define and explain the value of technical data rights and the uses of technical data throughout the weapon system life cycle; and a comprehensive primer to provide the acquisition professional a guide for ensuring that there is a contract link between weapon system acquisition and sustainment strategies on the one hand and the technical data strategy on the other. According to members of the working group, if the government’s rights have not been protected in the contract, then it may be necessary to negotiate the rights to use the data at a later date, which could be cost-prohibitive. Army Materiel Command officials told us that having a DOD policy on when and how in the acquisition process technical data rights should be addressed would help them as they revise their policy and guidance. The product data working group plans to complete its preliminary work by the end of fiscal year 2006. In January 2006, the U.S. Army’s Tank-automotive and Armaments Command completed a study evaluating the importance of technical data over the life cycle of a weapon system, with particular emphasis on sustainment.
While the Army had not yet approved and released the final report, members of the study team indicated the following:

Previous DOD guidance on the data rights required for performance-based logistics contracts has been ambiguous and open to misinterpretation. This ambiguity has resulted in many programs’ not acquiring rights to technical data for long-term weapon system sustainment.

Lack of technical data rights leads to risks associated with the inability to broaden the industrial base to support Global War on Terrorism surge requirements.

The current process to identify the government’s technical data rights is ad hoc and unstructured.

The government’s rights to technical data are independent of the logistics support strategy—whether government (organic) support, traditional contract logistics support, or performance-based logistics.

According to team members, potential recommendations from the study are to establish a new policy requiring the program manager to complete a technical data rights decision matrix and to weigh the cost of acquiring technical data against program risk. The technical data rights should be negotiated as early as possible in the contracting process and ideally should be used as a source selection factor. The study team further states that the government should ensure that rights to use the data are secured in the system development and demonstration contract. Air Force officials are currently reviewing and developing proposed changes to weapon system acquisition and support policies to require that sustainment support and technical data rights decisions be made early in weapon system acquisition. These efforts are part of the Air Force Materiel Command’s product support campaign, an effort to better integrate the activities of the service’s acquisition and logistics communities.
Air Force officials involved with the campaign said their efforts could be facilitated if DOD’s acquisition policy were revised to more clearly direct program managers when and how they are to define and secure the government’s data rights during weapon system acquisition. The campaign’s policy focus team is working on several efforts that would provide a more structured approach to early determination of the government’s technical data rights: revised policies to require that sustainment support decisions be made and technical data rights be defined during the technology phase of acquisition but prior to system development and demonstration; a standard template for contract solicitations, to be used to guide the acquisition workforce in securing technical data rights; contract language to include a priced option for the delivery of technical data and rights for use of data, which would be negotiated and included as part of the system development and demonstration solicitation; and an independent logistics assessment process, to provide an objective review of the acquisition program office’s sustainment support plans before major milestone decisions. In May 2006, the Secretary of the Air Force directed that the acquisition of technical data and associated rights be addressed specifically in all acquisition strategy plans, reviews, and associated planning documents for major weapon system programs and subsequent source selections. The Secretary stated that these actions are needed to address challenges in meeting legislative requirements to maintain a core logistics capability and to limit the percentage of depot maintenance funds expended for contractor performance. The competitive source selection process, according to the Secretary, provides the best opportunity to address technical data requirements while at the same time brokering the best deal for the government in regard to future weapon systems sustainment.
Under current DOD acquisition policies, the military services lack assurance that they will have the technical data rights needed to sustain weapon systems throughout their life cycle. We have previously made recommendations that DOD enhance its policies regarding technical data. DOD has concurred with these recommendations but has not implemented them. In fact, DOD has de-emphasized the acquisition of technical data rights as part of the department’s acquisition reforms and performance-based strategies. Our current work, however, shows that the services face limitations in their sustainment plans for some fielded weapon systems due to a lack of needed technical data rights. Furthermore, program managers face numerous challenges in making decisions on technical data rights—decisions that have long-term implications for the life-cycle sustainment of weapon systems. Army and Air Force logistics officials have recognized weaknesses in their approaches to assessing and securing technical data rights, and each service has begun to address these weaknesses by developing more structured approaches. However, current DOD acquisition policies do not facilitate these efforts. Unless DOD assesses and secures its rights for the use of technical data early in the weapon system acquisition process, when it has the greatest leverage to negotiate, DOD may face later challenges in developing sustainment plans or changing these plans as necessary over the life cycle of its weapon systems. Delaying action in acquiring technical data rights can make these data cost-prohibitive or difficult to obtain later in the weapon system life cycle, and can impede DOD’s ability to comply with legislative requirements, such as core capability requirements.
To ensure that DOD can support sustainment plans for weapon systems throughout their life cycle, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to specifically require program managers to assess long-term technical data needs and establish corresponding acquisition strategies that provide for technical data rights needed to sustain weapon systems over their life cycle. These assessments and corresponding acquisition strategies should be developed prior to issuance of the contract solicitation; address the merits of including a priced contract option for the future delivery of technical data; address the potential for changes in the sustainment plan over the weapon system’s life cycle, which may include the development of maintenance capability at public depots, the development of new sources of supply to increase production, or the solicitation of competitive offers for the acquisition of spare parts and components; and apply to weapon systems that are to be supported by performance-based logistics arrangements as well as to weapon systems that are to be supported by other sustainment approaches. We also recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to incorporate these policy changes into DOD Directive 5000.1 and DOD Instruction 5000.2 when they are next updated. In commenting on a draft of this report, DOD concurred with our report and recommendations. DOD stated that the requirement for program managers to assess long-term technical data needs and establish corresponding strategies will be incorporated into DOD Instruction 5000.2 when it is next updated. If DOD updates its acquisition policy as stated, we believe this action will meet the intent of our recommendations. DOD’s response is included in appendix II. We conducted work at the Office of the Secretary of Defense, the Army, the Navy, and the Air Force.
The specific offices and commands we visited are listed in the attached briefing slides contained in appendix I. To identify sustainment plans for fielded weapon systems that may have been affected by the technical data rights available to the government, we met with Army and Air Force acquisition and logistics officials responsible for 11 weapon systems. We did not identify technical data issues affecting sustainment for three of these systems and excluded these systems from our subsequent review. We also excluded a weapon system—the Buffalo mine-protected route clearing equipment—that was acquired under the Army’s rapid fielding initiative to meet emergency needs. For the other seven weapon system programs—the C-17 aircraft, F-22 aircraft, C-130J aircraft, Up-armored High-Mobility Multipurpose Wheeled Vehicle, Stryker family of vehicles, Airborne Warning and Control System aircraft, and M4 carbine—we obtained information on the service’s requirement for rights to use the data, their success in obtaining data rights from the manufacturer, and the effect that a lack of data rights had on system sustainment plans. We did not assess the rationale for the decisions made on technical data rights during system acquisition, nor did we determine the extent that program offices complied with acquisition policies regarding technical data that existed at the time of the acquisition. However, we collected comments from acquisition and logistics personnel on the factors that complicate program managers’ decisions on long-term technical data rights for weapon systems. To examine the requirements for obtaining technical data rights under current DOD acquisition policies, we analyzed current DOD acquisition policies. Our review encompassed DOD-wide policies, including DOD Directive 5000.1 and DOD Instruction 5000.2, as well as service-specific policies. 
We discussed these policies with DOD and service officials responsible for developing acquisition and logistics policies, preparing system acquisition strategies, and implementing sustainment plans to obtain their views on the importance of considering technical data requirements during the acquisition process. To determine DOD’s plans to revise acquisition policy in response to a previous recommendation we made on technical data, we reviewed DOD correspondence and met with officials at the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics). We also met with Army and Air Force logistics officials to obtain information on their efforts to assess acquisition policies and make appropriate changes that would provide a structured process for assessing and securing government rights to technical data early in weapon system acquisition. We interviewed Army officials leading the Army Materiel Command’s Product Data and Engineering Working Group and Air Force officials addressing acquisition and logistics policies as part of the Air Force Materiel Command’s product support campaign. We also reviewed available documentation on the objectives and potential outcomes of these initiatives. We are sending copies of this report to the Secretary of Defense and to the Secretaries of the military services. Copies of this report will be made available to others upon request. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5140 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
The Committee was concerned that DOD policies may limit the services from purchasing technical data when acquiring new weapons systems, and thereby increase the systems’ life-cycle sustainment costs and delay the repair of mission-essential items. The Committee requested in House Report 109-89 that GAO review the services’ technical data policies and practices affecting life-cycle costs and availability of weapons systems. GAO previously examined this issue in a report on performance-based logistics (GAO-04-715, August 2004) and recommended policy changes for DOD. Specifically, DOD should: “consider requiring program offices, during weapon system acquisition, to develop acquisition strategies that provide for a future delivery of sufficient technical data to enable the program office to select an alternative source—public or private—or to offer the work out for competition if the performance-based arrangement fails or becomes prohibitively expensive.”

GAO will analyze and report on the following:

To what extent DOD policies limit the services from purchasing technical data when acquiring new weapons systems;

The status of DOD’s plans to revise acquisition policy in response to the previous GAO recommendation on technical data;

The costs for obtaining access in order to view, modify, or distribute technical data relating to the sustainment of procured systems; and

The amount of time required to reach back to the system manufacturer for technical data and what impact, if any, that delay has on repairing or modifying fielded systems.

DOD policies do not preclude the services from purchasing technical data rights during acquisition – but neither are they required to do so. DOD has been slow in responding to our recommendation in GAO-04-715 but has revised discretionary guidance that addresses tech data rights. Available budget figures indicate limited cost growth for technical data supporting fielded weapons systems.
Although information on technical data issues affecting field maintenance is limited, our review showed maintenance units in Southwest Asia did not experience systemic problems in obtaining tech data. Army readiness data we reviewed did not indicate low equipment readiness rates due to maintenance.

Army Materiel Command (AMC), Fort Belvoir, Va. Tank-Automotive and Armaments Command (TACOM), Warren, Mich., and Rock Island, Ill. U.S. Army Forces Command (FORSCOM), Fort McPherson, Ga. 1st Cavalry Division, Fort Hood, Texas. Naval Air Systems Command (NAVAIR), Patuxent River, Md. Naval Sea Systems Command (NAVSEA), Washington, D.C.

To determine the status of DOD’s plans to revise acquisition policy in response to our prior report, we analyzed DOD and service acquisition policy and guidance, interviewed DOD and service officials, and collected DOD correspondence addressing our recommendation. To determine costs for obtaining access to contractor technical data, we collected and analyzed operations and maintenance budget data from the services. Our review focused on costs associated with sustaining fielded systems. To determine the impact of maintenance delays that might be caused by the need to obtain technical data from the manufacturer, we collected and analyzed non-mission capable reports on equipment managed by the U.S. Army Tank-Automotive and Armaments Command. We also contacted logistics assistance representatives to collect information on experiences with technical data in the field. We also collected acquisition and logistics support information on 7 Army systems—Abrams, Bradley, HMMWV, M-88, Stryker, mine clearing equipment, and vehicle armor kits; and 4 Air Force systems—C-17, C-130J, F-117, and F-22.

WHAT ARE TECHNICAL DATA? Technical data are recorded information of a scientific or technical nature, regardless of the form or method of the recording.
Typically, technical data refer to: Technical data packages: All applicable drawings, associated lists, specifications, standards, performance requirements, quality assurance provisions, and packaging details necessary to support an acquisition strategy and ensure the adequacy of item performance. Technical manuals: Publications that contain instructions for the installation, operation, maintenance, training, and support of weapons systems. A maintenance technical manual normally includes maintenance procedures, parts lists or parts breakdown, and related technical information or processes.

WHY ARE TECHNICAL DATA IMPORTANT? The private sector and the government often have important competing interests in technical data rights. A company’s interest in protecting its intellectual property (IP) from uncompensated exploitation is of paramount importance—such that companies often resist including contract clauses providing tech data rights to the government. The government needs to have adequate rights to support its weapons systems, including developing repair and maintenance procedures, selecting alternate repair sources, and procuring spare parts competitively.

WHEN ARE TECHNICAL DATA RIGHTS ACQUIRED, AND HOW ARE THESE DATA MANAGED? Provision for acquiring weapons systems tech data rights, access, and delivery should be made early in acquisition. Tech data rights decisions made during acquisition will affect the government’s ability to support weapons systems throughout increasingly longer life cycles—up to approximately 40 years. Delaying action in acquiring technical data rights can make them cost-prohibitive or difficult to obtain. Technical data are jointly managed and may be stored at either the government’s or the contractor’s repository. DOD Directive 5000.1 and Instruction 5000.2 provide DOD’s overall acquisition policy. The Defense acquisition guidebook and product support guide offer additional discretionary guidance.
5000.1 “The Defense Acquisition System”—Designates the program managers as the persons with responsibility and authority for accomplishing acquisition program objectives for development, production, and sustainment to meet the users’ operational needs. 5000.2 “Operation of the Defense Acquisition System”—Instructs program managers to ensure development of a flexible strategy to sustain a program so that the strategy may evolve throughout the weapons system life cycle. Discretionary guidebooks—Guide program managers to determine minimum data needs to support the sustainment strategy over the life cycle of a system. DOD policy does not require program managers to document their strategy for ensuring long-term access to tech data over the life cycle of a weapon system.

Army’s acquisition policy—Requires program manager to develop a logistics support strategy, including a description of how tech data rights or long-term access to tech data is to be obtained.

Navy’s acquisition policy (SECNAV Instructions 4105.1A and 5000.2c, and NAVSO P-3692)—An independent team is required to assess the adequacy of the program manager’s logistics support plan, including technical data. However, there is no requirement compelling a system program manager to address how tech data rights or long-term access to technical data is to be obtained.

Air Force’s acquisition policy (AF policy directives 63-1 and 20-5, and AF Instructions 63-107 and 63-101)—Requires program manager to address tech data in a product support strategy.

Service initiatives: Army—Product Data and Engineering Working Group; TACOM’s draft technical data rights study. Navy—Planned revisions to SECNAV Instruction 5000.2C; an independent logistics assessment handbook. Air Force—Developing an independent logistics assessment process.

Service officials responsible for these efforts believe their work would be facilitated by a DOD policy providing clear direction to the program managers to develop a tech data strategy early in the acquisition process.
Program managers are under financial pressure to use available funds to buy more inventory or capability rather than technical data. DOD has deemphasized the requirement of acquiring rights to tech data: The “use of performance-based acquisition strategies…may obviate the need for data and/or rights.” USD (AT&L) memo on the reform of intellectual property rights of contractors, Jan. 5, 2001. “Finally, program officials should seek to establish performance-based requirements that enhance long-term competitive interests, in lieu of acquiring detailed design data and data rights.” DOD guide to negotiating intellectual property rights, Oct. 15, 2001. DOD acquisition policy (revised May 2003) eliminated requirement for program managers to ensure long-term access to technical data and required them to develop performance-based logistics strategies. DOD has been slow in responding to our recommendation to revise its technical data acquisition policy. In August 2004, GAO compared the practices of DOD with those of the private sector for acquiring new systems. We found that the private sector typically acquires the technical data for new systems, while DOD program managers often do not acquire the technical data, opting instead to buy larger quantities or greater system capability with available funding. GAO recommended that DOD consider requiring program offices to develop strategies providing for future delivery of technical data to allow selection of alternate sources or offering work for competition. DOD concurred, stating it would update its DOD 5000 regulations to include this requirement. This change was to be accomplished by the Defense Acquisition Policy Working Group (DAPWG). DOD has been delayed in updating its 5000 regulations due to other DAPWG priorities (i.e., the Quadrennial Defense Review). DOD recently reaffirmed its intent to implement our recommendation when DAPWG returns to updating the 5000 regulations in May 2006. 
Budget Requirements for Technical Data ($ in millions): Part of the increase is due to increased technical data requirements, which resulted primarily from cessation of Abrams Tank and Bradley Fighting Vehicle upgrade programs. The remaining $101 million of the increase is due to programs not involving technical data requirements, such as obsolescence management of replacement parts and digitization of maintenance technical manuals. Even within individual contracts, technical data costs may not be apparent. According to service contracting officials, some weapon systems support contracts do not separately price technical data. Examples are the Air Force’s C-130J and F-117. Logistics assistance representatives’ reports indicated instances where field maintainers initially lacked technical data (e.g., repair manuals), but these cases were infrequent and subsequently resolved. Some new equipment items that were rapidly fielded into service lacked accompanying technical data. Readiness rates for equipment in Southwest Asia theaters of operations have not fallen below acceptable levels, except in four minor instances, according to recent Army data. In addition to the contact named above, Thomas Gosling, Assistant Director; Larry Junek; Andrew Marek; John Strong; Cheryl Weissman; and John Wren were major contributors to this report.
A critical element in the life cycle of a weapon system is the availability of the item's technical data--recorded information used to define a design and to produce, support, maintain, or operate the item. Because a weapon system may remain in the defense inventory for decades following initial acquisition, technical data decisions made during acquisition can have far-reaching implications over its life cycle. In August 2004, GAO recommended that the Department of Defense (DOD) consider requiring program offices to develop acquisition strategies that provide for future delivery of technical data should the need arise to select an alternative source for logistics support or to offer the work out for competition. For this review, GAO (1) evaluated how sustainment plans for Army and Air Force weapon systems had been affected by technical data rights and (2) examined requirements for obtaining technical data rights under current DOD acquisition policies. The Army and the Air Force have encountered limitations in their sustainment plans for some fielded weapon systems because they lacked needed technical data rights. The lack of technical data rights has limited the services' flexibility to make changes to sustainment plans that are aimed at achieving cost savings and meeting legislative requirements regarding depot maintenance capabilities. GAO identified seven weapon system programs that encountered such limitations--C-17, F-22, and C-130J aircraft, Up-armored High-Mobility Multipurpose Wheeled Vehicle, Stryker family of vehicles, Airborne Warning and Control System aircraft, and M4 carbine. Although the circumstances surrounding each case were unique, earlier decisions made on technical data rights during system acquisition were cited as a primary reason for the limitations subsequently encountered. 
As a result of the limitations encountered, the services had to alter their plans for developing maintenance capability at public depots, developing new sources of supply to increase production, or soliciting competitive offers for the acquisition of spare parts and components to reduce sustainment costs. For example, the Air Force identified a need to develop a core maintenance capability for the C-17 at government depots to ensure it had the ability to support national defense emergencies, but it lacked the requisite technical data rights. To mitigate this limitation, the Air Force is seeking to form partnerships with C-17 sub-vendors. However, according to Air Force officials, some sub-vendors have declined to provide the technical data needed to develop core capability. Although GAO did not assess the rationale for the decisions made on technical data rights during system acquisition, several factors, such as the extent the system incorporates technology that was not developed with government funding and the potential for changes in the technical data over the weapon system's life cycle, may complicate program managers' decisions. Current DOD acquisition policies do not specifically address long-term technical data rights for weapon system sustainment. For example, DOD's policies do not require program managers to assess long-term needs for technical data rights to support weapon systems and, correspondingly, to develop acquisition strategies that address those needs. DOD, as part of the department's acquisition reforms and performance-based strategies, has deemphasized the acquisition of technical data rights. Although GAO has recommended that DOD emphasize the need for technical data rights, DOD has not implemented these recommendations. The Army and the Air Force have recognized weaknesses in their approaches to assessing and securing technical data rights and have begun to address these weaknesses by developing more structured approaches.
However, DOD acquisition policies do not facilitate these efforts. Unless DOD assesses and secures its rights for the use of technical data early in the weapon system acquisition process when it has the greatest leverage to negotiate, DOD may face later challenges in sustaining weapon systems over their life cycle.
Border Patrol is to apply consequences under CDS to all apprehended aliens, which numbered over 1.1 million along the southwest border from fiscal years 2013 through 2015. Border Patrol agents implement CDS by classifying apprehended aliens into one of seven noncriminal or criminal categories and then applying one or more of eight different consequences categorized as criminal, administrative, or programmatic. Border Patrol guidance states that Border Patrol agents must apply at least one administrative consequence to every apprehended alien but may apply more than one consequence, including using a mix of administrative, criminal, and programmatic consequences for a single apprehended alien. Figure 1 provides an overview of CDS alien classifications and figure 2 provides an overview of possible consequences under CDS. To assist Border Patrol agents in selecting the most appropriate consequence, Border Patrol rank orders these consequences from Most Effective and Efficient to Least Effective and Efficient for each alien classification and presents this information in a CDS guide. Figure 3 provides an example of one sector’s CDS guide for fiscal year 2015. According to CDS guidance, Border Patrol agents are encouraged to reference their sectors’ CDS guides to select the Most Effective and Efficient consequence based upon the alien’s classification. According to CDS PMO officials, agents can use discretion in selecting the consequence or consequences they apply to an alien based upon the circumstances of the subject’s apprehension, federal partner agencies’ capacity to provide support, and the prioritization of a consequence in that sector. CDS PMO is responsible for providing guidance, training, analytical, and other support to sectors for implementation of the CDS guide. See figure 4 for a map of Border Patrol’s southwest border sectors’ boundaries. CDS PMO facilitates the annual development of a CDS guide for each sector.
To develop each sector’s CDS guide, CDS PMO annually surveys sector management and uses the results of these surveys to inform the ranking of consequences. CDS PMO also requires each sector to convene at least 15 field staff (such as Border Patrol agents) to assess 15 factors related to the efficiency and effectiveness of each consequence for each alien classification. These factors include performance-related factors, such as the extent to which a consequence reduces recidivism; cost-related factors, such as Border Patrol’s cost to administer the consequence; and schedule-related factors, such as the amount of time it takes Border Patrol to apply a single consequence. To facilitate the annual process, CDS PMO program staff present the previous year’s data to field staff related to 12 factors—such as the sectors’ estimated cost and recidivism rate for each consequence—and direct field staff to use their professional judgment for the remaining 3 factors (the extent to which a consequence requires the assistance of strategic partners, is perceived as severe by apprehended aliens, and has a deterring effect on other aliens who consider crossing the border illegally). In addition to soliciting sector staff preferences, sector management complete a survey in which they are to prioritize factors regardless of the alien classification. After analyzing these results from sector staff and management, CDS PMO staff create a sector-specific guide that reflects the consequences’ ranking from Most Effective and Efficient (for the highest ranked consequence), to Highly Effective and Efficient, Effective and Efficient, Less Effective and Efficient, and Least Effective and Efficient (for the lowest ranked consequence). Most consequences available under CDS require the cooperation and resources of other federal agencies to detain, prosecute, litigate, and adjudicate removability of, or remove persons apprehended by Border Patrol (see figure 5).
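The guide-development process described above (scoring each consequence on efficiency and effectiveness factors for each alien classification, then sorting the results into five tiers) can be sketched in code. This is a hypothetical illustration only: the report does not specify CDS PMO's actual scoring formula, so the factor names, weights, ratings, and even-split tier assignment below are all assumptions, not the agency's methodology.

```python
# Hypothetical sketch: rank consequences by a weighted factor score and
# bin them into the five CDS effectiveness tiers. Weights, ratings, and
# the even-split tiering rule are illustrative assumptions.

TIERS = [
    "Most Effective and Efficient",
    "Highly Effective and Efficient",
    "Effective and Efficient",
    "Less Effective and Efficient",
    "Least Effective and Efficient",
]

def rank_consequences(scores, weights):
    """scores: {consequence: {factor: 1-5 rating}}; weights: {factor: weight}."""
    totals = {
        name: sum(weights[f] * rating for f, rating in factors.items())
        for name, factors in scores.items()
    }
    # Highest weighted score ranks first (Most Effective and Efficient).
    ordered = sorted(totals, key=totals.get, reverse=True)
    # Spread the ordered consequences evenly across the five tiers.
    per_tier = max(1, -(-len(ordered) // len(TIERS)))  # ceiling division
    tiered = {
        name: TIERS[min(i // per_tier, len(TIERS) - 1)]
        for i, name in enumerate(ordered)
    }
    return ordered, tiered

# Illustrative data: three consequences rated on three of the 15 factors.
weights = {"reduces_recidivism": 3.0, "cost": 1.0, "processing_time": 1.0}
scores = {
    "Criminal prosecution": {"reduces_recidivism": 5, "cost": 2, "processing_time": 2},
    "Expedited removal": {"reduces_recidivism": 4, "cost": 4, "processing_time": 4},
    "Voluntary return": {"reduces_recidivism": 1, "cost": 5, "processing_time": 5},
}
ordered, tiered = rank_consequences(scores, weights)
```

In this toy run, the weight placed on reducing recidivism determines which consequence lands in the Most Effective and Efficient tier, which mirrors how sector management's factor prioritization feeds the final sector-specific guide.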
DHS’s ICE oversees detention facilities for persons awaiting administrative adjudication of their removability from the United States and eligibility for any requested relief or protection from removal by DOJ’s Executive Office for Immigration Review (EOIR), and for persons awaiting ICE removal from the United States to their home country pursuant to a final order of removal. Additionally, DOJ’s USMS oversees detention for persons awaiting prosecution for criminal immigration and other offenses by DOJ’s USAO. Those convicted of a criminal immigration offense and sentenced to a term of imprisonment are incarcerated by DOJ’s Bureau of Prisons. Border Patrol uses an annual recidivism rate for the southwest border, along with other performance indicators, to monitor the effectiveness of CDS; however, weaknesses in the methodology used to calculate this rate limit its usefulness in assessing CDS. Border Patrol calculates its recidivism rate on an annual basis by dividing the total number of aliens apprehended multiple times within the fiscal year by the total number of aliens apprehended in that same fiscal year, as shown in figure 6. Border Patrol uses this rate among other performance indicators to assess the effectiveness of CDS, and DHS also reports the rate in its Annual Performance Report as one of six performance measures to assess efforts to secure U.S. air, land, and sea borders. In addition to using the recidivism rate to monitor the performance of each Border Patrol sector, Border Patrol uses the recidivism rate to determine the effectiveness of CDS consequences and incorporates the recidivism rate into risk assessments it uses to make resource allocation decisions. However, two limitations in the rate’s methodology hinder its usefulness in providing a complete picture of CDS effectiveness.
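The annual calculation described above (and shown in figure 6) can be sketched as a short computation over one fiscal year’s apprehension records. The code below is an illustrative sketch only, not Border Patrol’s actual system; the record format and sample data are hypothetical.

```python
# Illustrative sketch of Border Patrol's annual recidivism rate: aliens
# apprehended multiple times within a fiscal year, divided by the total
# number of distinct aliens apprehended that year. Data are hypothetical.
from collections import Counter

def annual_recidivism_rate(alien_ids):
    """alien_ids: one entry per apprehension event in a single fiscal year."""
    counts = Counter(alien_ids)
    total_aliens = len(counts)  # distinct aliens apprehended that year
    recidivists = sum(1 for c in counts.values() if c > 1)
    return recidivists / total_aliens if total_aliens else 0.0

# Hypothetical data: alien "A" apprehended twice; "B" and "C" once each.
rate = annual_recidivism_rate(["A", "A", "B", "C"])
print(round(rate, 2))  # 1 recidivist of 3 distinct aliens -> 0.33
```

Note that an alien apprehended late in one fiscal year and again early in the next would not count as a recidivist under this one-year calculation, which is the first limitation discussed below.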
These two limitations are (1) not accounting for an alien’s apprehension history beyond one fiscal year, and (2) not excluding apprehended aliens for whom ICE has no record of removal and who may remain in the United States. Alien apprehension history over multiple fiscal years. Border Patrol’s methodology for calculating recidivism—the percent of aliens apprehended multiple times along the southwest border within a fiscal year—limits its ability to assess CDS effectiveness because this calculation does not account for an alien’s apprehension history over multiple years. We and the DHS Office of Inspector General have identified limitations with this methodology. In a 2015 report, the DHS Office of Inspector General found that Border Patrol’s recidivism rate methodology did not fully measure performance results because its recidivism rate did not reflect an alien’s re-apprehension over multiple years. Specifically, the Office of Inspector General found the methodology did not properly account for persons apprehended near the end of a fiscal year who may re-cross the border a short time later in the new fiscal year, and recommended DHS develop and implement performance measures that track alien recidivism and re-apprehension rates over multiple fiscal years. DHS concurred with this recommendation and stated that it would address it as part of its broader State of the Border Risk Methodology—a strategy to identify high-risk areas along the border and to use this information to support decisions regarding the deployment of Border Patrol resources. However, in May 2016, CBP officials told us that the State of the Border Risk Methodology incorporates the same recidivism rate methodology discussed in the DHS Inspector General’s finding and is not intended to measure or report on the performance of border security efforts overall. As of September 2016, the DHS Office of Inspector General’s recommendation to track recidivism over multiple fiscal years remained open.
Further, our analysis measuring recidivism on the southwest border using multiple years of Border Patrol apprehension data showed a higher recidivism rate than Border Patrol’s reported rate using one fiscal year of apprehension data. Specifically, using apprehension data we obtained for fiscal years 2013 through 2015, we found that 25 percent of aliens apprehended in fiscal year 2015 were recidivists over this time period, nearly double the Border Patrol-reported rate of 14 percent for fiscal year 2015. Additionally, while DHS reported in its Annual Performance Report for 2017 that the recidivism rate for the southwest border had decreased each year since the implementation of CDS in 2011, our analysis showed that the recidivism rate using apprehensions across multiple years had increased from 21 percent in fiscal year 2014 to 25 percent in fiscal year 2015. Apprehended aliens for whom there is no ICE record of removal from the United States. Another reason Border Patrol’s methodology for calculating recidivism limits its ability to assess CDS effectiveness is that Border Patrol’s calculation neither accounts for nor excludes apprehended aliens who may remain in the United States. According to ICE, aliens apprehended by Border Patrol may remain in the United States after their apprehension if they obtain immigration status or protection, are awaiting the conclusion of immigration court proceedings or criminal trial, or are serving prison sentences, among other reasons. Our analysis of Border Patrol and ICE data showed that, in calculating the recidivism rate for fiscal years 2014 and 2015, Border Patrol included tens of thousands of aliens in the total number of aliens apprehended for whom ICE did not have a record of removal after apprehension and who may have remained in the United States without an opportunity to recidivate.
Specifically, our analysis of ICE enforcement and removal data showed that about 38 percent of the aliens Border Patrol apprehended along the southwest border in fiscal years 2014 and 2015 may have remained in the United States as of May 2016. This percentage includes 133,594 of 334,427 aliens apprehended by Border Patrol in fiscal year 2014 and 88,693 of 256,223 aliens apprehended by Border Patrol in fiscal year 2015. Our analysis measuring recidivism excluding aliens who, according to ICE data, were not removed and may remain in the United States showed a higher recidivism rate than Border Patrol’s reported rate using all apprehended aliens regardless of removal status. Specifically, using apprehension data from fiscal year 2015 and excluding aliens Border Patrol apprehended but who, according to ICE data, had not been removed from the United States, we calculated a recidivism rate of 18 percent compared to the DHS-reported recidivism rate of 14 percent. Further, our analysis measuring recidivism both using an alien’s apprehension history over multiple years and excluding aliens who may remain in the United States showed an even higher recidivism rate than Border Patrol’s reported recidivism rate or either method alone. Specifically, our analysis using a three-year apprehension history—fiscal years 2013 through 2015—and excluding aliens who may remain in the United States showed a recidivism rate of 29 percent for fiscal year 2015, compared to the 14 percent recidivism rate reported by Border Patrol, as shown in figure 7. CDS PMO officials stated that they include only one fiscal year of data in their recidivism rate calculation so that the agency can compare results and progress on an annual basis. However, analyzing apprehensions beyond one fiscal year to measure recidivism could provide Border Patrol with a more complete picture of CDS effectiveness and would not preclude Border Patrol from also comparing annual changes in the recidivism rate.
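The two adjustments applied in the analysis above can be sketched together: counting an alien as a recidivist if apprehended multiple times anywhere in a multiyear window, and excluding from the denominator aliens with no ICE removal record. This is an illustrative sketch under those stated assumptions; the data format, function name, and sample values are hypothetical, not the actual analysis code or data.

```python
# Illustrative sketch of the two adjustments: (1) use an alien's
# apprehension history over a multiyear window, not one fiscal year; and
# (2) exclude aliens with no ICE record of removal, who may remain in the
# United States without an opportunity to recidivate. Data are hypothetical.
from collections import defaultdict

def adjusted_recidivism_rate(apprehensions, target_year, removed_ids):
    """apprehensions: (alien_id, fiscal_year) pairs over the window;
    removed_ids: alien_ids with an ICE record of removal."""
    history = defaultdict(list)
    for alien_id, fy in apprehensions:
        history[alien_id].append(fy)
    # Denominator: aliens apprehended in the target year, excluding those
    # with no removal record (adjustment 2).
    base = {a for a, fys in history.items()
            if target_year in fys and a in removed_ids}
    # Numerator: of those, aliens apprehended more than once anywhere in
    # the multiyear window (adjustment 1).
    recidivists = {a for a in base if len(history[a]) > 1}
    return len(recidivists) / len(base) if base else 0.0

# Hypothetical FY2013-2015 window: "A" apprehended in 2013 and 2015; "B"
# and "C" only once, in 2015; "C" has no removal record and is excluded.
data = [("A", 2013), ("A", 2015), ("B", 2015), ("C", 2015)]
print(adjusted_recidivism_rate(data, 2015, removed_ids={"A", "B"}))  # 0.5
```

In the hypothetical sample, the one-year rate for fiscal year 2015 would be zero (no alien was apprehended twice in 2015), while the adjusted rate is 0.5, illustrating how both adjustments can raise the measured rate.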
While sector officials acknowledged that including apprehended aliens who may remain in the United States in the recidivism rate calculation is a limitation to assessing CDS effectiveness, Border Patrol headquarters officials stated that including aliens who may remain in the United States serving prison sentences in the recidivism rate is appropriate because incarceration prevents recidivism. However, the extent to which a CDS consequence resulting in incarceration prevents recidivism for an alien would not be known until the alien is returned to his or her home country. Further, due to the lack of collaboration between Border Patrol and ICE, PMO and sector officials stated that they do not have access to ICE enforcement and removal data that would allow them to determine the number of aliens apprehended by Border Patrol who may remain in the United States, including those incarcerated. Standards for Internal Control in the Federal Government states that managers need operational data to determine whether they are meeting their goals. Further, these standards state that information should be shared within the organization to ensure managers and others can effectively meet agency goals. Limitations in the methodology for calculating the recidivism rate hinder Border Patrol’s ability to assess the effectiveness of CDS over time. Strengthening the recidivism rate methodology, such as by using an alien’s apprehension history beyond one fiscal year and working with ICE to obtain access to alien case status data on removals to consider excluding aliens who may remain in the United States after their apprehension, would give Border Patrol a more complete assessment of recidivism along the southwest border. This, in turn, would allow Border Patrol leadership to more effectively evaluate the extent to which CDS is supporting its goal of securing the border and to better inform CDS implementation and border security efforts.
Additionally, more complete information about recidivism would help ensure that Border Patrol’s risk assessments are accurate and that the decisions made based upon these risk assessments are sound. Our analysis of Border Patrol agents’ application of the Most Effective and Efficient consequences as defined in each southwest border sector’s CDS guide showed that agents applied the Most Effective and Efficient consequence for 18 percent of the approximately 300,000 apprehensions in fiscal year 2015, a decline from the previous two years. Specifically, our analysis comparing results from fiscal year 2013 through fiscal year 2015 showed a decline in Border Patrol agents’ application of the Most Effective and Efficient consequence, from 28 percent of apprehensions in fiscal year 2013 to 26 percent in fiscal year 2014 and 18 percent in fiscal year 2015. Over this three-year period, our analysis further showed that Border Patrol agents increasingly applied consequences CDS guides had identified as Highly Effective and Efficient as well as Effective and Efficient, and decreased their application of the Less or Least Effective and Efficient consequences. Among more than 300,000 apprehensions in fiscal year 2015, Border Patrol applied a consequence CDS guides identified as Most Effective and Efficient 18 percent of the time, either Highly Effective and Efficient or Effective and Efficient 75 percent of the time, and Less or Least Effective and Efficient 7 percent of the time. (See figure 8 for Border Patrol’s application of Most to Least Effective and Efficient consequences for fiscal years 2013 through 2015.) Further, our analysis showed that Border Patrol agents varied by up to 39 percentage points in their application of the Most Effective and Efficient consequence across the nine southwest border sectors, and applied the Most Effective and Efficient consequence more often to aliens classified as criminals than to those classified as noncriminals.
Specifically, Border Patrol agents in the El Paso sector applied the Most Effective and Efficient consequence for 48 percent of apprehensions in fiscal year 2015—the highest percentage across the nine sectors—while Border Patrol agents in the Rio Grande Valley sector applied the Most Effective and Efficient consequence for the lowest percentage of apprehended aliens in fiscal year 2015—9 percent. Across all types of alien classifications, Border Patrol agents applied the Most Effective and Efficient consequence for 23 percent of alien apprehensions classified as criminal in fiscal year 2015—including targeted smugglers, suspected smugglers, and other criminals—compared to 17 percent of alien apprehensions categorized as noncriminal—first, second, or third time apprehensions, persistent apprehensions, and family units—with variance across sectors, as shown in figure 9. Border Patrol has not assessed the reasons that agents’ application of the Most Effective and Efficient consequence in CDS guides is relatively low, but officials cited various challenges, including agent concerns about whether the Most Effective and Efficient consequence has a greater impact on recidivism than other consequences. While Border Patrol agents use discretion when applying consequences based on their sector’s CDS guide, Border Patrol officials in one sector told us that the CDS guide did not always reflect what they believe is the Most Effective and Efficient consequence and that while the Most Effective and Efficient consequence seemed appropriate for certain alien classifications, it did not seem appropriate for other classifications.
Our analysis of Border Patrol apprehension data from fiscal year 2014 through fiscal year 2015, after excluding aliens who, according to ICE data, have not been removed and may remain in the United States, showed that while aliens classified as criminals were less likely to recidivate when Border Patrol agents applied the Most Effective and Efficient consequence, noncriminal aliens were more likely to recidivate when agents applied the Most Effective and Efficient consequence. Specifically, 22 percent of aliens classified as criminal who were given the Most Effective and Efficient consequence in fiscal year 2014 later recidivated in the period from fiscal year 2014 through 2015, compared to 27 percent of aliens classified as criminal and given other consequences. In contrast, 39 percent of aliens classified as noncriminal given the Most Effective and Efficient consequence in fiscal year 2014 later recidivated in the period from fiscal year 2014 through 2015, compared to 24 percent of aliens classified as noncriminal and given other consequences. Another challenge is a concern expressed by some Border Patrol sector officials that federal partners do not have the capacity to timely and fully implement consequences identified in CDS guides as Most Effective and Efficient, which may result in apprehended aliens remaining in the United States for an indeterminate amount of time. Specifically, some Border Patrol sector officials said agents may not apply the Most Effective and Efficient consequence listed in the CDS guide if it is Warrant or Notice to Appear, since it involves ICE detention and monitoring of an alien awaiting an immigration court date.
Border Patrol officials in one sector in Southern California said that ICE may have to release noncriminal aliens from detention who were given a consequence of Warrant or Notice to Appear, prior to the conclusion of their removal proceedings, because it may take up to several years for the alien’s merits hearing to occur in immigration court; and that agents are concerned that aliens released from detention will not show up for their immigration proceedings. According to EOIR, as of September 30, 2015, the number of pending cases for immigration courts in Southern California ranged from 975 cases in one court location to more than 50,000 in another location, and DOJ data show that nationally, the number of initial immigration cases EOIR completed for detained aliens decreased 55 percent from fiscal year 2011 to 2015. Our analysis of ICE case status data for fiscal years 2014 and 2015 showed that 94 percent (109,080) of the 116,409 aliens given a consequence of Warrant or Notice to Appear had an open case status and may remain in the United States, compared to 36 percent of aliens given other consequences. Further, Border Patrol sector officials told us that Border Patrol agents in some sectors may be hesitant to apply the Most Effective and Efficient consequence if it is a criminal prosecution and therefore requires support from DOJ and the federal courts. Specifically, officials from three southwest border sectors, two of which had a relatively high number of apprehensions in fiscal year 2015, told us that the USAO districts with which their sectors are aligned are limited in the number of criminal immigration cases that they will accept from Border Patrol sectors due to capacity and resource constraints of the USMS or the U.S. Courts. For example, criminal prosecution (both standard and streamline) was the Most Effective and Efficient consequence for five different alien classifications in the CDS guide for the Rio Grande Valley sector in fiscal year 2015.
Rio Grande Valley sector officials said that while agents apprehended over 129,000 aliens in fiscal year 2015, the sector can refer only about 40 immigration-related cases each day to the corresponding USAO district (Southern District of Texas) for prosecution. Once this daily limit is reached, agents must apply an alternative consequence that is not the Most Effective and Efficient as defined by the CDS guide. Officials from the USAO Southern District of Texas stated that they limit the number of cases they accept due to limitations in the capacity of the U.S. Courts to provide physical space to conduct trials. Standards for Internal Control in the Federal Government states that managers should assess the risks facing an agency from both external and internal sources and decide how to manage the risk and what actions should be taken. In addition, management should have relevant and reliable operational data to determine whether it is meeting its goals for effective and efficient use of resources. While Border Patrol officials at CDS PMO and across sectors gather perspectives on consequences from agents during the annual development of the CDS guides, Border Patrol does not routinely or comprehensively collect information from agents on why they did not apply the Most Effective and Efficient consequence. Without this information, Border Patrol may not be able to identify and assess appropriate risk responses for addressing agents’ challenges in applying the Most Effective and Efficient consequence or determine any needed modifications to the development of the CDS guides across sectors. With such an assessment, Border Patrol could determine whether actions are needed to change agents’ application of CDS guides or to modify development of the CDS guides to strengthen effectiveness in reducing recidivism. CDS PMO established guidance for sectors to implement CDS, including guidance on estimating the cost of applying CDS consequences.
However, this guidance does not ensure that Border Patrol develops valid cost estimates for CDS consequences. On an annual basis, sector personnel are to estimate the unit cost of applying each available consequence to a single noncriminal, criminal, and family-unit alien within their sector. According to CDS guidance, each sector is to report: average annual salaries for sector personnel, as well as estimates of personnel time spent processing an alien; sector costs for office supplies used to process an alien, such as folders and binders; sector costs associated with facilities used for detaining an alien, such as rent and electricity; sector costs for the housing and care of a detained alien, such as bedding, meals, and toiletries; and sector costs for transporting an alien. These costs are to be based on the sector’s previous year’s expenses. CDS PMO uses the cost estimates to provide data to sector personnel for 4 of the 15 factors they are to evaluate as part of the annual development of the CDS guides. These factors include: (1) the cost per apprehension by alien type; (2) the cost of the consequence per border mile where it is available; (3) the cost per hour of Border Patrol processing time; and (4) the total personnel hours to complete the consequence. Sector personnel are encouraged to review these data when determining the ranking of CDS consequences from Most Effective and Efficient to Least Effective and Efficient. However, our analysis of sector cost estimates identified errors, variations, and omissions in how sectors estimated costs, which limited the utility of the estimates in determining which consequences are Most Effective and Efficient. Since fiscal year 2013, CDS PMO has provided written guidance and workbooks to help sector staff estimate and examine cost differences among the CDS consequences, but these workbooks include calculation errors on housing and care costs that result in incorrect estimates.
Specifically, the workbooks calculate annual housing and facility costs on a per-hour basis, not a per-alien basis, and thus do not properly account for the volume of aliens each sector apprehends in a given year. As a result, Border Patrol staff from the San Diego sector using the workbooks estimated a cost of about $2,366 per noncriminal alien receiving a consequence of reinstatement of a removal order for fiscal year 2015. However, once we accounted for the number of aliens apprehended in the San Diego sector—more than 25,000 aliens in fiscal year 2015—we calculated a cost estimate of $282. Additionally, the housing and care cost estimates do not account for personnel time involved in housing an alien. For example, San Diego sector officials estimated that a noncriminal alien is detained for 36 hours to process a reinstatement of a removal order, but estimated using six hours of personnel time, rather than 42 hours, which would account for both processing and detention time. Further, the guidance does not state which costs sectors should use in their cost estimates for consequences, resulting in variation among sectors. For example, five sectors included facility costs such as electricity, gas, and rent in their cost estimates for the consequence of reinstating a removal order, while three other sectors did not include any facility costs in their estimates for the same consequence. As a result, the reported cost for this consequence for a noncriminal alien ranged from $135 in the Laredo sector to more than $80,000 in the Rio Grande Valley sector (see table 1). CDS PMO officials said that since each sector develops its own CDS guide, differences in how sectors calculate facility costs may not change the relative ranking of consequences as long as each sector is consistent in applying its cost methodology across all consequences.
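The per-hour versus per-alien allocation error described above can be illustrated with simple arithmetic. The figures below are hypothetical placeholders chosen only to show the structure of the error, not the San Diego sector’s actual workbook inputs, so the resulting dollar amounts differ from those the report cites.

```python
# Illustrative arithmetic for the workbook error: allocating a sector's
# annual facility cost per detention hour for a single alien, rather than
# spreading it across every alien the sector apprehends in a year. All
# dollar figures and counts are hypothetical, not actual sector data.

annual_facility_cost = 7_000_000   # hypothetical annual facility cost for a sector
detention_hours = 36               # hours one alien is detained for the consequence
hours_per_year = 24 * 365
aliens_per_year = 25_000           # annual volume the flawed method ignores

# Flawed allocation: treats the facility as if it served one alien at a time.
flawed_per_alien = (annual_facility_cost / hours_per_year) * detention_hours

# Corrected allocation: the annual cost is shared by every alien processed.
corrected_per_alien = annual_facility_cost / aliens_per_year

print(round(flawed_per_alien))     # 28767 -- overstated per-alien cost
print(round(corrected_per_alien))  # 280
```

Because the flawed method scales with detention hours rather than apprehension volume, high-volume sectors overstate per-alien costs the most, which is consistent with the report’s finding that correcting for volume can change a consequence’s relative cost ranking.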
Sector officials also acknowledged that there might be additional errors in how housing and care costs are calculated but were unsure of how the errors would affect the annual development of the guides. However, we determined that a cost estimate of $282 instead of $2,366 for a reinstatement of a removal order for a noncriminal class alien in the San Diego sector would change the relative ranking of this consequence from the third to the fifth most costly consequence, which could affect how Border Patrol agents rank this consequence during the annual development of the CDS guides. As another example, standard prosecution was originally estimated as the least costly consequence for a criminal class alien in the El Paso sector in fiscal year 2015. However, once we accounted for the more than 13,000 aliens apprehended in this sector in fiscal year 2015, we found that standard prosecution would be the most costly consequence compared to other available consequences. CDS program officials also stated that CDS guidance does not require sector staff to include estimates of CDS implementation costs to federal partners. As a result, consequences that Border Patrol considers Most Effective and Efficient may not reflect the optimal use of resources for the federal government overall. For example, by comparing Border Patrol apprehension data to ICE case data, we found that 64 percent of Border Patrol apprehensions in fiscal years 2014 and 2015 required at least some involvement by ICE to support consequences requiring administrative detention and removal of aliens from the United States. Additionally, USMS reported that more than half of all prisoners it received in fiscal year 2015 were from five federal districts along the southwest border (Southern California, Arizona, New Mexico, Southern Texas, and Western Texas) and, by 2017, projected an increase of more than 7,000 prisoners in those districts, primarily for immigration-related offenses.
According to CBP, some consequences, such as criminal prosecution, require the involvement and resources of up to four federal agencies. CDS program officials said that CDS guidance does not require sectors to include federal partner costs because the CDS program was designed around Border Patrol’s resources. However, cost data are readily available for some federal partners involved in implementing CDS consequences, such as ICE and USMS, which provide detention services prior to, and as appropriate during the pendency of, administrative hearings or criminal trials, respectively. For example, ICE reported an estimated housing cost of $122 per day for each alien detained in fiscal year 2015, and we estimated an average daily cost of $76 for detention services provided by USMS in fiscal year 2015 along the southwest border. Including known costs such as these would increase Border Patrol’s cost estimates for consequences that require detention services—such as criminal prosecution or a Warrant or Notice to Appear—and therefore could affect the rankings Border Patrol agents assign these consequences if they were to consider the effectiveness and efficiency of consequences across the federal government. Our Cost Estimating and Assessment Guide states that cost estimates used to support decision making must be logical, credible, and acceptable to a reasonable person, and must avoid subjective judgment about which costs to include. Additionally, the guide states that if cost estimates are to support the comparative ranking of different alternatives, cost elements of alternatives should be estimated to make each alternative’s cost transparent in relation to the others.
Border Patrol would have greater assurance that the consequences ranked as Most Effective and Efficient within the CDS guides accurately reflect cost efficiency by revising the cost estimating guidance provided to sectors to more fully and reliably account for Border Patrol and partner resources, as appropriate and available. Border Patrol established performance measures in fiscal year 2013 to assess each sector’s application of the Most and the Least Effective and Efficient consequences for alien apprehensions; however, the agency does not fully monitor progress against these measures. To assess a sector’s application of the Most Effective and Efficient consequence, Border Patrol calculates the percentage of apprehensions in which agents applied the Most Effective and Efficient consequence to aliens apprehended in that sector. Border Patrol conducts the same calculation to determine a sector’s application of the Least Effective and Efficient consequence. According to CDS PMO officials, sector officials set their own targets for these performance measures based on previous years’ trends related to the application of the Most and the Least Effective and Efficient consequences. According to Border Patrol documentation, sector officials can use these targets to increase their application of the Most Effective and Efficient consequence and to decrease their application of the Least Effective and Efficient consequence over time. Our analysis of Border Patrol data on apprehensions and CDS consequences showed that six of nine sectors missed some of their established performance targets by a range of 1 percentage point to 37 percentage points, as displayed in figure 10. Officials from three of the nine southwest border sectors—Del Rio, El Centro, and San Diego—reported that, as of March 2016, sector management did not monitor the extent to which their agents were applying the consequences defined in CDS guides as Most or Least Effective and Efficient.
CDS PMO officials said that while Border Patrol has a mechanism in place that sector management can use to monitor their progress in meeting performance targets, CDS PMO officials do not ensure that sectors are monitoring performance or report sectors’ performance information to Border Patrol headquarters. CDS PMO officials said that they discontinued monitoring and reporting performance results in fiscal year 2016 because sectors have access to data that would allow them to monitor their own performance targets. Standards for Internal Control in the Federal Government states that management should monitor and assess the quality of performance over time. Additionally, these standards state that information is needed throughout an agency to achieve all of its objectives. Without ensuring that Border Patrol and sector management monitor progress in meeting established performance targets and communicate CDS-related performance information, Border Patrol does not have the information it needs to fully assess the extent to which CDS is achieving its goals of reduced recidivism and cost efficiency. Border Patrol reports that agent classification of aliens into one of the three criminal or four noncriminal classifications pursuant to CDS guidance is critical to selecting the Most Effective and Efficient consequence to deter future illegal border crossings. However, Border Patrol does not have controls in place to fully ensure that aliens are classified in accordance with CDS guidance. Border Patrol guidance to sectors provides definitions and additional details to determine the classification of each apprehension. For example, the guidance states that a first-time apprehension classification may be used for an alien who has been apprehended by another agency. Further, Border Patrol has established CDS data integrity activities at headquarters and at each sector as a control to better ensure the accuracy of data entry by Border Patrol agents and to make any necessary corrections.
CDS PMO officials said that they check the integrity of apprehension data for certain aspects, such as the CDS consequence applied, alien nationality, and gender, to ensure quality and accuracy. According to CDS PMO officials, data integrity checks are done on a weekly basis, and CDS PMO receives a quarterly report of potential errors in the data. CDS PMO then requests that sector staff make corrections to the data as needed. However, our analysis of Border Patrol apprehension data for recidivists from fiscal year 2013 through 2015 showed that Border Patrol did not classify 49,128 of 434,866 apprehensions (11 percent) in accordance with the agency’s guidance. Of these 49,128 apprehensions, 15,309 were for aliens previously apprehended and identified as a type of criminal alien (targeted smuggler, suspected smuggler, or criminal alien) who were subsequently classified either as a noncriminal alien or as a different type of criminal alien, as shown in table 2. For example, 7,929 aliens apprehended from fiscal year 2013 through 2015 were classified as a Criminal Alien (an alien with previous criminal convictions) and then were later re-apprehended and classified as a Persistent Apprehension (a noncriminal class alien arrested four or more times by Border Patrol). According to Border Patrol guidance, agents should classify an apprehension as a Criminal Alien apprehension if the apprehended alien has any prior criminal convictions, whereas agents should only classify an alien as a Persistent Apprehension if another classification is not appropriate. Further, our analysis showed that criminal aliens not classified in accordance with agency guidance were less likely to face prosecution and more likely to be voluntarily returned to their home country than criminal aliens overall.
Specifically, of the approximately 15,000 apprehensions of criminal aliens who were not classified according to CDS guidance between fiscal years 2013 and 2015, 26 percent were recommended for criminal prosecution (3,912 apprehensions) compared to 47 percent of all criminal aliens during that timeframe. Additionally, 24 percent of criminal aliens who were not classified according to CDS guidance between fiscal years 2013 and 2015 received the Least Effective and Efficient consequence of voluntary return to their home country (3,717 apprehensions), as defined in the CDS guides, compared to 9 percent of all criminal aliens classified during that timeframe. CDS PMO officials provided several reasons why agents may not consistently classify a criminal alien, including issues related to guidance, implementation, and oversight. These officials said that agents received oral direction from headquarters to reclassify criminal aliens who cannot be given a consequence of federal prosecution, and that written data integrity guidance to sectors did not include activities for checking the accuracy of alien classifications. Further, officials said that agents may not always take the time to review previous CDS classifications, and may rely on other information sources that are incomplete and change over time, such as national or local lists of aliens identified for targeted enforcement. However, our review of individual aliens’ CDS histories sometimes shows significant variance that may compromise the usefulness of the CDS program. For example, one alien apprehended 54 times in the Rio Grande Valley sector between October 2012 and May 2015 was classified as a First Time Apprehension 6 times, a Second or Third Time Apprehension 4 times, a Persistent Apprehension 22 times, a Suspected Smuggler 15 times, and a Targeted Smuggler 7 times.
Standards for Internal Control in the Federal Government states that accurate and timely recording of events provides relevance and value to management when controlling operations and making decisions. Without correctly classifying alien apprehensions according to its guidance, Border Patrol does not have reasonable assurance that aliens receive the most appropriate consequences and that Border Patrol is most effectively using CDS to address and reduce the threat from smuggling and other criminal activity.

Border Patrol’s implementation of CDS represents a key component of DHS’s efforts to secure the southwest land border from transnational smuggling organizations and other threats. Additional actions on the part of Border Patrol could strengthen implementation and oversight of the CDS program. Specifically, measuring recidivism using an alien’s apprehension history beyond one fiscal year and adjusting for aliens with no record of removal who may remain in the United States after apprehension would give Border Patrol a more complete assessment of CDS performance, which in turn would allow Border Patrol leadership to more effectively evaluate the extent to which CDS is supporting its goal of securing the southwest border. Additionally, collecting information on reasons agents do not apply the Most Effective and Efficient consequence identified in sectors’ CDS guides could provide important information about how to increase agents’ application of these consequences or allow Border Patrol to consider how factors such as federal partners’ capacity constraints may further inform a need to modify the development process for each sector’s CDS guide.
Revising guidance to sectors for estimating costs to ensure these costs are accurately calculated across consequences and inclusive of partner agencies’ costs, where appropriate and available, would also help ensure that sector staff and leadership are using valid information in determining which consequences are Most Effective and Efficient during the annual development of the CDS guides. Finally, mechanisms to monitor, manage, and communicate results of sector performance, alien classification, and data integrity efforts would provide Border Patrol with greater assurance that CDS is functioning as intended.

To better inform on the effectiveness of CDS implementation and border security efforts, we recommend that the Chief of Border Patrol take the following five actions:

- strengthen the methodology for calculating recidivism, such as by using an alien’s apprehension history beyond one fiscal year and excluding aliens for whom there is no record of removal and who may remain in the United States;
- collect information on reasons agents do not apply the CDS guides’ Most Effective and Efficient consequences to assess the extent that agents’ application of these consequences can be increased, and modify development of CDS guides, as appropriate;
- revise CDS guidance to ensure consistent and accurate methodologies for estimating Border Patrol costs across consequences and to factor in, where appropriate and available, the relative costs of any federal partner resources necessary to implement each consequence;
- ensure that sector management is monitoring progress in meeting performance targets and communicating performance results to Border Patrol headquarters management; and
- provide consistent guidance for alien classification and take steps to ensure CDS PMO and sector management conduct data integrity activities necessary to strengthen control over the classification of aliens.
Additionally, we recommend the Secretary of Homeland Security direct the Assistant Secretary of ICE and Commissioner of CBP to collaborate on sharing immigration enforcement and removal data to help Border Patrol account for the removal status of apprehended aliens in its recidivism rate measure.

We provided a draft of this report to DHS and DOJ for their review and comment. DOJ indicated that it did not have any formal comments on the draft report in a December 13, 2016, email from the department’s Audit Liaison. DHS provided written comments, which are noted below and reproduced in full in appendix III, and technical comments, which we incorporated as appropriate. DHS concurred with five of the six recommendations in the report and described actions underway or planned to address them. DHS did not concur with one recommendation in the report.

With regard to the first recommendation, to strengthen its methodology for calculating recidivism such as by using an alien's apprehension history beyond one fiscal year and excluding aliens for whom there is no record of removal and who may remain in the United States, DHS did not concur. DHS noted that CDS uses annual recidivism rate calculations to measure annual change, that the rate is not intended or used as a performance measure for CDS, and that Border Patrol annually reevaluates the CDS to ensure that the methodology for calculating recidivism provides the most effective and efficient post-apprehension outcomes. DHS stated that external factors can affect the consequences available to each sector, which may change over time, and thus using the recidivism rate for multiple years would not benefit Border Patrol. Additionally, DHS noted that the support Border Patrol provides to its partners is not impacted by the aliens for whom there is no record of removal and who may remain in the United States.
DHS stated that removing these individuals from the recidivism formula would not affect the consequence given to a specific alien. DHS requested that we consider this recommendation resolved and closed. We continue to believe that Border Patrol should strengthen its methodology for calculating recidivism because, as noted in this report, the recidivism rate is used as a performance measure by Border Patrol and DHS. Strengthening the recidivism rate methodology, such as by using an alien’s apprehension history beyond one fiscal year, would not preclude its use for CDS as a measure of annual change, and would provide Border Patrol a more complete assessment of the rate of change in recidivism. Further, while DHS stated that excluding individuals from the recidivism formula would not affect the consequence given to an alien, recidivism is one of the factors considered by sectors when developing their CDS guides each year, and more complete information would help ensure that Border Patrol’s risk assessments are accurate and that the decisions made based upon these risk assessments are sound. This, in turn, would allow Border Patrol leadership to more effectively evaluate the extent to which CDS is supporting its goal of securing the border and better inform the effectiveness of CDS implementation and border security efforts.

With regard to the second recommendation, to collect information on reasons agents do not apply the CDS guides' Most Effective and Efficient consequences to assess the extent that agents' application of these consequences can be increased and modify development of CDS guides, as appropriate, DHS concurred. DHS stated that each year CDS PMO will interview subject matter experts from each sector to discuss the situations where the Most Effective and Efficient consequence is not applied, for inclusion in the annual development of each sector's CDS guide. DHS provided an estimated completion date of September 30, 2017.
Depending on the methodology used by the subject matter experts to collect the information needed to assess further actions to increase agent application of the Most Effective and Efficient consequence or modify CDS guides, these planned actions, if fully implemented, should address the intent of the recommendation.

With regard to the third recommendation, to revise CDS guidance to ensure consistent and accurate methodologies for estimating Border Patrol costs across consequences and to factor in, where appropriate and available, the relative costs of any federal partner resources necessary to implement each consequence, DHS concurred. DHS stated that CDS PMO will add sector apprehension data to the "Cost per Apprehension" factor, to account for the volume of apprehensions each year, and will meet with sectors to assist with cost estimates prior to the development of their CDS guides. DHS provided an estimated completion date of July 31, 2017. However, DHS further stated that the relative costs of its federal partners' resources are irrelevant for CDS purposes because the program is Border Patrol specific, and that an attempt to associate costs to resources spent by other federal agencies would not be prudent. We continue to encourage Border Patrol to consider available federal partner costs incurred in supporting CDS consequences. As reflected in its comments, DHS stated that Border Patrol relies on federal partners in order to apply the Most Effective and Efficient consequences, that the application of consequences requires a holistic approach, and that Border Patrol cannot effectively and efficiently achieve its mission without the assistance of partnering agencies.
As Border Patrol has moved away from applying the Border Patrol-specific consequence of Voluntary Return toward consequences requiring support from, and costs incurred by, federal partners, including these costs would provide greater assurance that the consequences Border Patrol ranked as Most Effective and Efficient within the CDS guides accurately reflect cost efficiency. Further, to the extent that Border Patrol accounts for available federal partner costs as appropriate, these planned actions, if fully implemented, should address the intent of the recommendation.

With regard to the fourth recommendation, to ensure that sector management is monitoring progress in meeting their performance targets and communicating performance results to Border Patrol headquarters management, DHS concurred. DHS stated that CDS PMO will reinstitute quarterly sector performance progress reports that will include sectors’ classification, recidivism, average apprehension per recidivist, and displacement rates. DHS provided an estimated completion date of September 30, 2017. These planned actions, if fully implemented and communicated to Border Patrol headquarters management, should address the intent of the recommendation.

With regard to the fifth recommendation, to provide consistent guidance for alien classification and take steps to ensure CDS PMO and sector management conduct data integrity activities necessary to strengthen control over the classification of aliens, DHS concurred. DHS stated that CDS PMO will work with Border Patrol’s Enforcement Systems Division to implement a program or rule within Border Patrol's system of record that will allow the processing agent and supervisor to identify the alien's previous CDS classification and to ensure accuracy and compliance. DHS provided an estimated completion date of September 30, 2017. This planned action, if fully implemented, should address the intent of the recommendation.
With regard to the sixth recommendation, that the Secretary direct the Assistant Secretary of ICE and Commissioner of CBP to collaborate on sharing immigration enforcement and removal data to help Border Patrol account for the removal status of apprehended aliens in its recidivism rate measure, DHS concurred. DHS stated that collecting and analyzing ICE removal and enforcement data would not be advantageous to Border Patrol for CDS purposes since CDS is specific to Border Patrol. However, DHS also stated that CDS PMO and ICE have discussed the availability of the removal and enforcement data and ICE has agreed to provide Border Patrol with these data, if needed. DHS requested that we consider this recommendation resolved and closed. While DHS’s planned actions are a positive step toward addressing our recommendation, DHS needs to provide documentation of completion of these actions for us to consider the recommendation closed as implemented.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Attorney General of the United States, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix IV.

Border Patrol collects and analyzes data on the number and classification of apprehended aliens and the Border Patrol sector in which the alien was apprehended. In addition, ICE collects and maintains data on the case status of apprehended aliens, including if and when an alien was removed from the United States.
We used these Border Patrol and ICE data to calculate recidivism using Border Patrol’s methodology and also using three alternative methods. Specifically, we calculated a recidivism rate using (1) Border Patrol’s method of considering aliens’ apprehension history within the fiscal year; (2) aliens’ apprehension history over 3 years (fiscal years 2013 through 2015); (3) Border Patrol’s method of considering aliens’ apprehension history only within the fiscal year after excluding aliens who ICE data indicate have not been removed and may remain in the United States; and (4) aliens’ apprehension history over 3 years after excluding aliens who ICE data indicate have not been removed and may remain in the United States. Figure 11 provides an overview of Border Patrol’s recidivism rate calculation as well as the three alternative methods we used to determine the extent to which Border Patrol’s measure of recidivism assesses CDS effectiveness. For each of the nine Border Patrol sectors along the southwest border and each of the seven alien classifications, tables 3 and 4 provide: the unique number of apprehended aliens; the recidivism rate based on Border Patrol’s methodology (aliens’ apprehension history within fiscal year 2015); the recidivism rate based on aliens’ apprehension history over 3 years (fiscal years 2013 through 2015); the recidivism rate based on aliens’ apprehension history over 3 years (fiscal years 2013 through 2015), excluding aliens who ICE data show have not been removed and may remain in the United States; and the percentage of apprehended aliens who ICE data show have not been removed and may remain in the United States. Table 3 presents data on apprehensions, recidivism, and aliens who may remain in the United States by the sector of apprehension for fiscal year 2015.
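The four calculations above differ only in the apprehension window used and in whether aliens with no record of removal are excluded. The sketch below illustrates that idea under deliberately simplified assumptions (apprehensions as alien ID/fiscal year pairs and a set of removed alien IDs); it is not Border Patrol's or GAO's actual code or data.

```python
# A minimal sketch of the four recidivism calculations compared above.
# Inputs are simplified assumptions: apprehensions is a list of
# (alien_id, fiscal_year) pairs, and removed is the set of alien IDs with
# an ICE record of removal. Not Border Patrol's or GAO's actual code.

def recidivism_rate(apprehensions, years, exclude_unremoved=False, removed=frozenset()):
    """Percent of unique aliens apprehended in the window who were
    apprehended more than once, optionally excluding aliens with no
    record of removal (who may remain in the United States)."""
    counts = {}
    for alien_id, fy in apprehensions:
        if fy in years:
            counts[alien_id] = counts.get(alien_id, 0) + 1
    if exclude_unremoved:
        counts = {a: n for a, n in counts.items() if a in removed}
    if not counts:
        return 0.0
    return 100.0 * sum(1 for n in counts.values() if n > 1) / len(counts)
```

In these terms, method 1 corresponds to `recidivism_rate(apps, {2015})`; method 2 widens the window to `{2013, 2014, 2015}`; and methods 3 and 4 repeat each with `exclude_unremoved=True`.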
As the table illustrates, sectors varied significantly in the volume of unique aliens apprehended in fiscal year 2015, ranging from a low of fewer than 5,000 unique aliens apprehended in the Big Bend sector to a high of more than 113,000 in the Rio Grande Valley sector. Using Border Patrol’s methodology, which considers only recidivists within the fiscal year, the San Diego sector had the highest rate of recidivism at 26 percent in fiscal year 2015. Our alternative analysis of recidivism rates using alien apprehension history over 3 years (fiscal years 2013 through 2015) showed that the San Diego sector had the highest rate of recidivism at 45 percent. In fiscal year 2015, the Big Bend sector had the lowest rate of recidivism both when considering only recidivists within the fiscal year (2 percent) and when analyzing recidivism using alien apprehension history over the 3 years (13 percent). Our analysis of recidivism rates using alien apprehension history over 3 years after excluding aliens who may remain in the United States showed that the San Diego sector had the highest recidivism rate (45 percent) in fiscal year 2015 and the Del Rio sector had the lowest recidivism rate (14 percent). The percentage of aliens apprehended by Border Patrol in fiscal year 2015 who ICE data show had not been removed and may remain in the United States as of May 2016 ranged from a high of 53 percent of apprehended aliens in the Yuma sector to a low of 22 percent in the Laredo sector.

Table 4 presents apprehension, recidivism, and removal data by alien classification for an alien’s most recent apprehension in fiscal year 2015. As the table illustrates, aliens classified as first-time apprehensions represented the majority of unique apprehensions, accounting for more than 141,000 unique aliens apprehended in fiscal year 2015.
In contrast, targeted smuggler apprehensions were the least common type of apprehension, with about 2,100 unique aliens classified as targeted smugglers. Recidivism rates by alien classification varied across methodologies. Aliens classified as Persistent Apprehensions had the highest rate of recidivism using Border Patrol’s methodology considering only recidivists within fiscal year 2015 (15 percent). However, aliens classified as Targeted Smugglers had the highest rate of recidivism considering aliens’ apprehension history over the 3 years ending in fiscal year 2015 (73 percent). In contrast, aliens classified as First-Time Apprehensions had the lowest rate of recidivism both considering only recidivists within the fiscal year (1 percent) and using apprehension history over 3 years (2 percent). Further, our analysis of recidivism rates after excluding aliens who may remain in the United States and considering aliens’ apprehension history over 3 years showed that aliens classified as Targeted Smugglers had the highest rate of recidivism (66 percent). The extent to which aliens apprehended in fiscal year 2015 may remain in the United States ranged from a high of 93 percent for aliens classified as a Family Unit Apprehension to a low of 11 percent for aliens classified as a Second-or-Third Time Apprehension.

Border Patrol agents implement CDS by classifying eligible apprehended aliens into one of seven noncriminal or criminal categories based on the circumstances of their apprehension and then applying one or more of eight different criminal, administrative, and programmatic consequences. To assist Border Patrol agents in selecting the most appropriate consequence, Border Patrol rank orders these consequences from Most Effective and Efficient to Least Effective and Efficient for each alien classification and presents this information in an annual CDS guide for each Border Patrol sector.
Table 5 provides an overview of the frequency with which each CDS consequence was identified in CDS guides as Most Effective and Efficient for all nine southwest Border Patrol sectors across all alien classifications and fiscal years 2013 through 2015. To different extents depending on the sector and year, seven of eight consequences were identified as Most Effective and Efficient for one or more types of alien populations. The eighth consequence—Voluntary Return—was never identified as a Most Effective and Efficient consequence from fiscal year 2013 through fiscal year 2015 and was identified as the Least Effective and Efficient consequence across all sectors for all noncriminal classifications during those years. Among the three categories of consequences—Administrative, Criminal, and Programmatic—administrative consequences were most frequently identified (60 percent) as Most Effective and Efficient in CDS guides, followed by criminal consequences (37 percent) and the programmatic consequence (3 percent). Among the eight consequences within these three categories, Warrant or Notice to Appear was most frequently identified as Most Effective and Efficient (36 percent), followed by Standard Prosecution (26 percent). Excluding Voluntary Return, the extent to which the remaining five consequences were identified as the Most Effective and Efficient ranged from 2 percent to 13 percent. Over these years, more sector CDS guides moved toward identifying Standard Prosecution as Most Effective and Efficient, and moved away from the administrative consequence of Warrant or Notice to Appear, as shown in table 6. Figures 12 and 13 show the Most Effective and Efficient consequences identified in each southwest Border Patrol sector’s CDS guide for fiscal years 2013 through 2015 by types of noncriminal aliens and criminal aliens, respectively.
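The frequency figures above come from tallying, across sectors, fiscal years, and alien classifications, which consequence each CDS guide ranks as Most Effective and Efficient. The sketch below illustrates that tally with hypothetical guide entries; the record layout and the example data are illustrative assumptions, not actual CDS guide contents.

```python
# A hypothetical sketch of the tally behind the frequencies above: counting,
# across sectors, fiscal years, and alien classifications, which consequence
# each CDS guide ranks as Most Effective and Efficient. Guide entries used
# with this function are illustrative, not actual CDS data.
from collections import Counter

def tally_most_effective(guides):
    """guides: iterable of (sector, fiscal_year, alien_class, top_consequence).
    Returns each consequence's share (rounded percent) of top rankings."""
    tally = Counter(top for _sector, _fy, _cls, top in guides)
    total = sum(tally.values())
    return {consequence: round(100.0 * n / total) for consequence, n in tally.items()}
```

With real guide data, a tally of this kind would reproduce shares such as the 36 percent for Warrant or Notice to Appear and 26 percent for Standard Prosecution reported above.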
For alien family apprehensions, CDS guides consistently identified only administrative consequences as Most Effective and Efficient, primarily Warrant or Notice to Appear. In addition to the contact named above, Lacinda C. Ayers (Assistant Director), Giselle Cubillos-Moraga, Kathleen Donovan, Cynthia Grant, Michele Fejfar, Eric Hauswirth, Susan Hsu, DuEwa Kamara, Jon Najmi, Christine San, and Mike Tropauer made key contributions to this report.
To address smuggling along the U.S. southwest border, the U.S. Border Patrol developed CDS—a process to classify each apprehended alien into criminal or noncriminal categories and apply consequences, such as federal prosecution. Each Border Patrol sector ranks up to eight consequences from Most to Least Effective and Efficient to reduce recidivism. GAO was asked to review and assess Border Patrol's implementation of CDS across the southwest border. This report examines the extent to which Border Patrol (1) has a methodology for calculating recidivism that allows it to assess CDS program effectiveness, (2) applied consequences it determined to be most effective and efficient in each southwest border sector, and (3) established guidance and controls to monitor field implementation of CDS. GAO analyzed Border Patrol's recidivism rate methodology; apprehension data and CDS application along the southwest border for fiscal years 2013 through 2015, the most recently available data; and DHS's U.S. Immigration and Customs Enforcement data on alien removals. GAO also interviewed Border Patrol personnel and reviewed CDS guidance. The U.S. Border Patrol (Border Patrol), an office within the Department of Homeland Security's (DHS) U.S. Customs and Border Protection, uses an annual recidivism rate to measure performance of the Consequence Delivery System (CDS)—a process that identifies consequences as Most Effective and Efficient to deter illegal cross-border activity in each sector. However, methodological weaknesses limit the rate's usefulness for assessing CDS effectiveness. GAO found that Border Patrol's methodology does not account for an alien's apprehension history beyond one fiscal year and neither accounts for nor excludes apprehended aliens for whom there is no record of removal after apprehension and who may have remained in the United States without an opportunity to recidivate.
GAO's analysis of recidivism for fiscal year 2015 considering these factors showed a 29 percent recidivism rate, compared to Border Patrol's 14 percent recidivism rate. Border Patrol could more accurately assess recidivism and CDS effectiveness by strengthening its recidivism rate methodology, such as by using an alien's apprehension history beyond one fiscal year and excluding aliens for whom there is no record of removal from the United States. Agent application of consequences Border Patrol identified in CDS guidance as the Most Effective and Efficient has declined from 28 percent in fiscal year 2013 to 18 percent in fiscal year 2015 across the southwest border. In addition, Border Patrol has not assessed reasons for the relatively low application of consequences determined to be the Most Effective and Efficient consequence in each sector, but some agency officials stated that challenges include agents' hesitation to apply consequences that require referral to federal partners facing capacity constraints, such as Department of Justice immigration courts. Assessing why agents do not apply the Most Effective and Efficient consequence could inform Border Patrol of actions needed to increase application of Most Effective and Efficient consequences to reduce recidivism. Border Patrol established some mechanisms to facilitate monitoring field implementation of CDS, but lacked controls to ensure effective performance management. For example, six of nine field locations missed performance targets for application of the Most Effective and Efficient consequences in fiscal year 2015, as shown in the figure below. Ensuring consistent oversight of performance management would provide greater assurance that Border Patrol is most effectively using CDS to address cross-border illegal activity.
GAO is making six recommendations to strengthen the methodology for measuring recidivism, assess reasons agents do not apply CDS guides' Most Effective and Efficient consequence, and ensure performance management oversight. DHS concurred with all but one recommendation, which relates to strengthening its recidivism methodology, citing other means to measure CDS performance. GAO believes the recommendation remains valid, as discussed in the report.
Human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS), tuberculosis (TB), and malaria are devastating millions of individuals and families, thousands of communities, and dozens of nations around the world, according to the UN’s WHO. HIV, the retrovirus that causes AIDS, is usually transmitted (1) sexually; (2) from mothers to children before or at birth or through breastfeeding; or (3) through contact with contaminated blood, such as through the use of contaminated hypodermic needles. In 2004, AIDS led to between 2.8 and 3.5 million deaths, most of them in sub-Saharan Africa, which is home to more than 60 percent of people living with the virus. The number of people infected with HIV has risen in every region of the world, with the steepest increases occurring in East Asia, Eastern Europe, and Central Asia. In China, HIV/AIDS is now found in all 31 provinces, autonomous regions, and municipalities; and in India, as of 2003, 2.5 to 8.5 million people had been infected. In Eastern Europe and Central Asia, the number of HIV-positive people has risen ninefold in less than 10 years. TB, a bacterial infection transmitted by inhalation of airborne organisms, ranks just behind HIV/AIDS as the leading infectious cause of adult mortality, each year killing up to 2 million people, mostly between the ages of 15 and 54 years. It is the most common killer of people whose immune systems are compromised by HIV. Malaria, caused by a parasite, is transmitted in human populations through the bite of infected mosquitoes. The disease kills more than one million people per year, mostly young African children. The Global Fund, established as a private foundation in Switzerland in 2002, was created as a partnership between governments, civil society, the private sector, and affected communities to increase resources to fight the three diseases.
As shown in table 1, 45 percent of the Global Fund’s 271 grants, as of April 15, 2005, were focused on HIV/AIDS; 45 percent went to recipients in sub-Saharan Africa; and 59 percent went to government recipients. In March 2005, the Global Fund reported that across all grants, it had provided antiretroviral treatment to 130,000 people with AIDS; tested more than one million people voluntarily for HIV; supported 385,000 TB patients with directly observed short-course treatment; given more than 300,000 people new, more effective treatments for malaria; and supplied more than 1.35 million families with insecticide-treated mosquito nets. The Global Fund’s key principles are to (1) operate as a financial instrument, not an implementing entity; (2) make available and leverage additional resources; (3) support programs that evolve from national plans and priorities; (4) operate in a balanced manner with respect to geographic regions, diseases, and health-care interventions; (5) pursue an integrated and balanced approach to prevention, treatment, care and support; (6) evaluate proposals through an independent review process; and (7) operate in a transparent and accountable manner and employ a simplified, rapid, and innovative grant-making process. Numerous entities participate in the Global Fund’s processes for managing grants. The Global Fund manages its grants in two phases, generally over a 5-year period. During phase 1, the Global Fund signs a 2-year grant agreement with the principal recipient and periodically reviews recipients’ performance to determine whether to disburse additional funds. Near the end of phase 1, the board reviews the grant’s progress to determine whether to renew the grant for an additional 3 years; if the board approves continued funding, the grant enters phase 2. The Global Fund board approved the first round of grants in April 2002 and approved 33 grants to enter phase 2 as of April 25, 2005.
The following entities participate in the Global Fund’s grant management process (see fig. 1). A country coordinating mechanism (CCM) representing country-level stakeholders submits grant proposals to the Global Fund and nominates a principal recipient to be responsible for implementing the grant. According to the Global Fund, the CCM should be made up of high-level host government representatives, representatives of nongovernmental organizations (NGO), multilateral and bilateral donors, the private sector, and individuals living with HIV/AIDS, TB, or malaria. CCMs are to develop and forward grant proposals to the Global Fund, monitor grant implementation, and advise the Global Fund on the viability of grants for continued funding after 2 years. The principal recipient is a local entity nominated by the CCM that signs an agreement with the Global Fund to implement a grant in a recipient country. There may be multiple public and private principal recipients for a single grant. The principal recipient is responsible for overseeing the activities of any subrecipients implementing grant activities and for distributing grant money to them. The secretariat is responsible for the Global Fund’s day-to-day operations, including managing the grant proposal process; overseeing and managing grant implementation to ensure financial and programmatic accountability; and acting as a liaison between grant recipients and bilateral, multilateral, and nongovernmental partners to ensure that activities at the country level receive necessary technical assistance and are well coordinated. As of April 15, 2005, the secretariat had 165 staff. Within the secretariat, the fund portfolio manager, or grant manager, is responsible for reviewing grant progress and deciding whether to disburse additional funds to the principal recipient. The secretariat reports to the Global Fund’s board of directors. 
The 23-member board is responsible for overall governance of the Global Fund and approval of grants. The board includes 19 voting representatives of donor and recipient governments, NGOs, the private sector (including businesses and foundations), and affected communities. Key international development partners, including WHO, the Joint UN Programme on HIV/AIDS (UNAIDS), and the World Bank, participate as nonvoting members. The World Bank also serves as the Global Fund's trustee. The local fund agent is the Global Fund’s representative in each recipient country and is responsible for financial and program oversight of grant recipients. This oversight role includes an assessment of recipients prior to their receiving money from the Global Fund. To date, the Global Fund has contracted with the following entities to serve as local fund agents: four private firms, KPMG, PricewaterhouseCoopers (PWC), Chemonics International, Inc., and Deloitte Emerging Markets; one private foundation that was formerly a public corporation, Crown Agents; the Swiss Tropical Institute; and two multilateral entities, the World Bank and the UN Office for Project Services (UNOPS). PWC and KPMG serve as the local fund agents in 91 of the 110 countries for which the Global Fund has contracted local fund agents. After the board approves a proposal submitted by a CCM and vetted by an independent, multinational technical review panel, typically for a 5-year grant, the secretariat signs a 2-year grant agreement with the principal recipient. This initial 2-year period represents phase 1 of the grant; if the board approves continued funding, the grant enters phase 2. 
For grants approved in 2002 and 2003, the local fund agent conducted, or contracted with other entities to conduct, assessments of the recipient’s capacity to (1) manage, evaluate, and report on program activities; (2) manage and account for funds, including disbursing to subrecipients; and (3) procure goods and services and maintain a reliable supply of drugs and other commodities. The local fund agent initially conducted these assessments after the signing of the grant agreement but now conducts them before the Global Fund signs an agreement with a principal recipient. After the local fund agent determines that the results of its assessments are satisfactory, the Global Fund instructs the World Bank to disburse the first tranche of funds to the principal recipient. According to its policy, the Global Fund disburses subsequent tranches based on performance to ensure that investments are made where impact in alleviating the burden of HIV/AIDS, TB, and malaria can be achieved. During the grant period, the portfolio managers are to link disbursements to periodic demonstrations of program progress and financial accountability. The grant agreements initially specified that principal recipients would report their progress and request additional disbursements on a quarterly basis. In July 2004, the secretariat changed the default reporting/disbursement request cycle to every 6 months. As of April 15, 2005, about 20 percent of the Global Fund’s grants were on a 6-month schedule. According to the secretariat, some grant recipients may choose to remain on a quarterly schedule or the secretariat may decide, based on a grant’s risk profile, to disburse only one quarter at a time. According to secretariat officials, grant managers use four sources of information to determine whether to disburse additional funds to grant recipients. Recipient progress reports. 
Principal recipients submit progress reports on meeting designated targets along with requests for further funding at the end of each disbursement period. If program results or expenses differ significantly from plans attached to the grant agreement, the principal recipient is to explain the reasons for these deviations and may also provide an overview of other program results achieved, potential issues and lessons learned, as well as any planned changes in the program and budget. The recipient forwards its progress report and disbursement request to the Global Fund secretariat through the local fund agent. Recipient expenditure data. The progress reports contain cash-flow information. Principal recipients are to outline expenditures for the previous disbursement period, comparing amounts budgeted for grant activities with amounts spent. Recipients are then to reconcile expenditures and provide a current cash balance. Budgets may vary from initial projections, owing to cost savings, additional expenditures, or currency fluctuations. Local fund agent assessments. The local fund agent reviews and validates the information in the progress update, performs ad hoc verifications of program performance and financial accountability, and advises the Global Fund on the next disbursement. Local fund agents are to highlight achievements and potential problems to support their advice and may identify performance gaps to be addressed. Representatives of one local fund agent, which covers grants in 29 countries, said that they base their disbursement recommendations on two considerations: (1) the Global Fund’s level of risk in making additional disbursements to a recipient that uses funds ineffectively and (2) the immediate effect of withholding disbursement on program implementation, including the delivery of disease-mitigating services. These representatives said that, overall, they strive to tie the progress update to projected results in the grant agreement. 
Contextual information. The secretariat also uses additional information relevant to interpreting grant progress, such as news of civil unrest, political disturbance, allegations of corruption, conflict, major currency crisis, change of principal recipient, and natural disasters. A secretariat official said that the secretariat did not document requirements for such information for phase-1 decisions but did allow grant managers to consider any information that would adversely affect grant implementation in their decisions to disburse. This information is typically obtained through informal communications with grant recipients, bilateral and multilateral donors, or other development partners, according to secretariat officials. The principal recipient is to provide the CCM with copies of its disbursement requests and progress reports, and CCM members may comment on the progress of implementation based on their local knowledge and experience, either through the local fund agent or directly to the secretariat. If the secretariat decides to approve the disbursement request, it may specify the level of disbursement and actions that the principal recipient must take. The secretariat then instructs the World Bank to make the disbursement. The secretariat may also decide not to approve the disbursement request. When a grant reaches its sixteenth month, the Global Fund invites the CCM to submit a request for continued funding for the period following the initial 2 years. The Global Fund refers to this period as phase 2 of the grant (see fig. 2). The CCM is to submit its request to the Global Fund by month 18, and the secretariat is to evaluate the CCM’s request using the four sources of information described earlier. Based on its assessment of this information, as informed by its professional judgment, the secretariat gives the grant one of four scores, as shown in figure 3. 
It then provides its assessment and recommendation—called a grant scorecard—to the board regarding approval of the request, and the board decides on the request by month 20. If the board approves the request, the principal recipient and the Global Fund negotiate and sign a grant agreement extension over the next 2 months. At month 22, the Global Fund instructs the World Bank to make the first phase-2 disbursement. The secretariat sends its recommended scores to the board members, who vote on the recommendations via e-mail. A “go” decision means that the Global Fund approves proceeding to phase 2. “Conditional go” means that the Global Fund approves proceeding to phase 2 after the principal recipient undertakes specific actions within the time frame specified. “Revised go” means that the principal recipient must reprogram the grant and substantially revise the targets and budgets for phase 2. “No go” means that the Global Fund does not approve the grant’s proceeding to phase 2. Currently, recipients denied further funding (“no go”) cannot formally appeal the board’s decision. However, a board subcommittee may consider a formal appeal process. If the “no go” decision affects patients on lifelong treatment, the principal recipient may be eligible to receive funding to sustain treatment for 2 more years. As figure 4 shows, the Global Fund board approved the first round of grant proposals in April 2002. The second, third, and fourth rounds were approved in January 2003, October 2003, and June 2004, respectively. The board is expected to approve a fifth round of proposals in September 2005. As of April 25, 2005, the secretariat had reviewed 36 grants that became eligible for continued funding under phase 2. The board approved 20 grants, conditionally approved 13, denied 1, and is still considering 2. The board will continue to evaluate grants for phase 2 on a rolling basis as they become eligible. 
According to Global Fund officials and other knowledgeable entities, recipient countries’ capacity to implement grants was an underlying factor in grant performance. In addition, principal recipients for the 38 grants as well as Global Fund and development partner officials frequently cited four factors associated with challenges or successes in grant performance: (1) guidance, (2) coordination, (3) planning, and (4) contracting and procurement. We found no significant association between the type of principal recipient, grant size, or disease targeted and the percentage of a grant’s funds disbursed to the principal recipient. Global Fund and development partner officials cited limited capacity in recipient countries as an underlying factor that can negatively affect grant performance. Global Fund grant managers said that in many cases, grants experienced early delays because of weaknesses in recipients’ financial, procurement, and monitoring and evaluation systems. For example, Indonesia’s local fund agent found the principal recipient’s management and financial plans to be insufficient and asked the recipient to rework them seven times before the local fund agent recommended grant disbursements. Also in Indonesia, TB spending increased fivefold, greatly straining capacity, particularly for monitoring and evaluating activities at the district level. In Kenya, a lack of designated, adequately trained staff at the principal recipient (the ministry of finance) and immediate subrecipients (the ministry of health and the National AIDS Control Council) slowed disbursements from the principal recipient and from the immediate subrecipients to implementing organizations. 
Ethiopia has been slow in implementing its first three grants, particularly those for TB and malaria, owing to lack of monitoring and reporting capacity within the ministry of health, delays in recruiting staff to manage financial systems, slow decision-making processes, delays in starting the procurement process, and cumbersome procurement procedures. Despite limited overall capacity in recipient countries, we found instances where recipient governments had worked with development partners to strengthen capacity in the health sector, thus facilitating grant performance. For example, according to the Indonesian government and WHO officials, the Global Fund grant in Indonesia is building on a strong foundation, using the country’s 5-year strategic plan for TB, a joint effort of the Indonesian government and WHO. Between 2000 and 2003, the Dutch government helped train TB “soldiers” in Indonesian provinces, which improved outreach and case-detection efforts under the Global Fund grant; Indonesia’s ministry of health had already established mechanisms to quickly disburse funds to districts. In addition, the Zambian government has worked with donors and other development partners to strengthen its health sector financing mechanisms. As a result, donors, including the Global Fund, contribute directly to an existing mechanism, the pooled health sector “basket,” and use the health sector donor group overseeing these funds to monitor and evaluate grant progress in meeting targets. Zambia’s health sector also has mechanisms in place to quickly channel funds to the country’s more than 70 districts. In Mongolia, the local fund agent reported that the principal recipient and subrecipients had adequate financial management systems in place to account for funds and that the principal recipient could immediately start implementing the program with little, if any, technical assistance. 
Some countries and grant recipients are also seeking to strengthen their capacity through Global Fund grants, according to the Global Fund and principal recipients. Grant recipients frequently reported that a lack of guidance from the recipient country’s government or the Global Fund caused them to fall short of grant targets. For example, recipients in three countries reported that they could not meet their targets because they had not received approved national treatment guidelines. Indonesia’s ministry of health did not have guidelines ready for the voluntary counseling and testing component of its HIV grant, delaying distribution of information to the provinces. Senegal’s ministry of health, another principal recipient, did not have treatment plans needed for implementing its malaria grant, preventing the principal recipient from receiving antimalarial medication. In addition, some stakeholders reported that guidance from the Global Fund was lacking or unclear or that they encountered difficulties with Global Fund grant policies. For example, in at least one instance, U.S. government officials reported that spending delays in Kenya resulted from unclear guidance from the Global Fund regarding altering programs to allow the use of newer, more effective but expensive malaria drugs. The Global Fund recognized that procedures for early grants were unclear and that this lack of clarity caused program delays. Further, WHO officials and at least one recipient voiced concerns over lack of flexibility when recipients sought to modify grant activities. For example, one subrecipient in Thailand expressed concern that it could not use Global Fund money to build or maintain a shelter for HIV-positive women because this type of activity was not written into the grant. Grant recipients also said that continued staff turnover in the Global Fund’s grant management teams made it difficult to receive clear, consistent guidance. 
For example, recipients in Thailand said that they had worked with four different grant managers over the life of their grants and that this turnover had complicated communication. However, several grant recipients reported that, under certain circumstances, Global Fund guidance allowed them to quickly redirect funds to meet existing targets. For example, the principal recipient in Indonesia cited grant flexibility as a factor positively affecting performance in both its TB and HIV/AIDS round-1 grants, because this flexibility allowed it to adjust its funding priorities in line with its targets. Similarly, in Thailand, one subrecipient stated that the Global Fund allowed it to change training modules to meet educational needs, contributing to success. The Global Fund secretariat reported that, in some cases, poor coordination negatively affected grant implementation. For example, in Ghana, internal rivalries between ministry of health units with different responsibilities in the program are slowing implementation. In Senegal, the Global Fund reported that the principal recipient did not meet its target for coordinating and developing partnerships to promote community-based programs for combating malaria. However, effective coordination between grant recipients and local community groups or development partners sometimes contributed to recipients’ meeting or exceeding their goals. Zambia’s HIV/AIDS grant exceeded its targets for training and provision of services because of development partner, NGO, and private sector contributions. Similarly, in Kenya, one NGO principal recipient leveraged the activities of other groups providing HIV care kits. Another recipient in Kenya exceeded its targets for condom distribution by working with local intermediaries to increase demand by approaching new types of clients, such as shoe shiners, open vehicle cleaners, security officers, staff at petrol stations, and young men at salons. 
In Indonesia, a grant subrecipient was able to provide treatment to a larger number of TB patients by partnering with private physicians, because a significant number of patients sought treatment at private clinics. Planning difficulties affected some recipients’ ability to meet grant targets. Recipients reported that they sometimes did not achieve targets for a variety of reasons, including not budgeting sufficient time or money to complete targets, or scheduling activities for the wrong time period. One recipient in Zambia underestimated the time needed to analyze baseline data on constituent needs prior to the planned distribution of educational materials on malaria prevention to 1,000 households. The recipient eventually printed the materials but did not reach the targeted households within the planned time frame. In Sri Lanka, a malaria grant recipient underestimated the cost of establishing a community center and had to redesign its program plan to remain within the grant budget. According to the progress report, the principal recipient established new targets to use the funds originally budgeted to build the center, delaying grant implementation. Further, a recipient in Kenya did not conduct 3,000 planned community education skits aimed at preventing HIV during one disbursement period and attributed the shortfall to a conflict with annual school examinations. The Global Fund recognized that recipients’ difficulty in setting targets for the initial grants derived in part from the fact that it was developing procedures and guidelines at the same time that it was approving and signing round-1 and round-2 grants. Conversely, in some cases, adept planning positively affected grant performance. In Indonesia, several grant recipients reported that effective planning for TB treatment allowed various districts to complete work plans early in the grant, in turn allowing the provinces that oversee those districts to meet their target of developing budgets on time. 
In Haiti, one principal recipient exceeded its targets by planning activities around World AIDS Day, increasing the demand for, and the principal recipient’s provision of, AIDS-related services such as condom distribution. Recipients frequently reported that contracting delays with subrecipients, vendors, or other service providers caused them to miss quarterly targets. For example, UNDP, a principal recipient in Haiti, was unable to hold a planned HIV conference because of delays in signing a contract with a subrecipient. Delays in selecting and reaching contracts with subrecipients caused the Argentine grant to start slowly, the Global Fund secretariat reported. In Thailand, the ministry of public health recipient could not establish TB treatment services because of a subrecipient delay in selecting a site and contract. Grant recipients and the Global Fund secretariat also cited procurement delays as reasons for missing quarterly targets. For example, recipients of malaria grants in Tanzania and Zambia reported that they did not distribute the targeted number of bed nets due to lengthy government procurement processes. In addition, during our visit to Zambia, we found that local spending restrictions also affected recipients’ ability to meet and report on targets. A district health director explained that spending restrictions delayed her purchase of a new hard drive for her office’s computer, which slowed the district’s grant activities and reports to the principal recipient. In Kenya, we found that the limited capacity of the Kenyan health ministry’s procurement agency and the ministry’s reluctance to contract with outside procurement experts led to delays and, as a result, to gaps in the supply of HIV test kits, which bilateral donors had to fill. 
In Ghana, according to the Global Fund secretariat, the government’s slow, bureaucratic procurement processes caused delays that contributed to the grant’s poor performance in reaching people with HIV/AIDS and opportunistic infections. However, the Global Fund secretariat reported that some principal recipients’ efficient procurement helped them meet their targets. For example, a principal recipient in Madagascar managed procurement exceptionally well throughout the grant and, as a result, exceeded its targets for distributing bed nets. The Global Fund disbursed this grant’s phase-2 funding early, because the recipient had implemented the program rapidly and was therefore able to use the additional funds. Another recipient in Madagascar consistently met targets, and its disbursements to subrecipients accelerated. The Global Fund also reported that, after initially strengthening its capacity, the principal recipient in Moldova made substantial progress with procurement activities, thereby lowering treatment costs per patient and realizing significant savings due to lower acquisition costs. To determine whether certain grant characteristics were factors associated with the percentage of funds disbursed, we analyzed 130 grants with first disbursements on or before December 31, 2003. We found no significant association between the type of principal recipient, grant size, or disease targeted and the percentage of a grant’s funds disbursed, after taking into account the time elapsed since the first disbursement. (See app. II for details of our analysis.) For example, the Global Fund disbursed a smaller percentage of grants to government recipients than to recipients in the private sector and faith-based organizations, but these differences do not incorporate other factors such as grant size or time elapsed since the first disbursement. 
We also considered whether disbursements were made in a timely manner, that is, within 135 days (a 90-day quarter plus a 45-day grace period allowed by the Global Fund for reporting). Overall, we found that 35 percent of the disbursements were made within 135 days and that later disbursements were more timely than earlier ones. The number of timely disbursements was too small at any given disbursement stage to determine whether timeliness varied according to recipient or disease type, or grant size. We noted problems associated with the four information sources that the secretariat draws on for periodic disbursement and renewal decisions. In addition, although the Global Fund’s stated policy is to disburse funds based on performance and to operate in a transparent and accountable manner, we found that the secretariat did not document its reasons for periodic disbursement decisions during phase 1. Similarly, some of the secretariat’s recommendations regarding grant renewals for phase 2 have not been fully documented, and stakeholders have raised additional concerns regarding the timing of the phase-2 renewal process, dated information, low grant expenditure, and potential politicization of disbursement decisions. We found the following problems associated with the sources of information that the secretariat uses in making periodic disbursement decisions during phase 1 and determining whether to renew grants during phase 2. Recipient progress reports vary in quality. Some reports do not explain why recipients missed targets, and the limited monitoring and evaluation capabilities of many recipients raise questions about the accuracy of their reporting. Secretariat officials acknowledged that guidance for planning program activities, setting indicators, and monitoring and evaluating progress was not available when initial grants were signed. 
However, Global Fund secretariat and other officials have raised questions about the ability of principal recipients to discharge their responsibility for reviewing and monitoring the activities of subrecipients to which they disburse funds. According to the Global Fund official in charge of grant operations, many early grant proposals were overly ambitious and hurriedly assembled; he said more recent proposals were more realistic and better designed. UNAIDS officials also stated that when principal recipients’ progress updates show poor performance, it is not always clear whether grants are underperforming or recipients are failing to effectively report performance. For example, when a progress update shows failure to achieve targets, the principal recipient and subrecipients may have actually completed the activities but not understood how to record them. Recipient expenditure data are incomplete. Recipients’ cash-flow reports do not include data on expenditures below the level of the principal recipient. In addition, principal recipients may not always document their disbursal of money to subrecipients. Moreover, Global Fund and other officials have questioned whether some principal recipients have the expertise needed to monitor subrecipients’ expenditures. Further, secretariat officials stated that although the achievement of program targets and cash flow are closely linked, recipients’ expenditures do not necessarily indicate that they are meeting their targets. The officials stated that utilizing this source of information is essential to guard against treatment interruptions or irreparable harm to struggling programs that are not yet viable but show strong potential. Local fund agent assessments are inconsistent. According to Global Fund secretariat officials and others, the ability of local fund agents to effectively verify program activities varies widely. 
A secretariat-commissioned assessment reported that the current local fund agent system does not provide grant managers with a sufficient level of risk assurance for continued funding. The study, as well as Global Fund and development partner officials, reported that although most local fund agents are competent to assess and verify financial accountability, they often lack the knowledge and experience needed to assess and verify recipients’ performance—specifically, recipients’ ability to meet program targets, monitor and evaluate progress, and procure and manage drugs and other medical supplies. The study also stated that local fund agents’ assessments of financial and program-related capacity and verifications of activities are limited and rarely include site visits to implementing subrecipients. Contextual information is systematically collected for phase 2 but not for phase 1. To better understand why recipients received phase-1 disbursements when they did not meet many of their performance targets, we requested full disbursement dossiers from the secretariat; however, the dossiers contained very little contextual information supporting the disbursement decisions. The contextual information provided was often in the form of hand-written notes or e-mail correspondence that had been collected ad hoc. Secretariat officials acknowledged that while they collect contextual information through detailed questions on the scorecards for phase-2 decisions, they have no systematic method for collecting such information for phase-1 decisions. Although the Global Fund considers contextual information in its funding decisions, it does not document the extent to which it uses such information. Although the files for the 38 grants we reviewed contained information on progress toward targets and cash flow, they contained little or no documentation explaining why the Global Fund approved the disbursements. 
Overall, for the 38 grants we reviewed, we determined that recipients met, on average, 50 percent of their targets; partially met 21 percent; and failed to meet 24 percent. For 6 percent of the targets, the information in the progress reports was insufficient to determine whether the target had been met, partially met, or not met. In some of these cases, the Global Fund disbursed funds to recipients even though they reported that they had met few or none of their targets. For example: The principal recipient for Sri Lanka’s second malaria grant received disbursements for its third and fourth quarters, although it had submitted two progress updates showing that it met only 2 of its 14 targets for the third quarter and 4 of 13 targets for the fourth quarter. The secretariat provided no written information explaining its approval of the third-quarter disbursement and provided only a one-sentence declaration of agreement regarding the fourth-quarter disbursement. In both cases, the local fund agent had recommended that the recipient receive less than the amount requested, citing cash-flow considerations but not mentioning performance against targets. In each case, the secretariat disbursed the amount that the local fund agent recommended. The principal recipient for Thailand’s TB grant received its second disbursement although it had met only 1 of 29 performance targets. The secretariat approved the full amount requested, stating that the recipient had not requested sufficient funds in its previous disbursement request, although the grant manager did not provide documentation to validate this assessment. The local fund agent had noted the grant’s poor performance and, acknowledging the grant’s low cash reserves, suggested a disbursement of 25 percent of the recipient’s request. Further, the Global Fund secretariat does not systematically track denied disbursement requests or publicly document denials. 
Secretariat officials acknowledged that they currently have no mechanism for tracking or documenting these instances. According to these officials, the denial may eventually be documented in a memorandum on the grant’s disbursement request history once a disbursement is approved or, if the grant is ultimately canceled without further disbursement, in a grant-closing memorandum. According to grant management officials, the secretariat is to unequivocally demonstrate satisfactory performance of all grants recommended to the board for continued funding under phase 2. However, we found that the secretariat did not always clearly explain the overall score it assigned each grant when it recommended the grant for continued or conditional funding. Although a substantial part of the score is to be based on recipients’ performance against agreed-on targets (e.g., the number of people to be reached by disease mitigation services), the final score can also reflect grant managers’ professional judgment, contextual information from multilateral and bilateral donors, and past disbursement rate data. Secretariat officials said that decisions based on these information sources should be documented when an overall score does not seem to reflect recipients’ achievement of individual targets. However, we did not find such documentation in the grant scorecards for 8 of 25 early grants that the Global Fund has considered for continued funding after an initial 2-year period. The secretariat gave 3 of the grants an overall score of B2 yet recommended “conditional go,” which corresponds to a B1 score. For another grant, the secretariat gave a B1 score for three indicators, two of which concern the number of people reached by treatment, care, or other disease mitigation services, yet made an overall recommendation of “go,” which corresponds to an A score. 
Such discrepancies between scores and recommendations are significant, because the recommendations determine the levels of action that recipients are to undertake before receiving phase-2 funding. Seven of the scorecards also raised concerns about the quality of recipients’ data and their monitoring and evaluation capabilities. Of the 25 grants, the Global Fund decided to cancel one, and the secretariat’s scorecard clearly explained the reasons for recommending that the board cancel the grant. According to the Global Fund, the phase-2 renewal process is a critical checkpoint to ensure that grants show results and financial accountability. However, some stakeholders raised concerns about the process that the Global Fund used to review the first set of grants eligible for renewal. For example, a representative of a local fund agent stated that this process may occur too early in the life of a grant and that progress may be better evaluated when a grant approaches the 3-year mark. Further, officials representing a Global Fund board member stated that data provided to the board during the first round of renewal decisions did not contain expenditure data. These officials stated that when they sought expenditure data (i.e., amounts spent by grant recipients on program activities) on the Global Fund’s Web site in March 2005, the most recent information for their grants of concern had been posted in June 2004. Subsequent data submitted to the board for phase-2 renewal decisions contained expenditure information. In one case, a recipient applying for phase-2 funding and recommended by the secretariat for continued funding had received more than 75 percent of its 2-year grant amount yet had transferred only 12 percent of this money to subrecipients for program activities. 
These officials also raised concerns over the potential for the politicization of board decisions because the board had returned three “no go” recommendations to the secretariat for further consideration after some recipients and NGOs lobbied board members. The Global Fund’s secretariat is launching a range of initiatives to address challenges to grant performance and improve the overall management of grants. Systemwide, the secretariat is (1) reorganizing and strengthening its units, (2) developing a risk assessment mechanism and early warning system, (3) streamlining reporting and funding procedures, (4) working with partners to strengthen recipient capacity, and (5) clarifying guidance for CCMs. However, the board has not clearly defined the CCMs’ role in overseeing grant implementation. The Global Fund has also responded to country-specific challenges in Kenya and Ukraine. To improve grant management and documentation of funding decisions and to better support underperforming grants, the Global Fund took the following actions in 2004: Reorganized the secretariat’s operations unit and increased the number of staff from 118 to 165. For example, it added eight grant manager positions and established regional teams, each with a team leader, so that more than one grant manager is responsible for a set of grants in the countries within a regional team. To better document periodic disbursement decisions, the secretariat added a new position, known as a program officer, to its grant management structure. The secretariat is currently recruiting program officers for each regional team, who are to be responsible for documenting disbursement and other decisions and keeping track of grant milestones. Further, secretariat officials said that the Global Fund is planning to recruit additional grant management staff to conduct increased day-to-day recipient monitoring and assistance. 
The program officers and the additional grant management staff accounted for most of the increase in staff at the secretariat between 2004 and 2005, according to a Global Fund administrative official. Created the Operational, Partnerships and Country Support Unit to focus on problem grants. According to secretariat officials, this unit—which also includes new positions to liaise with development and technical assistance partners, local fund agents, and CCMs—will enable the secretariat to address grant performance issues before they become serious problems and will thereby better manage risk exposure. For example, the unit could mobilize intervention by high-level recipient government officials, solicit technical assistance from partners, or engage the United Nations Children’s Fund (UNICEF) to procure health commodities until the recipient government can set up a viable procurement system. Strengthened its strategic information and evaluation unit to improve monitoring and evaluation, data reliability, and quality assurance. To enhance the quality and consistency of the data that recipients report, in June 2004 the secretariat issued a monitoring and evaluation “toolkit” developed in cooperation with other donors and development assistance partners. This toolkit guides grant recipients to select consistent indicators to measure progress toward key program goals, such as the number of people with AIDS who were reached with drug treatment or the number of people given insecticide-treated bed nets to prevent malaria. The secretariat has also required attachments to each grant agreement that outline program indicators and the specific activities that enable recipients to meet these indicators and overall program goals. According to Global Fund officials, progress will more easily and consistently be measured when all grants have aligned their indicators and activities to this toolkit. 
Grant managers are currently working to accomplish this goal with the recipients they cover. According to the Global Fund, these developments have been important in harmonizing monitoring and evaluation approaches among partners at national and international levels and will help simplify country-level reporting to multiple donors by ensuring the use of a common set of indicators to measure interventions. Partners provided training on the toolkit in 2004. According to the secretariat, training is to continue in 2005. Recipients we met with in Thailand confirmed that they had received the toolkit. However, they said that it was not in their native language and therefore was not useful. In March 2004, the board approved establishing a Technical Evaluation Review Group with members from UNAIDS, WHO, and other partners to develop a system for assessing and ensuring data reliability. The group first met in September 2004. According to Global Fund officials, these efforts will result in more systematic reporting and analysis by recipients, the Global Fund, and partners and, consequently, in better comparisons of grants. To strengthen strategic information for monitoring grant performance, in fall 2004, the secretariat created a “data warehouse” that contains information from recipients’ progress reports and disbursement requests, donors, CCMs, and local fund agent assessments. Secretariat staff use the database to prepare “scorecards” that rank grants for the phase-2 renewal process. The secretariat has devised a risk-assessment model and early warning system to identify poorly performing grants and to more systematically alert grant managers when they need to intervene. Because the Global Fund disburses grants to recipients in countries with varying levels of economic development and capacity, its risk-assessment model will incorporate grant size and performance as well as country development and corruption indicators. 
By tracking key events in the context of grant and country risk, the grant portfolio managers can determine whether recipients have missed important milestones. The early warning system is to generate reports using indicators—for example, time elapsed between disbursements—to flag problems and trigger possible interventions. The system will also incorporate contextual information from country-based partners. When the system identifies slow-moving grants, staff from the secretariat’s Operational, Partnerships and Country Support unit will be able to assess and follow up with the appropriate level of intervention. For example, if a grant recipient in a high-risk country does not submit a progress report and disbursement request at the expected time, the system will alert staff that follow-up is needed. Although the system has not been fully implemented, secretariat officials said that their recent intervention in Tanzania exemplifies the way the system should work. The Tanzania malaria grant was not demonstrating progress after 1 year, as measured by the amount of funds disbursed compared with the amount that the secretariat expected to disburse. After following up with the principal recipient, secretariat staff realized that political infighting—rather than technical limitations—was inhibiting progress of the malaria program: competing groups were vying for control of grant funds and were uncertain about how to procure and distribute bed nets to vulnerable groups, such as pregnant women or women with young children. The government decided to give vouchers to members of vulnerable groups to enable them to purchase bed nets at a lower price; however, the ministry of health did not print or distribute the vouchers or specify where they should be distributed. The Director of the Operational, Partnerships and Country Support Unit traveled to Tanzania and met with development partners and high-ranking host government officials to encourage the government to take action.
The Global Fund brought in UNICEF, a key development partner in Tanzania, to work with the government’s malaria advisor as well as experts from the Swiss Tropical Medicine Institute to resolve the problems and get the program back on track. In response to concerns that grantee reporting requirements are difficult and time consuming for recipients, grant managers, and local fund agents, the secretariat instituted a new policy that changes the default for reporting from quarterly to every 6 months. In addition, the secretariat is considering new, more streamlined funding mechanisms than the current round-based approach. However, the board has not endorsed these changes, and some board members, including the United States, are opposed to them at this time. To decrease the administrative burden on grantees and to bring its practice more in line with other donor agencies, the secretariat instituted a semiannual reporting policy in July 2004. Some recipients still report quarterly, such as those implementing grants that the secretariat identified as high risk—for example, in countries with limited human resource capacity—while others have the option of using quarterly disbursements to meet their needs—for example, as a hedge against currency fluctuations. However, this policy change did not require board approval, and some board members, including the United States, do not support it. Although the Global Fund strives to be a funding mechanism that seamlessly fits into many country programs by providing additional funding where needed, it recognizes that its current practice of financing grants through rounds can disrupt countries’ planning and time lines and strains recipient capacity. In addition, some associated with the Global Fund said that rounds might lead CCMs and recipients to concentrate their energy on developing new proposals rather than implementing existing grants and that repeated rounds add greatly to the secretariat’s workload. 
A document submitted to the board by the secretariat stated that although the round-based grant approval system worked well for launching the Global Fund and identifying countries that submitted strong proposals, this system forced recipients to adapt their planning cycles to those of the Global Fund (rather than building on preexisting planning cycles), encouraged the submission of smaller proposals, and left a considerable amount of time between proposal submission and approval. This document presented several options for the board, such as creating two continuous funding streams—one for governments and another for civil society recipients. For example, government applicants could submit their national strategic plans for the coming years, highlighting financing gaps and facilitating integration of Global Fund financing with existing planning and budgeting systems, such as sectorwide approaches. According to the document, this approach would create incentives for CCMs to improve and accelerate the disbursement of funds and would ease the secretariat’s workload, allowing secretariat staff to spend more time managing grants and less time negotiating grant agreements. The board has not set time frames for further discussing this issue. According to U.S. board members, the board has not yet fully discussed or approved these changes, and a majority of board members oppose them at this time. Because most grant performance problems are associated with limited capacity at the country level, where the Global Fund has no presence and plays no part in program implementation, the Global Fund relies on its technical partners to provide technical expertise to grant recipients. Although the partners we spoke with expressed their strong support for the Global Fund, they also voiced concern that they have not received additional resources to provide the technical support that grant recipients have requested.
The Global Fund and partners reported that partners provided essential support that strengthened recipients’ capacity to prepare applications for Global Fund financing and helped address the underlying problems that affected grant performance. For example: UNAIDS, a key technical partner, has added about 30 monitoring and evaluation officers in various countries who are available to support CCMs in preparing grant performance reports for phase-2 renewals. UNAIDS has also intensified its capacity-building support at the country level. Several WHO departments have provided critical technical support. For example, WHO’s Stop TB unit supported 50 countries when they developed their applications for Global Fund financing. The Global TB Drug Facility worked with recipients in eight countries to identify and resolve procurement and supply management bottlenecks. WHO’s HIV/AIDS Department helped to develop comprehensive technical support plans for accelerating the scale-up of antiretroviral therapy and prevention services in 15 to 20 countries. In addition, according to the Global Fund, it collaborated closely with WHO’s Roll Back Malaria Department in 2004 to incorporate into existing grants new, more effective malaria treatments that use artemisinin-based combination therapy. USAID and the U.S. Centers for Disease Control and Prevention (HHS/CDC) are assisting grantees in a number of countries. For example, USAID is supporting TB grants in numerous ways, including providing training on procuring and managing medical supplies, addressing country-level financial management constraints, and conducting human resource assessments to determine existing capacity needs. In another instance, HHS/CDC is assisting one grantee in reporting and monitoring activities and revising project funds to improve grant implementation. 
HHS/CDC has also coordinated the implementation and monitoring of activities under another country’s TB grant, participating in supervisory visits to districts to assess their progress and compiling and submitting quarterly reports for the TB grant to the ministry of health. UNAIDS and WHO officials in Geneva and in the field expressed strong support for the Global Fund but consistently raised concerns about their organizations’ ability to respond to increasing numbers of requests from grant recipients for help in addressing issues underlying performance problems. For example, although UNAIDS recently added about 30 monitoring and evaluation officers in its country and regional offices, officials said that the agency’s resources are being stretched thin and that it cannot provide assistance to all Global Fund grant recipients. Likewise, WHO officials said that its regional and country staff are dedicated to providing technical assistance, but because WHO is not funded to support Global Fund grants it is often unable to respond to all recipients’ requests for help. According to officials from WHO’s HIV/AIDS, Stop TB, and Roll Back Malaria departments, the Global Fund works under the assumption that UN agencies have a mandate to provide technical assistance. However, unless it gets more money from its member countries for this purpose, WHO does not have the resources to keep up with the massive increase in need for technical assistance owing to Global Fund grants. In addition, WHO officials pointed out that the Global Fund does not encourage recipients in African countries to take advantage of WHO’s Global Drug Facility to procure quality-assured TB drugs at the cheapest prices available; instead, the Global Fund encourages competition and reliance on local industry. To strengthen accountability in recipient countries, the board has clarified some roles and responsibilities for the CCMs. 
The board has stated that CCMs are responsible for overseeing grant implementation and are therefore to play an important role in deciding whether grants should be renewed for phase-2 funding. To enhance and clarify CCM functioning, in March 2005 the secretariat convened regional workshops in Zambia and India on CCM best practices. In addition, to improve communication between the Global Fund and CCMs, the secretariat is compiling contact information for all CCM members. This information will enable it to communicate directly with the members instead of relying on the CCM chairperson to disseminate information. Secretariat officials acknowledged that no formal studies conclusively demonstrate a link between CCM functioning and grant performance. However, the Global Fund’s March 2005 report stated that many of the (then) 27 grants eligible for phase-2 funding benefited from several factors, including full levels of participation by CCM members in that body. Further, the report stated that low levels of participation and involvement by CCM members were a key factor in poor performance. Secretariat officials stated that they plan to initiate a study at the end of 2005 to systematically investigate links between CCM functioning and grant performance, given that a number of additional grants will then have neared the 2-year mark and gone through the phase-2 decision process. In response to findings from several earlier studies commissioned by the Global Fund on CCM functioning, in November 2004, the board agreed on specific requirements for CCMs. However, it has not clearly defined CCMs’ role in monitoring grant implementation. In April 2005, the board directed CCMs to develop tools and procedures for overseeing grants, stating generally that these tools and procedures “should include but need not be limited to” a list of five activities such as recording key oversight actions and developing a work plan that “could include” site visits.
The board noted that because CCMs vary from country to country, these guidelines can be adapted and their application paced as needed. According to secretariat staff, the board has not reached consensus regarding CCMs’ oversight role because some members want clear, specific requirements for CCMs while others prefer the more general guidelines. In addition, in 2004, the board agreed on a checklist for measuring CCM performance that focuses mostly on the makeup of the CCMs, participation and communication among members, and governance and management. However, the checklist did not include parameters for measuring the effectiveness of CCMs in overseeing grant performance. Participants at the Zambia workshop recommended that the secretariat develop more specific guidelines defining the oversight role of the CCM. The Global Fund secretariat intervened in at least two countries in response to grant performance problems. For example, in Kenya, the secretariat intervened in 2004 at the request of donors and board members to encourage the principal recipient to hold regular meetings with subrecipients and designate staff to administer and monitor the grants. The secretariat also intervened in Kenya to improve coordination by facilitating new CCM procedures, such as designating multiple minute-takers to ensure the accuracy of the minutes and making sure that minutes are circulated promptly. According to one CCM member, two additional people now take notes at each meeting; however, the minutes are not being circulated in advance of the next meeting. In commenting on a draft of this report, U.S. government officials said that, despite these interventions, problems persist. For example, they said that CCM meetings in Kenya are too infrequent and poorly prepared; decisions are made outside of the meetings; and the minutes are often inaccurate. 
In Ukraine, the Global Fund suspended three HIV/AIDS grants in January 2004 after investigating irregularities in the principal recipients’ procurements that development partners had brought to its attention a month earlier. The secretariat had also found that after nearly 12 months of a 24-month program, the recipients had spent less than 4 percent of the total 2-year amount for the three grants. The Global Fund had disbursed a total of $7.1 million to the principal recipients, from whom it obtained $6.3 million in reimbursements. In March 2004, the secretariat signed an agreement with a new principal recipient to continue the HIV/AIDS mitigation activities specified in the original grants; in addition, it transferred $300,000 to this entity to avoid interrupting ongoing programs. The Global Fund’s mandate reflects inherent tensions. On the one hand, the Global Fund is to function solely as a funding entity with no implementing role and to encourage recipient country bodies such as the CCM to be responsible for implementing and overseeing grants. On the other hand, it is to disburse funds rapidly while also ensuring that recipients are able to account for expenditures and produce measurable results in addressing the three diseases. In seeking to balance these tensions and further improve its performance, the Global Fund has revised—and continues to revise—its processes. Some systemwide changes require board approval or will take time to fully implement, whereas others can be implemented relatively quickly. Capacity in recipient countries, guidance, coordination, planning, and contracting and procurement are pivotal to grant performance and therefore merit continued attention. 
However, local fund agents’ frequent lack of expertise in assessing these factors, and many recipients’ limited monitoring and evaluation capabilities, raise questions about the accuracy and completeness of the information that the secretariat uses to make its periodic disbursement and funding renewal decisions. In addition, despite recent improvements, the Global Fund’s lack of consistent, clear, and convincing documentation of its funding decisions may hamper its ability to justify these decisions to donors and other stakeholders, in accordance with its principles of transparency and accountability. To ensure that all funding decisions are clearly based on grant performance and reliable data, it is critical that the Global Fund resolve these issues in a timely manner. To improve the quality of the information on which the Global Fund bases its funding decisions and the documentation explaining these decisions, we recommend that the U.S. Global AIDS Coordinator work with the Global Fund’s Board Chair and Executive Director to take the following three actions: complete efforts to ensure that local fund agents have the necessary expertise to evaluate performance data on disease mitigation that recipients submit, continue to work with development partners to strengthen the quality and consistency of that data by enhancing recipients’ capacity for monitoring and evaluating their financial and program-related activities, and continue efforts to clearly document the Global Fund’s reasons for periodically disbursing funds and renewing grant agreements. We requested comments on a draft of this report from the Executive Director of the Global Fund, the Secretaries of State and HHS, and the Administrator of USAID, or their designees. We received formal comments from the Global Fund as well as a combined formal response from State, HHS, and USAID (see apps. III and IV). 
The Global Fund concurred with the report’s conclusion and recommendations and noted steps it is taking to improve documentation of grant performance such as organizing regional training of principal recipient staff to improve the quality of their reporting; defining universal and detailed performance indicators for each grant to more systematically track performance; and tailoring grant oversight and terms of reference for local fund agents based on grant risk. State, HHS, and USAID largely concurred with the report’s conclusions but did not comment on the recommendations in their formal response. Both the Global Fund and the U.S. agencies also submitted informal, technical comments, which we have incorporated into the report as appropriate. We are sending copies of this report to the Global Fund Executive Director, the U.S. Global AIDS Coordinator, the Secretary of HHS, the Administrator of USAID, and interested congressional committees. Copies of this report will also be made available to other interested parties on request. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. In May 2003, the President signed a law directing the Comptroller General to monitor and evaluate projects supported by the Global Fund. This report reflects our review of grants that the Global Fund began disbursing before the beginning of 2004—that is, grants that have had at least 1 year to perform. 
In this report, we (1) describe the Global Fund’s process for managing grants and disbursing funds, (2) identify factors that have affected grant performance, (3) review the basis for, and documentation of, the Global Fund’s performance-based funding, and (4) describe the Global Fund’s recent refinements for managing grants and improving their performance. To describe the Global Fund’s process for managing grants and disbursing funds, we reviewed Global Fund documents, including The Global Fund Operational Policy Manual and related guidance documents; A Force for Change: The Global Fund at 30 Months; The Global Fund to Fight AIDS, Tuberculosis and Malaria: Annual Report 2002/2003; and Investing in the Future: The Global Fund at Three Years. We also interviewed Global Fund officials in Washington, D.C., and in Geneva, Switzerland. To identify factors affecting grant performance, we conducted three types of analysis. First, we selected 13 countries that had grants with a first disbursement on or before December 31, 2003, to allow for at least 1 year of performance, and that had grants covering more than one principal recipient. In addition, all but 4 of these countries had grants covering more than one disease. We reviewed Global Fund dossiers for 38 grants to recipients in these countries and categorized reasons given for deviation from performance targets. (Our initial scope included the 45 grants to these countries, but for 7 of the grants there were no disbursement requests available during the period of our review.) We found 75 progress reports/disbursement requests and 51 local fund agent assessments associated with 24 of the 38 grants. These 38 grants represent 29 percent of the 130 grants that had received a first disbursement by the end of 2003. 
Starting with the grant’s second disbursement, we included all disbursement requests from each grant that were available on the Global Fund Web site as of November 2005, and a few that we received subsequently from the Global Fund. We requested full disbursement dossiers from the Global Fund. These dossiers contained principal recipients’ progress reports and cash-flow/expenditure data, local fund agents’ reviews of the recipients’ information and recommendations about further disbursements, and, in most cases, additional documents such as correspondence between the Global Fund secretariat and the principal recipient. Using this information, we coded reasons given for deviation from grantees’ agreed-upon performance targets into 1 of about 30 categories. We grouped this information into 5 major categories—resources or capacity; coordination; programmatic problems, needs, or changes; procurement; and factors beyond recipients’ control. Within these categories, we developed specific subcategories such as guidance, decisions or plans not made, done, or available; signing of contracts or agreements delayed or not done; and limited trained human resources. As in any exercise of this type, the categories developed can vary when produced by different analysts. To address this issue, two GAO analysts reviewed a sample of the progress reports and independently proposed categories, separately identifying major factors and then agreeing on a common set of subcategories. We refined these subcategories during the coding exercise that followed. We then analyzed the reasons for deviations from all of the recipients’ progress reports and placed them into one of the subcategories. When information in the progress reports was insufficient to determine how to code a reason, we consulted the local fund agents’ reports. We tallied the counts in each subcategory and identified the subcategories mentioned in the greatest number of grants.
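The final step described above, ranking subcategories by the number of distinct grants that mentioned them, can be illustrated with a short sketch. The grant identifiers and subcategory labels below are hypothetical; only the counting logic reflects the procedure (a reason repeated within one grant's reports counts that grant only once).

```python
from collections import defaultdict

def rank_subcategories(coded_reasons):
    """Rank subcategories by the number of distinct grants mentioning them."""
    grants_per_subcategory = defaultdict(set)
    for grant_id, subcategory in coded_reasons:
        grants_per_subcategory[subcategory].add(grant_id)
    return sorted(
        ((sub, len(grants)) for sub, grants in grants_per_subcategory.items()),
        key=lambda item: item[1],
        reverse=True,
    )

# Hypothetical coded entries: (grant ID, subcategory)
coded = [
    ("KEN-01", "limited trained human resources"),
    ("KEN-01", "contracts delayed"),
    ("KEN-02", "limited trained human resources"),
    ("THA-01", "limited trained human resources"),
    ("THA-01", "limited trained human resources"),  # repeat within one grant counts once
    ("ZMB-01", "contracts delayed"),
]
print(rank_subcategories(coded))
# [('limited trained human resources', 3), ('contracts delayed', 2)]
```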
The information in the disbursement requests varied in detail and quality. The two analysts, together with a methodologist, therefore discussed and documented categorization criteria and procedures throughout the analysis, and the methodologist reviewed the entire analysis as a final step. As a validity check on our document analysis and to identify frequently cited factors that affected grant performance, we compared the information in the subcategories mentioned in the greatest number of grants to information available from our fieldwork (see below) and determined that both sources of information reported similar findings. In addition, we reviewed the 75 progress reports/disbursement requests associated with these 38 grants and tallied the total number of targets in each request. We ranked each target using the same numeric rating system the Global Fund uses for phase 2 (see fig. 3). Because many of the total 1,125 targets were nonnumeric (e.g., developing a monitoring and evaluation plan), we did the following: (1) if the principal recipient clearly met the target, we ranked the target as met (“meeting or exceeding expectations”) and included it with the numeric targets that fell into that category; (2) if the principal recipient clearly showed no progress toward meeting the target, we ranked it as not met (“unacceptable”) and included it with the numeric targets in that category; and (3) if the principal recipient partially met the target, we gave it a separate ranking (“partially met nonnumeric target”) and characterized it as partially met, along with the partially met numeric targets. To arrive at percentages for the targets in each category, we first calculated the percentage for each progress report/disbursement request and then averaged the percentages for the category from all the reports. Because the number of targets in each report varied greatly, we averaged the percentages rather than the numbers of targets to ensure that each report was given equal weight.
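The weighting choice described above (averaging per-report percentages so that each report counts equally, rather than pooling all targets, which would weight large reports more heavily) can be made concrete with a small sketch. The figures are illustrative, not taken from the grant dossiers.

```python
def per_report_average(reports):
    """Average the 'met' percentage across reports, weighting each
    report equally rather than each target."""
    percentages = [met / total * 100 for met, total in reports]
    return sum(percentages) / len(percentages)

def pooled_percentage(reports):
    """For contrast: pool all targets, which weights large reports more."""
    met = sum(m for m, _ in reports)
    total = sum(t for _, t in reports)
    return met / total * 100

# Hypothetical reports: (targets met, total targets)
reports = [(9, 10), (1, 40)]
print(per_report_average(reports))  # (90 + 2.5) / 2 = 46.25
print(pooled_percentage(reports))   # 10 of 50 targets = 20.0
```

With one small and one large report, the two methods diverge sharply, which is why equal weighting per report matters when report sizes vary greatly.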
We excluded from our calculations those few targets for which the information available was not adequate to determine whether or to what extent the target was met. Second, we reviewed documents obtained from field visits to four countries—Indonesia, Kenya, Thailand, and Zambia—and interviewed a wide variety of government, civil society, and bilateral and multilateral development officials in these countries involved in grant implementation or oversight. All four of these countries received more than $36 million in committed funds for several grants that covered more than one disease, and three of them (Kenya, Thailand, and Zambia) also have grants that cover both government and civil society recipients. In addition, we interviewed officials from the Global Fund, the World Health Organization (WHO), and the Joint UN Programme on HIV/AIDS (UNAIDS) in Washington, D.C., and Geneva, Switzerland. Finally, to determine whether the percentage of funds disbursed for each grant (after the first disbursement) and the timeliness of the disbursements were associated with grant characteristics such as type of principal recipient, grant size, or disease targeted, we analyzed 130 grants with first disbursements on or before December 31, 2003. (See app. II for a more detailed discussion of this methodology.) To assess the reliability of the Global Fund’s data, we (1) posed a set of standard data reliability questions to knowledgeable agency officials, (2) performed basic electronic reasonableness tests, and (3) interviewed officials about a few small anomalies that we found during our analysis. We found only one minor limitation, namely that disbursement dates were missing for fewer than 5 percent of the disbursements. Based on our assessment, we determined that the data were sufficiently reliable to generate descriptive statistics about the program and to be used for advanced statistical modeling work.
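The electronic reasonableness tests mentioned above can be sketched along the following lines: confirm that every grant in scope has a first disbursement on or before the cutoff date, and measure the share of disbursements with no recorded date. The record layout and values here are assumptions for illustration, not the Global Fund's actual data structure.

```python
from datetime import date

def reasonableness_checks(grants, cutoff=date(2003, 12, 31)):
    """Flag grants whose first disbursement is missing or after the cutoff,
    and compute the share of disbursements with no recorded date."""
    problems = []
    dated = missing = 0
    for g in grants:
        if g["first_disbursement"] is None or g["first_disbursement"] > cutoff:
            problems.append(g["id"])
        for d in g["disbursement_dates"]:
            if d is None:
                missing += 1
            else:
                dated += 1
    missing_rate = missing / (missing + dated)
    return problems, missing_rate

# Illustrative records, not actual Global Fund data
grants = [
    {"id": "A", "first_disbursement": date(2003, 6, 1),
     "disbursement_dates": [date(2003, 6, 1), date(2003, 12, 1)]},
    {"id": "B", "first_disbursement": date(2004, 2, 1),   # outside scope
     "disbursement_dates": [date(2004, 2, 1), None]},
]
problems, rate = reasonableness_checks(grants)
print(problems, rate)  # ['B'] 0.25
```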
To review the basis for, and documentation of, the Global Fund’s performance-based funding, we examined Global Fund documents— including The Global Fund Operational Policy Manual and related guidance documents, the dossiers for the 38 grants that had a first disbursement on or before December 31, 2003, and documents supporting Global Fund decisions to continue or discontinue funding 25 of 28 grants that had reached their phase-2 renewal point and been reviewed by the secretariat as of March 31, 2005. We also analyzed local fund agents’ assessments to determine how often grant managers documented disbursement decisions. In addition, we interviewed Global Fund officials in Washington, D.C., and in Geneva, Switzerland, and officials from the Departments of State and Health and Human Services (HHS), and the U.S. Agency for International Development (USAID). To describe the Global Fund’s recent refinements for managing grants and improving their performance, we reviewed Global Fund documents including The Global Fund Operational Policy Manual and related guidance documents and organization charts, and job descriptions for the positions of local fund agent officer, country coordinating mechanism (CCM) coordinator, program officer, and fund portfolio manager (grant manager). We also examined Global Fund papers, including the Discussion Paper on the Core Business Model for a Mature Global Fund; Update on New Measures of Performance and Early Warning System; Update on the Global Fund Information Management Platform; Revised Guidelines on the Purpose, Structure and Composition of Country Coordinating Mechanisms and Requirements for Grant Eligibility; and Performance Standards and Indicators for CCM Monitoring. In addition, we reviewed the report Investing in the Future: The Global Fund at Three Years and the Monitoring and Evaluation Toolkit for HIV/AIDS, Tuberculosis, and Malaria. We also reviewed documents from a March 2005 CCM workshop conducted in Zambia. 
Further, we reviewed documents obtained during fieldwork in Kenya, conducted follow-up correspondence with CCM members in Kenya, and reviewed Global Fund documents concerning grants to Ukraine. Additionally, we interviewed officials from the Global Fund, the Departments of State and HHS, USAID, UNAIDS, and WHO. We conducted our work from June 2004 through March 2005, in accordance with generally accepted government auditing standards. This appendix provides descriptive information related to the 130 grants that had received their first disbursements from the Global Fund on or before December 31, 2003, and the results of analyses we undertook to determine whether some types of grants had disbursed a larger percentage of their 2-year funds than others and to estimate the number of disbursements that were made in a timely fashion. Disbursements refer to those from the Global Fund to the principal recipient, not from the principal recipient to subrecipients. Data were current as of February 4, 2005. Table 2 shows selected characteristics of the 130 grants we reviewed. Table 3 shows the performance of different grant types with respect to receiving disbursements. Some of these characteristics varied by type of grant, although many of the differences were not significant. Ministries of finance, on average, made a smaller number of disbursements and disbursed a lower percentage of their grants, although they also had made their first disbursements later and therefore had less time to make disbursements. Similarly, larger grants disbursed lower percentages of their grant amounts than smaller grants; but again, differences in the time elapsed make it difficult to know whether these differences reflect anything more than the time they had to make disbursements. Differences in disbursements, percentages disbursed, and average days since the first disbursement were insignificant across grants dedicated to the different types of diseases. 
To determine whether the differences in the percentage disbursed varied by type of grant, we used ordinary regression techniques. The Global Fund also analyzed grant disbursements, reporting in March 2005 that disbursements are indicative of performance. Our analysis differed from the Global Fund’s in that we looked at the percentage of the 2-year grant amount disbursed since the first disbursement, whereas the Global Fund looked at the percentage that was disbursed relative to the percentage that was expected to be disbursed since the first disbursement. Because the actual effect of time turns out to be nonlinear—meaning that although time elapsed since the first disbursement has a significant effect on the percentage disbursed, that effect decreases over time—we estimated the effect of time directly before estimating differences in the percentage disbursed across different types of grants. We fit bivariate regression models (models 1-4 in table 4) to estimate and test the significance of the gross effects of time since first disbursement, principal recipient type, disease type, and grant size (or the effects of each of these factors, ignoring all others) and a multivariate regression model (model 5) to estimate the net effects of each (or the effects of each after controlling for the effects of the others). Table 4 shows these results. Model 1 in table 4 shows the effect that time, or days between the first disbursement and February 2, 2005, has on the percentage of the grant disbursed. The significant time-squared term indicates that the effect of time is nonlinear. This nonlinearity makes the interpretation of the time coefficients somewhat less straightforward, but the positive time coefficients in table 4 indicate, not surprisingly, that grants that have had more time to make disbursements have disbursed a larger percentage of the 2-year grant amount remaining after the first disbursement. 
The negative coefficients associated with the squared term mean that over time, time is less of a factor, or that the difference in the percentage disbursed between 100 days and 200 days is greater than the difference between 300 and 400 days. The sizable F-statistic at the base of the column for model 1 attests to the significance of the effect of time, and this nonlinear effect of time explains 17 percent of the variation in the percentage disbursed of the amount remaining after the first disbursement. Models 2, 3, and 4 estimate the gross effects of principal recipient type, disease type, and grant size on the percentages disbursed, after subtracting the amount of the first disbursement. These differences are estimated using dummy variables to indicate the differences between the grant categories named in the table and the omitted referent category (civil society for principal recipient type, HIV/AIDS for disease type, and less than $2 million for grant size). The constants for each of these models reflect the percentages disbursed for grants in the referent categories, and the coefficients indicate the differences between the percentages for the categories in the table and the percentages for the referent categories. The overall percentage disbursed was 21 points lower for ministry of finance grants, and roughly 15 points lower for ministry of health grants and other government grants, than for civil society grants. These differences reflect the results of ignoring, rather than controlling for, differences in time since first disbursement, grant size, and disease targeted. Model 5 estimates all of these effects simultaneously and, as such, provides us with net effect estimates, or estimates of each effect, controlling for the others. It shows that the time each grant has had since its first disbursement is the principal determinant of the amount disbursed. Grants that made their first disbursement earlier disbursed larger amounts of their remaining 2-year awards. 
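The quadratic time specification in model 1 can be illustrated with a short sketch. The data here are simulated, not the Global Fund's, and the coefficient values are assumptions chosen only to reproduce the qualitative pattern reported: a positive time term with a negative time-squared term, so the effect of elapsed time diminishes over time.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated grants: days since first disbursement and percentage disbursed.
days = rng.uniform(100, 700, size=130)
# Assumed true relationship: disbursements rise with time but flatten out.
pct = 10 + 0.12 * days - 8e-5 * days**2 + rng.normal(0, 5, size=130)

# Design matrix for model 1: constant, time, and time-squared.
X = np.column_stack([np.ones_like(days), days, days**2])
(b0, b1, b2), *_ = np.linalg.lstsq(X, pct, rcond=None)

print(f"constant={b0:.2f}, time={b1:.4f}, time^2={b2:.8f}")
# A positive fitted time coefficient with a negative time-squared
# coefficient matches the reported pattern: more elapsed time means a
# higher percentage disbursed, with a marginal effect that shrinks.
```

Models 2 through 5 extend the same design matrix with dummy-variable columns for principal recipient type, disease type, and grant size, exactly as the dummy-variable coding described above.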
After these effects were controlled for, the differences between principal recipient types in the percentages disbursed became smaller than they appeared before controls; and the only difference, which had appeared marginally significant before controls, became insignificant afterward. We found no significant differences by disease type or grant size when looking at either the gross or net effects. We also looked at the extent to which disbursements were made in a timely fashion (i.e., in 135 days or less). As table 5 shows, 35 percent of all disbursements were timely, or within 135 days, and the extent to which disbursements were timely was greater for later disbursements than for earlier disbursements. The number of timely disbursements is too small at most stages for us to determine whether timeliness varies across grant types. In addition to the person named above, Candace Carpenter, Martin de Alteriis, David Dornisch, Cheryl Goodman, Kay Halpern, Bruce Kutnick, Reid Lowe, Jerome Sandau, Douglas Sloane, Julie Trinder, and Tom Zingale made key contributions to this report.
The Global Fund to Fight AIDS, Tuberculosis and Malaria—established as a private foundation in January 2002—is intended to rapidly disburse grants to recipients, including governments and nongovernmental organizations. The Global Fund has signed over 270 grant agreements and disbursed more than $1 billion. Governments provide most of its funding; the United States has provided almost one-third of the $3.7 billion the Global Fund has received. In May 2003, the President signed legislation directing the Comptroller General to monitor and evaluate Global Fund-supported projects. GAO reviewed grants that the Global Fund began disbursing before January 2004. This report (1) describes the Global Fund's process for managing grants and disbursing funds, (2) identifies factors that have affected grant performance, (3) reviews the basis and documentation of performance-based funding, and (4) notes recent refinements of Global Fund processes. Global Fund policy is to manage grants in a transparent and accountable manner, disbursing funds to recipients based on their demonstrated performance as measured against agreed-on targets. In implementing this performance-based funding system, Global Fund officials are to periodically assess whether the grant's principal recipient has made sufficient progress to warrant its next disbursement. After 2 years, the Global Fund is to determine whether to continue funding the grant for an additional 3 years. In making an assessment, officials consider several information sources, including the recipient's reports on its performance and expenditures and an independent agent's verification of the recipient's reports. Recipient countries' capacity to implement grants has been an underlying factor in grant performance, according to Global Fund and other knowledgeable officials. 
These officials, as well as principal recipients, also cited guidance, coordination, planning, and contracting and procurement as factors associated with challenges or successes in grant performance. For example, recipients in three countries reported that they could not meet their targets because they had not received national treatment guidelines. However, several grant recipients reported that, under certain circumstances, Global Fund guidance allowed them to quickly redirect funds, thereby enabling them to meet their targets. GAO found problems associated with the information sources that the Global Fund uses in making performance-based funding decisions. For example, the limited monitoring and evaluation capabilities of many recipients raise questions about the accuracy of their reporting. Moreover, the Global Fund has not consistently documented its determinations that recipients' performance warranted additional funding. For instance, the Global Fund's documentation did not explain its decisions to disburse funds to some recipients who reported that they had met few targets. Further, the Global Fund does not track or publicly document denied disbursement requests. The Global Fund is taking steps to address challenges to grant performance and improve the overall management of grants, including (1) reorganizing and strengthening its staff; (2) developing a risk assessment mechanism and early warning system to identify poorly performing grants; (3) streamlining reporting and funding procedures; (4) working with partners to strengthen recipient capacity; and (5) clarifying certain guidance for the country coordinating mechanism—the entity in each country responsible for developing grant proposals, nominating grant recipients, monitoring grant implementation, and advising the Global Fund on the viability of grants for continued funding. However, the Global Fund has not clearly defined the role of these entities in overseeing grant implementation.
Madam Chairman and Members of the Subcommittee: We are pleased to be here today to assist the Subcommittee in its review of the Internal Revenue Service’s (IRS) tax debt collection practices. Every year IRS successfully collects over a trillion dollars in taxes owed the government, yet at the same time tens of billions more remain unpaid. As Congress works to balance the federal budget, these unpaid taxes become increasingly important, as do IRS’ efforts to collect them. While most taxpayers voluntarily pay their taxes on time, some are unable or unwilling to do so. It is this latter group whom IRS must deal with in its efforts to collect delinquent taxes. In doing so, IRS faces several significant challenges, including a lack of accurate and reliable information on either the makeup of its accounts receivable or the effectiveness of the collection tools it has at its disposal, as well as receivables that are often years old, out-of-date collection practices, and antiquated technology. It is these problems and challenges—and their results—that led us, the Office of Management and Budget (OMB), and IRS to recognize IRS’ accounts receivable as a high-risk area. To address these challenges, significant changes are needed in the way IRS does business, but IRS cannot do it alone. Recently, the IRS Commissioner has compared IRS to financial service organizations such as banks, credit card companies, and investment firms. Like these organizations, IRS processes data, maintains customer accounts, responds to account questions, and collects money owed. We agree with the Commissioner’s functional comparison and believe that, while there are significant differences between IRS and these private sector businesses, IRS may benefit from using private collectors as a part of its portfolio of collection programs, and it is reasonable to assume that IRS could learn from their best practices as it works to resolve long-standing problems with its debt collection activities. 
My testimony today, which is based on past reports and ongoing work, discusses the debt collection challenges facing IRS and the potential benefits of involving private parties in the collection of tax debts. A number of long-standing problems have complicated IRS’ efforts to collect its accounts receivable. Of foremost concern is the lack of reliable and accurate information on the nature of the debt and the effectiveness of IRS collection tools. Access to current and accurate information on tax debts is essential if IRS is to enhance the effectiveness of its collection tools and programs to optimize productivity, devise alternate collection strategies, and develop programs to help keep taxpayers from becoming delinquent in the first place. Without reliable information on the accounts they are trying to collect and the taxpayers who owe the debts, IRS agents generally do not know whether they are resolving cases in the most efficient and effective manner, and may spend time pursuing invalid or unproductive cases. Of the approximately $200 billion currently in the IRS accounts receivable inventory, IRS data show that approximately $63 billion represents taxes that, although they have been assessed, may not be valid receivables, but rather are “place markers” for compliance actions. For example, under IRS procedures, when IRS’ information return matching process identifies a taxpayer who received a Form W-2 but did not file a tax return, IRS creates a return for the taxpayer. Generally, this is done using the standard deduction and single filing status, and often results in the taxpayer owing taxes. IRS then sends balance due notices to the taxpayer reflecting the amount of taxes owed as calculated by IRS—to encourage the taxpayer to file a return with the correct tax amount owed. If the taxpayer does not subsequently file the return, IRS records the amount it calculated as taxes due and generates a receivable. 
However, when contacted by IRS collection staff, the taxpayer may demonstrate that either no tax or a lesser amount of tax is actually owed. To more efficiently account for and collect money actually owed to the government, IRS would have to be able to differentiate these IRS-calculated accounts from those where there is an acknowledged balance due. IRS captures some collection-related data in its Enforcement Revenue Information System (ERIS) and other computerized systems. However, IRS has noted in the past that there are questions regarding the accuracy of the data produced by these systems. The age of the debts in IRS’ accounts receivable inventory is also problematic. IRS’ inventory of tax debt includes delinquent debts that may be up to 10 years old. This is because there is a 10-year statutory collection period, and IRS generally does not write off uncollectible delinquencies until this time period has expired. As a result, the receivables inventory includes old accounts that may be impossible to collect because the taxpayers cannot be located, or are deceased, or the corporations are defunct. Of the over $200 billion total receivables inventory as of September 30, 1995, IRS data show that about $38 billion was owed by either deceased taxpayers or defunct corporations. Out of a total of 460 accounts receivable cases that we reviewed in our audit of IRS’ 1995 financial statements, IRS identified 258 as currently not collectible; 198 of these cases represented defunct corporations, while the remaining 60 cases represented entities that either could not pay or could not be located. These cases represented $12 billion of the $26 billion included in accounts greater than $10 million. The age of the receivable does not reflect the additional time it took for IRS to actually assess the taxes in the first place. Enforcement tools, such as IRS’ matching programs and tax examinations, may take up to 5 years from the date the tax return is due until IRS finally assesses the additional taxes. 
This reduces the likelihood that the outstanding amounts will be collected. The age factor significantly affects the collectibility of the debt because, as both private and public sector collectors have attested, the older the debt, the more problematic collection becomes. Because of these and other factors, IRS considers many of the accounts in the inventory to be uncollectible. Specifically, IRS has estimated that only about $46 billion of the $200 billion inventory of tax debt as of September 30, 1995, was collectible. Most individual taxpayers are wage earners who have taxes withheld from their wages. Taxpayers with nonwage income are required to calculate their projected income and make estimated tax payments to IRS during the year. According to IRS data, the average tax delinquency for taxpayers with primarily nonwage income was about 4 times greater than that for wage earners—$15,800 versus $3,600. IRS data also show that, at the end of fiscal year 1995, about $75 billion, or 74 percent of the $101 billion in IRS’ inventory of tax debts owed by individuals, was owed by taxpayers whose income was primarily nonwage. IRS’ collection process was introduced several decades ago, and although some changes have been made, the process generally is costly and inefficient. The three-stage collection process—computer-generated notices and bills, telephone calls, and personal visits by collection employees—generally takes longer and is more costly than collection processes in the private sector. While the private sector emphasizes the use of telephone collection calls, a significant portion of IRS’ collection resources is allocated to field offices where personal visits are made by revenue officers. IRS has initiated programs and made procedural changes to speed up its collection process, but historically it has been reluctant to reallocate resources from the field to the earlier, more productive collection activities. 
IRS’ fiscal year 1997 budget request states that, although “these positions still comprise the lion’s share of IRS’ enforcement efforts, they also represent on the margin the least efficient use of IRS resources.” Due to budget cuts, however, IRS is in the process of temporarily reassigning about 300 field staff to telephone collection sites to replace temporary employees who were terminated. Upgrading its computer systems is another challenge facing IRS. IRS is in the midst of a massive long-term modernization effort—Tax Systems Modernization (TSM)—that if successful would, among other things, help IRS to better collect tax debts by providing its collectors with on-line access to information they need, when they need it. Modernized systems would also help provide the management information needed to evaluate the effectiveness of collection tools and the ability to adopt flexible and innovative collection approaches. Existing IRS computer systems do not provide ready access to needed information and, consequently, do not adequately support modern work processes. Although TSM is not expected to be completed any time in the near future, IRS has started to automate some collection activities. For example, IRS is currently developing an automated inventory delivery system that is intended to direct accounts, based on internally developed criteria, to the particular collection stage where they can be processed most efficiently and expeditiously. This system, which IRS plans to test in July 1996, is intended to move accounts through the collection process faster and cheaper than under the current system. Another effort under way involves the automation of certain field collection tasks. These tasks, like many in IRS, have for years involved the manual processing of paper, which has resulted in IRS field collection employees spending significant amounts of time on routine administrative duties. 
The Integrated Collection System (ICS) is a computer-based information system that is intended to automate some of the labor-intensive tasks performed by field revenue officers. While this effort is not a major technological advancement, it will be a step toward helping IRS employees be more productive by spending their time on more effective and efficient collection-related activities. Basic automation is a given in today’s business environment, and if IRS is to operate like a private sector business as it says, systems that automate basic work processes are a must. According to IRS, implementing this system in two pilot districts has resulted in increased collections, faster case closings, and less time spent on each case. IRS employees using the system were also very supportive of it and enthusiastic about its benefits. The system is currently operating in six districts, and IRS plans to roll it out in three additional districts this year. According to IRS, further implementation is dependent on future funding and final measurements of productivity. Many private and governmental entities are involved in debt collection. We believe that these entities offer the potential for improving IRS debt collection practices. For example, as is being tried currently, there may be a role for private debt collectors in collecting federal tax debt. A pilot program now getting under way involves the use of private law firms and debt collection agencies to help collect delinquent tax debts. In May 1993, we recommended that IRS test the use of private debt collectors to support its collection efforts. IRS had looked into testing the use of private collectors as early as 1991, but had not carried through with any of its plans. IRS issued a request for proposals from prospective participants in the pilot program on March 5, 1996. The proposals were due by April 12, 1996, and the pilot is to last 1 year. 
Under the pilot, the private collectors are to attempt to first locate and then contact delinquent taxpayers, remind them of their tax debt, and inform them of available alternatives to resolve the outstanding obligation. An important limitation of the pilot is that the private collectors will not be able to actually collect the taxes owed; rather, the intent is for them to facilitate information exchange and contacts between IRS and the taxpayer. There is an OMB policy determination and IRS Office of Chief Counsel guidance that specify that the collection of taxes is an inherently governmental function that must be performed by government employees. Private collectors, however, can perform collection-related activities, such as locating taxpayers and attempting to secure promises to pay. In addition, the private collectors will face some of the same problems in working the pilot cases that IRS employees face. First, these are not new cases. All will have already gone through much of IRS’ collection process, and in some cases, the entire process. This means, in effect, that some of the cases may have been in the accounts receivable inventory for up to 10 years, and some may involve even earlier tax years. The cases may also contain some of the other information problems we discussed previously. Even so, the pilot may yield information that IRS can use to improve its collection programs. The private collectors will be bound by the same taxpayer rights and disclosure considerations as apply to IRS employees. Other useful information could also be obtained from the pilot. For example, IRS could learn what actions are most productive based on the type of case, type of taxpayer, and age of the account. For the information to be useful to IRS and Congress in evaluating the pilot, however, the sample of cases must be drawn and the data captured in such a way that the appropriate analyses and tests can be done. We have not analyzed IRS’ methodologies for selecting its sample of cases or for evaluating the pilot. 
IRS faces many challenges in its efforts to improve the management and collection of its accounts receivable. The key is to find solutions to the major problems we previously discussed and their underlying causes that affect IRS’ ability to collect more delinquent taxes. Solutions will take time because the problems are pervasive and may involve all IRS functions and processes. Currently, IRS is making some changes to its collection process as a part of its modernization effort. We reported in the past that private collectors and states that are engaged in collection activities similar to IRS’ may provide some best-practice examples for IRS to use in benchmarking its efforts. Many states use private collectors to supplement their own collection programs, thereby taking advantage of private sector capability in managing receivables, gaining access to better technology, or avoiding the expense of hiring permanent staff. Although many states—including 33 of the 43 states that responded to our survey—have used private collectors, their experiences have varied widely. On the basis of the states’ experiences, IRS can expect some increased collections from its proposed pilot, but not necessarily a significant windfall. IRS may, however, benefit and learn from the private companies’ collection techniques and use of technology. IRS faces significant challenges in collecting tax debts. As we have previously recommended, because the problems are pervasive across all IRS activities and processes, IRS needs to develop a detailed and comprehensive long-term plan to deal with the major challenges it faces and their interrelationships. With such a plan, IRS could better assure itself and Congress that it is on the right track and thereby better position itself to obtain the backing and support it needs. Key to improving IRS’ collections of tax debt is the need for up-to-date and accurate information as well as modern equipment and technology. 
IRS also needs to determine the most cost-effective ways to prevent delinquencies from occurring, as well as what it can do in its return, payment, and compliance processes to reduce the number of invalid accounts entering the collection process. To stay competitive in today’s business environment, IRS must continually strive to improve collections by testing new and innovative approaches. Madam Chairman, this concludes my prepared statement. I would be pleased to answer any questions.
GAO discussed the Internal Revenue Service's (IRS) tax debt collection practices. GAO noted that: (1) each year, billions of dollars in taxes remain unpaid; (2) impediments to improving tax debt collection include the lack of accurate and reliable accounts receivable data and effective collection tools and programs, a backlogged receivables inventory, outdated collection processes, and antiquated computer systems; (3) some accounts receivable may be overstated, not valid, or owed by deceased or unlocatable taxpayers and defunct businesses; (4) IRS is modernizing its information and processing systems, but these actions will not be completed for several years; (5) although IRS use of private debt collectors could increase tax collections by locating and encouraging taxpayers to pay their delinquent taxes, they cannot actually collect taxes; (6) some states have successfully used private debt collectors to increase their delinquent tax collections; (7) IRS accounts receivable have been designated a high-risk area, but IRS cannot make major changes in its business operations by itself; (8) IRS needs a comprehensive strategy to guide its efforts to improve tax debt collections, starting with having accurate and reliable information; and (9) IRS could adopt private industry practices and use private debt collectors in some collection-related activities.
The discussion summarized in this report should be interpreted in the context of two key limitations and qualifications. First, the panel was only an initial step in a possible long-term, evolving effort to develop and sustain discussion on ATC modernization. As such, it brought together generalists, rather than specialists, to address broad themes and consider how to organize a more comprehensive approach. Because our scope was limited, we could not include a large number of leading experts, institutions, and networks involved in specialized efforts. Furthermore, although many points of view were represented, the panel was not representative of all potential stakeholders. Second, even though we, in cooperation with the National Academies, conducted preliminary research and heard from national experts in their fields, a day’s conversation cannot represent the current practice in this vast arena. More thought, discussion, and research are needed to develop greater agreement on what we really know, what needs to be done, and how to do it. These two key limitations and qualifications provide contextual boundaries. Nevertheless, the panel provided a rich dialogue on ATC modernization, and the panelists developed strong messages in responding to each of the three questions. Those messages are highlighted below. The panelists attributed many of the ATC modernization program’s chronic problems to cultural and technical factors. In particular, they cited resistance to change at all levels within the agency and insufficient technical expertise as key factors impeding modernization. They identified multiple, currently available options for addressing these factors. For example, one panelist noted that FAA has been “very resistant” to having private organizations, rather than FAA, develop new procedures and systems for FAA to approve and institute. 
Several panelists saw resistance to change as a consequence of federal employment—of the security that comes from having a regular paycheck, cost-of-living pay increases, and protections against layoffs. A government organization is insulated from the economic pressures that the private sector faces, one of the panelists indicated. In his view, federal employees do not have the firsthand experience with layoffs and business failures to understand, as private aviation industry employees do, why improvements to the ATC system’s efficiency are needed to help revitalize the struggling aviation industry. Other panelists emphasized the reluctance of management to change. According to a panelist with experience in restructuring a foreign air traffic organization, the senior and middle managers could not or would not adjust to the change and had to be let go within the first 2 years. The other employees also had difficulty adjusting and were still adjusting in some respects, he said, but getting management on the right page was the real challenge. Another panelist emphasized that cultural change starts at the top and questioned why the ATO’s new COO had, according to the panelist’s count, replaced only two top managers in the ATO and simply reassigned other managers. Still another panelist suggested that cultural change within the ATO alone would not be sufficient to ensure the ATO’s success, because so much of the ATO’s fate depends on other organizations, including FAA, DOT, the Office of Management and Budget (OMB), and Congress. A number of panelists described the air traffic controllers’ union as also resistant to change. According to one panelist, for example, the union delayed the adoption of technologies such as the User Request Evaluation Tool (URET) because some controllers saw such technologies as a threat to the union’s membership. 
Another panelist cited the union’s long-term opposition to the implementation of a software program that tracks productivity—a key measure for a performance-based organization. The union is “very political,” several panelists asserted, and one panelist charged that it was “hindering the progress” of a performance-based organization. Resistance to change can be an issue outside FAA as well as within it, panelists noted. For example, one panelist questioned how much support the ATO was getting from DOT, OMB, and congressional committees for changing “some extremely entrenched political fiefdoms.” Another panelist said that he had found the congressional authorizing committees amenable to changes, but the appropriators liked things the way they were. One panelist stressed the importance of a clear vision, so that employees understand what is expected of them, how they fit into the strategy, and what the vision is for their organization. In addition to having a vision, another panelist said, it is important for the ATO to tie that vision to the user constituency, not confine it to the agency. FAA cannot do everything alone from the inside, because airplanes and airports, for example, need to be equipped with the technologies that will help realize the vision. Employing a team concept could help overcome resistance to the implementation of new technologies, according to another panelist. Putting engineers, finance people, controllers, and electronic technologists together, all on the same team, he said, could unite them as they moved through the stages of implementation. Then, when the time came to field a technology, the focus would be on getting it up and running and operating safely—not, the panelist implied, on obstructing its implementation because it might threaten jobs. A change in management’s approach could go a long way toward overcoming controllers’ and other employees’ resistance to change, one panelist noted. 
One foreign air traffic organization changed its whole approach to the unions and the staff, started talking to them as people, and began executing “participative working” programs, according to the panelist. Union representatives and managers take the same courses together and address issues of affordability together, he said, and, as a result, controllers’ pay has increased, costs have dropped, and productivity has risen. The key to these positive results, he said, is psychological change—managers have stopped seeing employees as a problem and have started to see them as part of the solution. According to other panelists, however, people find it very difficult to change, and the only way to bring about a cultural transformation is to replace those who resist change, either by allowing them to retire or by hiring others to take their places. In the corporate world, one panelist observed, a new executive brings in a new management team to support a cultural turnaround. The new team is then loyal to the new executive. In the view of this panelist, the COO’s hiring of only two new managers and reassignment of other managers would not be sufficient to turn the ATO’s culture around. Another panelist further noted that an executive in the private sector replaced the top 200 people in his organization to achieve the transformation he was seeking. Technical as well as cultural factors have impeded ATC modernization, according to several of the panelists. In the words of one speaker, FAA does not have “the engineering technical capability to deal with an extremely complex, highly nonlinear adaptive system that's got technical safety risk as a key technical parameter.” According to another panelist, FAA does not apply rigorous systems engineering expertise early in nonadvocate technical reviews of project proposals to scrub them for potential issues. 
As a result, a number of FAA’s programs—including complex ones such as the Wide Area Augmentation System (WAAS), as well as more “straightforward” ones such as the Standard Terminal Automation Replacement System (STARS) and the Next-Generation Air-to-Ground Communications System (NEXCOM)—had fundamental system engineering technical issues that were not identified early in the program. The risks were not mitigated, and the programs experienced significant cost growth and schedule increases. “The system engineering organization in FAA is nothing more than a process organization,” another panelist said. “The power resides with the program manager. It doesn’t matter what the systems engineering people do, their job is to keep doing plans and processes. They think that meetings are products.” Another panelist observed that FAA depends heavily on contractors, who may do a good job, but who certainly have a different motivation from FAA. As this panelist put it, FAA lacks a rudder, in a technical sense, for modernization. To help address its lack of technical expertise, panelists suggested, the ATO could obtain advice from an independent board or information from other countries on technologies that they have already adopted. The panelists proposed some immediate steps that the ATO could take to address this deficiency, including the following: A technical advisory board made up of system engineers could review proposals for FAA and demand the kinds of data and tests needed to scrub the proposals and identify any big roadblocks. Hiring skilled engineers instead of relying on contractors might enable the ATO to develop systems more economically and efficiently, one panelist suggested. This panelist described how a foreign air traffic services organization develops new ATC systems in-house and seldom uses contractors. It now uses its engineers to build systems rather than to manage contractors. As a result, he said, it is now developing the systems it needs faster and at less cost. 
Maximizing the use of commercial inputs was the recommendation of another panelist, who said that FAA needs to get out of the business of designing systems. According to him, most companies no longer develop their own large, complex systems; instead, they get other people to do that for them in the private sector. Another panelist also emphasized the availability of technical expertise in the private sector. However, according to a third panelist, commercial systems have a shorter economic service life than the systems that FAA designs. The ATO could profitably take advantage of the experiences of other countries’ air traffic organizations, which are technically as good as FAA ever was or ever will be, one panelist said. He maintained that the ATO should institute “a fundamental requirement and a cultural expectation” that it will review existing technologies before it buys or tries to develop its own. With a multibillion-dollar budget for software and other information technology, he said, the ATO has ample opportunity to save money. In his opening remarks and in responding to panelists’ questions, the ATO’s COO made a number of observations on FAA’s culture. He also noted that FAA plans to train or hire people with needed skills to address shortfalls in technical expertise. The following summarizes some of his key observations on FAA’s culture and provides additional information from previous GAO reports and work in progress on how FAA is addressing some of the cultural and technical factors panelists identified as affecting ATC modernization: Recognizing that cultural factors can play a critical role in an organization’s success, the ATO has initiated organizational changes that are designed to create a foundation for cultural change and deliver benefits to customers efficiently. 
For example, the ATO established collaborative teams of technical experts and ATC system users; reorganized air traffic services and the research and acquisition organization along functional lines of business to bring stakeholders together and integrate goals, as well as reward cooperation by linking investments to operations; reduced layers of management from 11 to 7 to help address the hierarchical nature of the organization; and conducted an organizationwide activity value analysis to determine the full range of activities that ATO headquarters is engaged in, the value customers place on those activities, and the potential for conducting any of those activities more effectively and efficiently. Employees, for their part, signaled support for change, indicating a desire to improve the flow of information within the agency by sending a large number of detailed e-mails in response to a call for recommendations to improve internal communications. In the past, according to the ATO’s COO, FAA’s management culture was “intensely hierarchical, risk averse,” and “reactionary.” But now, he said, FAA is attempting to foster “results-focused, proactive and innovative behavior.” Changing the agency’s leadership model is also designed, he said, to replace a “personality-driven culture” with a sustainable, stable, viable organization that can make rational decisions that transcend changes in political leadership. The ATO is trying to better align FAA’s priorities and stakeholders’ interests by developing a strategy map that captures the outputs desired by the ATO’s owners and customers, along with the outputs that must be achieved. Called the Strategic Management Process, this effort borrows heavily from a private-sector model and uses the ATO’s strategic goals and objectives to drive investment decisions. According to FAA, the strategy map will enable owners and customers to clearly understand both the services that the ATO is providing and the effects of products in development on those services. 
As a result, FAA says, future budgetary conversations will revolve around the desired level of service, instead of focusing on a product, as past discussions typically did. According to FAA, the Strategic Management Process will ensure linkage between FAA’s operating and capital budgets. To become a “performance-based organization” and identify customer groups and their service needs, the ATO created “value-based” performance metrics; that is, it defined its performance in terms of customers’ needs and connected efforts to satisfy those needs with cost. Ultimately, the ATO wants to know how much every unit of output costs so that it can allocate and compare costs and measure productivity. Thus, each organizational unit and facility is developing applicable metrics for performance so that the ATO can compare costs, identify factors that affect costs, and use this information to improve performance. For example, each en route facility is determining its hourly cost to control flights. The ATO can then compare and analyze these costs to identify positive and negative factors affecting performance and productivity. FAA’s controller workforce plan is a response to a congressional mandate, based on a recommendation we made in 2002, that FAA develop a plan for addressing an impending wave of controller retirements and deal with productivity issues. The panelists identified and discussed the impact of funding constraints and the federal budget process on ATC modernization. In their view, the most immediate issue is a critical shortage of funds to meet the current modernization program’s plans and users’ demands. Additionally, they said, the federal budget process is slow, inflexible, and influenced by the political process; annual appropriations are uncertain and discourage planning; and the budget fails to show investment priorities and relationships between FAA’s capital and operating budgets. The panelists suggested a number of steps that the ATO could currently take to address these challenges. 
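The unit-cost comparison described above, in which each en route facility determines its hourly cost to control flights and the ATO compares facilities against one another, can be sketched in code. The facility names and figures below are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of a "value-based" unit-cost metric: each en route
# facility's hourly cost to control flights, compared to the systemwide
# average. All facility names and figures are invented for illustration.

facilities = {
    # facility: (annual operating cost in dollars, annual flight-control hours)
    "Center A": (180_000_000, 600_000),
    "Center B": (95_000_000, 250_000),
    "Center C": (140_000_000, 520_000),
}

def hourly_cost(cost, hours):
    """Unit cost: dollars spent per hour of flight-control service."""
    return cost / hours

unit_costs = {name: hourly_cost(c, h) for name, (c, h) in facilities.items()}
average = sum(unit_costs.values()) / len(unit_costs)

# Rank facilities so outliers (factors driving cost) can be investigated.
for name, uc in sorted(unit_costs.items(), key=lambda kv: kv[1]):
    flag = "above" if uc > average else "at or below"
    print(f"{name}: ${uc:,.0f}/hour ({flag} the ${average:,.0f} average)")
```

Comparing each facility's unit cost to the average is one simple way to flag where the positive and negative factors affecting productivity might be found.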
Panelists viewed the ATO’s apparent fiscal shortfall—which one panelist said would amount to a 20 percent deficit in 4 years—as a severe challenge. In terms of operations, the panelist said, this deficit was more likely to have a gradual than an immediately catastrophic onset. He did not expect to see major system outages but predicted, instead, “a slow but sure increase in delays.” However, as another panelist said, if the ATO did not carefully analyze demand and determine how that demand could be served, the ATO would find itself facing what a third panelist referred to as a “perfect storm,” reiterating a term the ATO itself has used. Severe reductions in the funding for ATC modernization, if required to address the currently projected shortfall, could exacerbate what one panelist described as the government’s traditional underfunding of the ATC system’s capital requirements. According to this panelist, the government undercapitalizes any complex, rapidly evolving operational system, including the ATC system, and overestimates the economic service lives of information technology investments. Whereas the government typically assumes such investments will last for 15 years, he said, a 7-year estimate would be more reasonable. Some panelists questioned whether the ATO would be willing to publish financial projections showing large deficits. They suggested, for example, that FAA and DOT officials might be unwilling to publicly release data that could raise questions about their management. Several panelists maintained that the federal budget cycle is too long and inflexible to meet the needs of an ATC system. According to one panelist, it is “impossible” to run the U.S. ATC system within the classic federal structure. Such a “dramatic,” “dynamic” system requires “more managerial freedom, much more day-to-day, week-to-week, month-to-month decision-making,” he said. 
The federal budget process freezes plans for the system 12 or 18 months in advance, but for an ATC system to succeed, “you’ve got to be 12 or 18 days ahead.” The budget procedure requiring that capital investments be funded out of annual appropriations means that major acquisitions generally take many years to implement and projects may continue to be implemented even after they have outlived their usefulness. Particularly when annual appropriations fall short, panelists noted, projects’ development and deployment may be delayed and their costs may increase with time. Furthermore, until the acquisitions are completed, the benefits of the new technologies are deferred, aging equipment may pose risks to users, and outdated software may require costly upgrades. By the time the acquisitions are fully deployed, panelists said, they may be out of date. One panelist observed, for example, that smaller facilities are expected to field the same technology as New York, despite resource constraints and major differences in air traffic demand. The political process influences budget decisions in the administration as well as in Congress, some panelists said. According to one panelist, Congress has generally supported FAA’s modernization program, but funding difficulties have ensued because the budget is consolidated and there are always pressures on it. Another panelist added that the ATO would have difficulty “delivering the bad news”—that is, publishing a business plan that projects deficit scenarios—unless revenue increases are forthcoming. According to this panelist, OMB would deny any requests for increased funding and would, instead, tell the ATO to find another way of doing business. Panelists noted that funding from annual appropriations is uncertain, and that this uncertainty is incompatible with strategic and capital planning. The amount of money available for appropriation each year cannot be predetermined, one panelist said, and the size of the appropriation may vary from year to year. 
This uncertainty focuses attention on which technology will receive the funding (the inputs) rather than on what improvements in safety or capacity the technology is supposed to deliver (the outputs), he said. In debating whether this investment or that investment should receive funding, planners have lost sight of the big picture, he suggested, and the ATO has spent most of its capital investment dollars on sustaining and maintaining existing systems. Only about 14 percent of its expenditures, he recalled, were for flight enhancement. “Who anywhere would have a capital investment plan that was predominantly about standing still?” he asked. Another panelist also considered the federal budget process incompatible with strategic planning. In his words, “it is absolutely a problem at FAA” that “budget drives strategy and strategy does not drive budget.” Although FAA is good at forecasting demand, he said, it does not evaluate “the anatomy of demand” and determine how that demand will be served. Panelists noted, for example, that the numbers of regional jets, low-fare airlines, and unmanned aerial vehicles are increasing, but FAA has not developed a business model or plans for managing the increased air traffic. FAA’s budget requests go to Congress piecemeal, one panelist said, rather than integrated, organized, and periodically revised. Another panelist observed that FAA asks for more than it can get and then carries the difference over from year to year, creating “a bow wave” of unfunded requests for capital projects that it seldom reduces. Furthermore, as a third panelist pointed out, the budget process establishes incentives for unrealistic planning: Project managers first overpromise capabilities and underestimate costs to increase the chances that new projects will be accepted. Then, after projects are accepted, they overestimate costs because they assume their requests will be cut. 
Although managers could include options in their budget submissions to indicate what could be accomplished at different funding levels, they do not do so because they assume items identified as options will be cut. Finally, managers are reluctant to revise ongoing projects because they do not want to be seen as fickle. By contrast, another panelist said, a private company that operates under a board of directors and obtains revenue from customers does not have incentives to play budget games to get projects approved. “Your money is your own money,” he said. Some panelists criticized the federal budget for failing to show priorities and relationships among proposed investments. In the budget, one panelist said, “everything is as important as everything else.” Another panelist observed that the budget sets no capital investment priorities. According to a third panelist, a line item budget tears apart a highly layered, interdependent system and does not reveal synergies between projects. Then, when the budget request goes to Congress, he said, “you have no opportunity to try to explain to anybody the interconnections of these programs.” As a result, when the appropriators decide not to fund a project, they may not understand how their decision will affect other projects. One panelist argued that the firewall between FAA’s capital and operating budgets discourages analyses of life-cycle costs and may lead, in some instances, to investments in technologies that end up in a warehouse because the ATO cannot afford to operate them. Similarly, another panelist observed that the separation of capital and operating costs in FAA’s accounting system makes it difficult to see the implications of capital investment decisions for operating costs, even though “everything we put in the field winds up increasing the ops budget.” Furthermore, as another panelist noted, the firewall makes it difficult to see the relationship between software replacement (capital) and maintenance (operating) costs. 
Thus, decisions to postpone purchases of new or upgraded software may save capital investment costs, but rising maintenance requirements may increase operating costs. Eventually, he said, the maintenance costs may “far exceed” the replacement costs. Finally, other panelists said, the budget is not integrated to show what investments buy in terms of productivity, safety, or environmental benefits, and FAA’s capital budget fails to show the impact of investments on the country. This can lead to mismatches—that is, to funding projects that will provide limited benefits for users. While recognizing the magnitude of the ATO’s projected funding shortfall over the next few years, the panelists identified a number of steps that the ATO could take to address its current financial situation. These steps included accepting the budget process as it is and reducing spending to match revenues, developing strategies for presenting the ATO’s budget request more clearly to Congress, implementing regulatory and procedural changes to allow the use of existing cost-saving technologies, contracting with the private sector to provide certain air traffic services, and obtaining information on other countries’ ATC technologies and on international technical standards. One panelist urged the ATO to determine which programs can realistically be funded and to review and cut its programs in light of the current budget constraints. This panelist also recommended looking at longer term alternatives to annual appropriations that are available within the government and work well for other organizations, such as “working capital accounts and all kinds of industrial funding schemes.” Another panelist encouraged the ATO to focus its capital investment on avoiding outages—that is, on replacing equipment that would otherwise fail. This panelist also said that FAA needs a customer-oriented business strategy and a business plan. 
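The maintenance-versus-replacement tradeoff panelists described, in which deferring a capital purchase lets maintenance costs on the aging system grow each year, can be sketched as a simple multiyear cost comparison. All figures in this sketch are hypothetical, assuming a fixed replacement cost and a constant annual growth rate in maintenance:

```python
# Hypothetical sketch of the capital/operating tradeoff panelists described:
# deferring a software replacement saves capital dollars now, but maintenance
# on the aging system grows every year. All figures are invented.

def total_cost(years, replace_now, replacement_cost=10.0,
               base_maintenance=1.0, growth=0.25, new_maintenance=0.5):
    """Total cost (in $ millions) over `years`.
    replace_now=True: pay the capital cost up front, then flat low maintenance.
    replace_now=False: keep the old system; maintenance grows by `growth`/year.
    """
    if replace_now:
        return replacement_cost + new_maintenance * years
    return sum(base_maintenance * (1 + growth) ** y for y in range(years))

for horizon in (5, 10, 15):
    keep = total_cost(horizon, replace_now=False)
    replace = total_cost(horizon, replace_now=True)
    print(f"{horizon} years: keep old = ${keep:.1f}M, replace = ${replace:.1f}M")
```

Under these assumed numbers, keeping the old system looks cheaper over a short horizon, but over longer horizons the compounding maintenance costs far exceed the one-time replacement cost, which is the pattern the panelist warned about.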
One panelist, who observed that operating costs account for about three-quarters of the ATO’s total costs, suggested that the upcoming wave of air traffic controller retirements would create “an opportunity to redistribute and even to trim the work force in some areas,” as well as reduce personnel costs by offering incentives for early retirement. Improving controllers’ productivity would be another way to save money, a fourth panelist said, but he characterized his suggestion as one that would “touch the third rail of aviation politics.” Another panelist emphasized the importance of starting to plan now to accommodate the airplanes that are being bought today to provide service for the next generation, which he variously estimated at 20, 30, or 40 years. Panelists also suggested ways to present the ATO’s budget request more clearly to Congress. For example, one panelist said that the ATO needed to understand the interconnections between ATC systems and break the big picture into nuggets so that it could clarify for the appropriators why they should not break apart the ATO’s capital investment plan and selectively fund only some components. Another panelist maintained that the ATO could mitigate the effects of the firewall between its capital and operating budgets by modifying its budget submissions to show the future cost implications of current investment decisions. Several panelists identified options outside the budget process that the ATO could pursue under its current authorities. They said, for example, that the ATO could pursue procedural and regulatory changes that would take advantage of existing technologies to increase capacity, pilot test contracts with the private sector to provide certain air traffic services, and obtain information on technologies and procedures developed in other countries that could be used in the United States. 
Regulatory and Procedural Changes Could Allow the Use of Existing Technologies to Enhance Capacity and Efficiency

Several panelists discussed the potential benefits of a more widespread use of a concept called area navigation (RNAV), which allows operators of properly equipped aircraft to use onboard navigation capabilities to fly desired flight paths without requiring direct flight over ground-based navigation aids. This provides for more direct routing, avoiding suboptimal routes prescribed by conventional “highways in the sky” that are defined by point-to-point flying over ground-based navigation aids. The RNAV concept and a major new method for exploiting it, called required navigation performance (RNP), permit flight in any airspace as long as aircraft have been certified to meet the required accuracy level for navigation performance. RNAV and RNP hold promise for saving system users time and money—largely by reducing flight times and fuel consumption by allowing users to fly shorter routes or avoid bad weather. In addition, RNAV and RNP could potentially increase the capacity of the ATC system to handle air traffic by reducing the required distance (separation) between aircraft equipped with advanced navigation capabilities if the aircraft can safely operate closer to one another than FAA’s regulations currently allow. However, operators have not been able to use these capabilities to their full potential in the United States because FAA has not approved procedures for their use. The airlines are “crying for” FAA to approve RNP, one of the panelists said, because aircraft equipped with RNP capabilities could then fly alternative rolling, moving routes to avoid weather delays. Service would improve for travelers, and the airlines would avoid the substantial costs of delays, he said. Implementing RNP could also eventually lower the ATO’s costs, another panelist said, since RNP does not require any ground equipment. 
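The RNP concept, under which an aircraft is certified to keep its navigation error within a stated bound, can be illustrated with a toy containment check. This sketch uses the common convention that an "RNP x" capability means staying within x nautical miles of the intended track at least 95 percent of the time; the error samples are invented, and this is an illustration rather than any FAA certification criterion:

```python
# Toy illustration of an RNP-style containment check (not an FAA procedure).
# An "RNP x" capability is commonly described as keeping cross-track error
# within x nautical miles at least 95 percent of the flight time.

def meets_rnp(cross_track_errors_nm, rnp_value_nm, containment=0.95):
    """Return True if the fraction of error samples within the RNP bound
    meets or exceeds the required containment level."""
    within = sum(1 for e in cross_track_errors_nm if abs(e) <= rnp_value_nm)
    return within / len(cross_track_errors_nm) >= containment

# Invented cross-track errors (nautical miles) recorded on one flight.
samples = [0.02, -0.05, 0.11, 0.08, -0.30, 0.04, 0.01, -0.09, 0.06, 0.03]

print(meets_rnp(samples, rnp_value_nm=0.3))   # all 10 samples within 0.3 nm
print(meets_rnp(samples, rnp_value_nm=0.1))   # only 8 of 10 within 0.1 nm
```

Because the performance requirement is carried by the aircraft rather than by ground equipment, a tighter RNP value is what would let suitably certified aircraft fly closer routes, which is the capacity argument the panelists were making.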
RNP technologies have been installed on larger aircraft for so long that some aircraft equipped with the technologies have already been retired to the desert, one panelist said. In addition, pilots have been trained to use the technologies, and the technologies are already being used in some other countries, including Canada, where a private airline company (WestJet) developed implementation procedures in collaboration with the Canadian ATC regulatory agency and the Canadian air traffic management organization. As a first step toward obtaining FAA’s approval of procedures for using RNP, a panelist said, the ATO could make policy announcements to set a tone and direction. These announcements would enlist the user community’s support at little or no cost to the ATO, give the ATO an early success, and help tie customers to the ATO’s mission. However, he also cautioned, it would be important for FAA to implement RNP in a way that did not “disenfranchise” general aviation interests and regional carriers whose aircraft are not already equipped with RNP technologies. Two panelists expressed concerns about the government’s approach to regulating the use of onboard navigation equipment and the associated procedures needed to implement RNP. According to one of these panelists, FAA has “the wrong conceptual framework” for developing regulations to implement new procedures. Its current approach is disproportionate, he said, because it establishes the same safety standards for aircraft of all sizes. “We can’t keep treating airplanes that need 100 cubic miles of airspace the same from a cost and benefit point of view as airplanes that need a quarter cubic mile of airspace,” he said. In his view, FAA needs to revise its approach to assessing and balancing risks. He maintained that the role of regulatory management in the evolution of the ATC system has been underestimated and called for significant investment in understanding risk management. 
The other panelist who expressed concerns about the government’s regulatory approach argued that navigational technology is evolving and shifting from ground-based to cockpit-based systems. He maintained that “you’ve got to get aircraft closer and closer together to be able to increase capacity,” and said that the government should allow the ATO to change its policies on aircraft separation to permit “the technology that exists on airplanes today to do the job.” He suggested that the private sector could assume the cost of capitalizing the equipment, but “the government’s got to allow that technology to be used, and it hasn’t.” Although one panelist emphasized the importance of conducting thorough technical evaluations of RNP to identify any roadblocks to its use, the panelists generally considered it a highly promising, low-cost option for the ATO to improve service. One panelist recommended that the ATO create incentives, such as the right to fly in preferred airspace, for users that equip their aircraft with RNP technologies, to lower the ATO’s costs. At the time of the panel, FAA was competing its flight service station services with the private sector under the Office of Management and Budget’s Circular A-76 process, and the private sector could assume those services if the competition were determined in its favor. The panelists, who generally assumed that the private sector could provide flight service station services and other air traffic services more efficiently than the government, suggested that if contracting for flight service station services proved to be effective, FAA could contract for other air traffic services, such as oceanic, night, en route, or airways facilities services. The A-76 process would then serve not only as a way of saving money but also as “a pilot program for how things could get done,” one panelist said. In the view of another panelist, ongoing government oversight would ensure the safety of contracted operations, and “staged outsourcing” of the NAS’s functions might build confidence in the private sector’s ability to provide air traffic services safely and efficiently. 
Obtaining Information on Other Countries’ ATC Technologies and on International Technical Standards Could Help the ATO Save Costs

Obtaining information on technologies and procedures that other countries have already developed could help the ATO control costs, as well as help compensate for its lack of technical expertise, panelists noted. “We should be using and sharing” the technologies that have already been invented, one panelist said. According to his organization, the air navigation service business worldwide spends $3 billion to $4 billion a year on writing code for air traffic management software, and “at least half of that” is writing code for “something that’s already been invented and…works just fine somewhere else.” He suggested that the ATO gather information on the systems that are already running in other countries, their performance, and their cost. Sharing information on technical standards with international organizations could also help the ATO avoid costly investments in technologies whose standards were incompatible with those of other countries. A shared vision is crucial for a globally based air traffic system, one panelist said. If every country or continent had its own technical standards—a North American switch, a European switch, a South American switch, and an Australian switch, for example—an international system could not function effectively. The following provides additional information from the ATO’s COO and from previous GAO reports and work in progress on how FAA is addressing some of the funding shortfalls and features of the federal budget process that panelists identified as affecting ATC modernization: The ATO’s COO believes that good financial management means linking FAA’s capital and operating budgets. Previously, FAA developed separate capital (Facilities and Equipment) and operating (Operations) budgets. But the ATO recognizes that capital expenditures directly affect operating costs over time, and therefore the two budgets must be developed together. 
Creating this linkage is important for the ATO to respond to concerns expressed by its owners and customers as well as to address internal issues, such as training, staffing, pay disparities, and infrastructure. Using the Strategic Management Process to drive budget decisions will help to ensure the establishment and maintenance of a linkage between the capital and operating budgets. The ATO’s longer term goal, according to the COO, is for its managers to make full use of the financial management systems it has been putting in place. As steps toward that goal, the ATO expects everyone to learn the difference between cost and cash flow and to get a better handle on unit costs as better cost accounting data become available. To gain a more complete understanding of its costs, FAA is revising its cost accounting practices and changing from a cash flow to a total cost business model for the ATO, and the ATO is developing management training in cost accounting and budgeting. Moreover, FAA plans to finish putting a new cost accounting system in place by 2006 that will allow it to assign, track, and better control costs. In the fall of 2004, FAA updated its cost estimates in light of OMB’s revenue projections for the next 4 years and arrived at a cumulative shortfall for the period of $5 billion for the operating budget and $3.2 billion for the capital budget. According to FAA, a business plan that the ATO was preparing at that time will show, when completed, how large a funding gap the ATO faces and how far it will have to go to address that gap. Whatever the exact size of the gap, FAA says that it is prepared to identify and eliminate redundancies in the NAS and to review its long-term ATC modernization priorities. FAA has already taken some steps to control the costs of ATC modernization.
For example, it has adopted the phased approach to implementing new ATC systems that it used under Free Flight Phase 1, called “build a little, test a little.” This approach relies on the early and ongoing involvement of stakeholders, who review the progress of new projects regularly and identify critical omissions and “no go” items that would prevent a system from operating as intended. Reviews of three projects with cost, schedule, and performance issues that our reports had identified—the Local Area Augmentation System, Controller-Pilot Data Link Communications, and Next-Generation Air-to-Ground Communications System—led FAA to reduce the funding for them in FAA’s fiscal year 2005 budget request. The ATO says it plans to continue this phased approach to acquiring new systems. FAA is also redesigning the nation’s airspace in phases, in part to provide benefits to customers that do equip with new technologies. Now, during the first phase, FAA is implementing the redesign at very high altitudes. In January 2005, FAA doubled the airspace routes between 29,000 feet and 41,000 feet by spacing aircraft 1,000 feet apart instead of 2,000 feet. The procedure, invisible to passengers, is called Reduced Vertical Separation Minimum and is expected to save airlines $400 million in fuel costs during the first year. As technology allows, FAA says, more flight altitude levels will be added. Currently, FAA is implementing a number of improvements to airspace and procedures using RNP. In addition, according to FAA, five airports are developing RNP-based procedures in partnership with airlines that favor RNP. While recognizing that the ATO could make some progress in addressing its cultural, technical, and budgetary challenges under its current authorities, the panelists generally agreed that structural changes would increase the ATO’s chances of success. These changes, which would give the ATO a more predictable source of funding and greater decision-making authority, would generally require legislative action and take time to implement.
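As a back-of-the-envelope illustration of the Reduced Vertical Separation Minimum change described above, halving the vertical spacing between 29,000 and 41,000 feet roughly doubles the number of usable flight levels. The helper function below is purely hypothetical, written only to show the arithmetic:

```python
# Count usable flight levels between a floor and ceiling altitude,
# inclusive, for a given vertical separation.
# Hypothetical helper for illustration only -- not FAA software.
def usable_levels(floor_ft: int, ceiling_ft: int, separation_ft: int) -> int:
    return (ceiling_ft - floor_ft) // separation_ft + 1

# Pre-RVSM: 2,000-foot spacing between 29,000 and 41,000 feet.
before = usable_levels(29_000, 41_000, 2_000)
# RVSM: 1,000-foot spacing over the same band.
after = usable_levels(29_000, 41_000, 1_000)

print(before, after)  # 7 levels before RVSM, 13 after -- nearly double
```

The extra levels let controllers fit more aircraft into the same block of airspace, which is the capacity gain the report attributes to RVSM.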
To give the ATO a more predictable source of funding, panelists suggested that it be authorized to establish and manage user fees, rather than rely on appropriated tax receipts, and that it be allowed to issue revenue bonds backed by these fees. To give the ATO greater decision-making authority, panelists proposed restructuring it to streamline and strengthen its management and provide its managers with the tools needed to address its challenges. These changes would allow the ATO to implement a “sensible” capital investment program; hire the technical expertise it needs; achieve cost efficiencies; and offer better, more responsive service. Additionally, panelists said, restructuring could resolve the conflict of interest inherent in FAA’s dual responsibility as the regulator and the operator of air traffic services. When Congress authorized the ATO’s creation and generally implemented the Mineta Commission’s organizational recommendations without implementing its funding recommendations, it produced an anomaly—that is, an organization charged with becoming performance-based but deprived of the means to transform itself, according to one panelist. Other panelists also portrayed the ATO as an organization that is charged with operating like a business but is not provided with the management tools available to a business. In their view, the ATO’s chances for success are limited because the COO is being asked to turn the organization around without being given the tools to do so. One panelist, who said he was skeptical about the ATO’s ability to act like a business when it is not really one, suggested that it was only at the margins that the creators of the ATO had replicated a business. According to him, the ATO is still largely a government organization and therefore remains subject to most governmental constraints. 
Replacing airline ticket taxes with a user fee and allowing the ATO, rather than Congress, to manage the collected fees is a step that many panelists considered essential for the ATO’s success. While recognizing that such a fee would be controversial, since the costs for most users would likely increase, the panelists maintained that it would produce a more predictable, reliable funding stream than the annual appropriations process. A user fee, panelists argued, also ties the ATO’s spending directly to its customers’ needs and helps preclude spending for “gold-plated things that don’t affect the true performance of the system and drive the costs up completely unnecessarily.” Without a direct connection to the users and their mission, another panelist said, “evolution takes very unintended and very undesirable paths over long periods of time.” As long as the customers are not directly paying the bills and providing the resources, still another panelist maintained, “it’s going to be very hard to bring about real change” and make the ATO “a customer-driven, customer-servicing organization. The ones who pay the bills are the ones you respond to and serve,” he concluded. While panelists generally favored a user fee system, they urged care in proposing and implementing one. As one panelist said, the fee question, once raised, would be all-consuming and would require the expenditure of political capital. In his view, it was critical that the ATO wait to achieve some successes before seeking a user fee system. Another panelist called for figuring out “not only what problem we’re solving, but what problems we might be likely to create,” and noted that the government would have to consider what it was incentivizing through user fees. For example, if the fee was based on weight, he said, it might “incentivize even smaller planes and more planes,” thereby increasing demands on the ATC system’s capacity.
Another issue that would have to be worked out is how the common costs of air traffic services (e.g., the costs of activities in the ATC system operated by the Department of the Air Force) should be allocated—whether users should pay only for the incremental costs of the services they use, as most users would argue, or whether some cross-subsidies should continue. Another panelist pointed out that implementing a user fee alone would not guarantee efficiency, because the air traffic services provider could simply raise the fee when costs increased and the users would have to pay, since the service is a monopoly. Some method of controlling costs would have to be built into the system, he said. Most panelists correctly assumed that legislation would be required to institute a user fee system, which could be implemented in either a government or a public-private type of air traffic services organization. However, one panelist cautioned, it would be “fatal” to implement the fee in any way that did not make the ATO financially independent of Congress. Once the airlines and general aviation users started to pay a fee to finance the ATO, the ATO should be held accountable to them, he said, and “FAA should not be getting approval from government to spend its budget.” Revenue bonding based on a new user fee stream would create an “alternative to capital starvation,” one panelist said. Even if the user fee stream initially produced no more revenue than the airlines are now paying in aviation-related taxes, he said, the ATO could reap a “transition dividend” during the first 5 or 10 years after the bonds are issued, limiting its annual outlays to the debt service on the bonds.
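The “transition dividend” logic, in which bond proceeds fund capital spending up front while the ATO’s annual cash outlay falls to the debt service on the bonds, can be sketched with the standard level-payment amortization formula. The principal, interest rate, and term below are entirely hypothetical, chosen only to show the shape of the calculation:

```python
# Level annual debt service that retires a bond principal over a fixed
# term at a fixed interest rate (standard annuity formula).
# All figures are hypothetical, for illustration only.
def annual_debt_service(principal: float, rate: float, years: int) -> float:
    return principal * rate / (1 - (1 + rate) ** -years)

# E.g., a $2 billion revenue bond issue at 5 percent interest over 20 years:
payment = annual_debt_service(2_000_000_000, 0.05, 20)
print(f"annual debt service: ${payment / 1e9:.2f} billion")
```

Under this arithmetic, a multibillion-dollar modernization program could be financed up front while the annual outlay stays a fraction of the principal, which is the near-term budgetary relief the panelist described.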
To facilitate the airlines’ recovery, he suggested, the ATO could cut what the airlines pay and “still have a robust modernization program being financed by the revenue bonds.” He characterized this strategy as “money that’s lying on the sidewalk waiting to be picked up” and saw it as an opportunity to buy some new equipment in bulk and get it installed before it becomes obsolete. Such a “sensible” approach would not be possible with annual appropriations, he said. Panelists maintained that the ATO’s organizational placement, combined with its dependence on Congress for funding, limits the COO’s ability to make decisions and take actions. The COO is not a chief executive officer, as one of the panelists observed. Instead, he reports to his “owners”—who include the FAA Administrator and the DOT Secretary, who in turn receive direction from the administration (the President and OMB Director) and Congress. One panelist noted that the subcommittee established to oversee the ATO was not given oversight authority, making the subcommittee purely advisory. Consequently, he said, there is no oversight group that is expected to provide constructive criticism of FAA, and FAA does not get “the kind of constructive advice that you might hope for.” According to a third panelist, Europe’s Performance Review Commission provides such constructive advice for EUROCONTROL, the European air traffic management organization. The commission serves as a panel of independent advisers and costs about $2.5 million a year, he said, and “it’s well worth the investment.” According to several panelists, the ATO’s COO lacks the management tools that would be available to a private-sector CEO. His ability to plan modernization projects, set program priorities, and implement new technologies is constrained because the FAA Administrator, DOT Secretary, and OMB Director can revise his budget request and Congress can make further changes in the ATO’s budget.
In addition, the 20-year vision of the Joint Planning and Development Office (JPDO) is at odds with the ATO’s needs, according to one panelist, because it looks forward to the ATC system of 2025 rather than helping the ATO address its immediate funding needs. Other panelists observed that the controllers’ union influences management’s decisions. The COO also lacks key financial data needed to determine, analyze, and manage the ATO’s costs. When he was “parachuted” into the ATO, as one panelist put it, he did not have the numbers he needed to know where the ATO stood because FAA did not maintain basic information on the costs and value of existing systems, reducing the ATO’s potential to be data driven. As a result, he spent most of his first year overseeing the implementation of a cost accounting system and collecting other key data. Panelists added that the ATO’s ability to hold managers and employees accountable for their performance is constrained because their terms of employment and compensation are based largely on negotiated agreements rather than on performance. In addition, salary caps limit FAA’s ability to pay for technical expertise. As one panelist observed at the end of the panel, the ATO’s creation did not address the structural conflict of interest that exists because FAA is both the regulator and the operator of air traffic services. “We didn’t have arm’s-length regulation of air traffic control in FAA,” he said, “and the ATO didn’t do anything to accomplish that.” Another panelist noted that when his country restructured its air traffic organization, it immediately eliminated the same structural conflict of interest, and “overnight” the regulator became more effective and the operator’s safety performance “significantly improved.” According to the first panelist, other countries that have reorganized their air traffic organizations have also instituted arm’s-length regulation if they did not have it already.
“We remain one of the few places that somehow thinks that self-regulation is a good idea, in spite of sort of overwhelming evidence in lots of arenas that it’s not a very good idea,” he said. The following is additional information from the ATO’s COO and from previous GAO reports and work in progress that indicates how FAA is addressing some of the structural changes that panelists proposed to improve the ATO’s success over time: In addition to the business plan that the ATO is developing to guide and improve its operations and financial management, FAA has worked to develop three longer term planning documents. First, it has published its Flight Plan for 2005 through 2009, a multiyear strategic effort that sets a 5-year course for FAA in the areas of safety, capacity, international leadership, and organizational excellence. Second, it has developed a rolling 10-year effort, called the Operational Evolution Plan (OEP), through which FAA plans to increase the capacity of the NAS by one-third. Finally, FAA is participating in a multiagency effort, sponsored by the JPDO, to develop a national plan for aviation in 2025 and beyond. Both the OEP and the JPDO’s plan are designed to meet the Flight Plan’s commitment to help the NAS flow smoothly and meet future needs. According to FAA, the Vice President of the Operations Planning Service Unit in the ATO is also the Director of the JPDO, helping to ensure integration of near-term and long-term planning. According to the ATO’s COO, the restructuring of U.S. air traffic services that has taken place thus far, through the establishment of a performance-based air traffic organization, constitutes “the first building block” of the longer term effort to transform the aviation system envisioned in the JPDO’s 20-year plan. According to the COO, this vision of the U.S. aviation system will incorporate both technologies and processes.
However, he acknowledged that the ATO has not yet connected this long-term vision with the financial and other challenges it currently faces. He said that his goal is to establish an organization that can execute the long-term vision and manage not only its finances but also its future—an organization that can, in effect, ensure the viability of the long-term vision. Over time, he said, he plans to expand the OEP to include a strategy and the JPDO’s long-term vision, thereby, in his words, “tie the vision to the viability of the future.” The OEP will then be “not just a set of projects,” but a project plan with a vision and a strategy that goes out 20 years. But given the current budget constraints, he conceded, the path to that goal is not clear. In March 2004, FAA created the Air Traffic Safety Oversight Service (AOV), under FAA’s Office of Aviation Safety. This step established separate reporting relationships for the ATO, which is responsible for managing the ATC system, and for the AOV, which is responsible for ensuring the safety of changes to air traffic standards and procedures. The establishment of the AOV responds directly to a recommendation by the 1997 National Civil Aviation Review Commission that safety oversight of FAA’s air traffic function be provided by a separate part of the agency. Although both organizations remain within FAA, under the FAA Administrator, they are less closely joined than they were previously. Hence, this step is a positive move toward providing “arm’s-length” safety oversight, although it does not go as far as placing the two organizations in separate federal agencies or removing one of the organizations from the federal government altogether. At our request, the panelists concluded the panel with their parting thoughts on the day’s discussion, including any advice they had for FAA or for Congress.
Overall, the panelists were united in their desire for the ATO to succeed, but they generally agreed that its opportunities for success were constrained within a government system. For many, the steps taken thus far to create a performance-based organization were insufficient, in large part because the ATO lacked control over its revenues and funding priorities, and the ATO still had a long way to go to achieve its goals. Some panelists stressed the importance of progressing by small steps within the existing system, at least for the time being. Such small steps might include obtaining good performance and cost information, scoping programs in accordance with current budget projections, contracting out some air traffic services, and obtaining outside expertise from systems engineers and other technical and management experts. It was critical, one panelist said, for the ATO to “have some small early practical successes” to enlist the political support of the user community and help tie the customers to the ATO’s mission. Other panelists focused on the obstacles within the system that they believed would impede or prevent success. Among the obstacles they cited were the counterproductive incentives inherent in the budget process, the government’s refusal to allow new air traffic technologies to be used, and opposition to organizational and technological change. It was important, one panelist said, to overcome this opposition by describing “the difference between how things are and how they might be.” Descriptions of accomplishments elsewhere, together with actions to implement whatever safeguards and regulatory framework might be necessary, could perhaps make the argument for change “compelling,” he said.
Still other panelists looked to the future, calling for international technical benchmarks to promote efficient development; business models that take into account operational trends (e.g., the growing market share of regional jets and low-fare airlines); and incentives to help users overcome cost barriers to acquiring new technologies. As one panelist said, “we have to target the future mix of real operations that we’re really going to see, not build the world’s most perfect system from 1956.” Despite their reservations about the ATO’s potential for success as a government organization, the panelists generally agreed that stakeholders should not “allow the concept of privatization to be the enemy of moving forward with the ATO,” as one panelist said, or “sacrifice the good for the better,” in the words of another. Instead, taking a two-pronged approach—telling people “what’s to be done now to get results” and telling them “that they have an obligation to build for the future”—would be the best way, in the view of most panelists, for the ATO to meet its immediate and longer term challenges.

Clinton V. Oster, Jr. (Panel Moderator), Professor of Public and Environmental Affairs, School of Public and Environmental Affairs, Indiana University
Anthony J. Broderick, Independent Consultant; Former FAA Associate Administrator for Regulation and Certification
Steven R. Bussolari, Assistant Division Head, Tactical Systems Division, and Manager, Air Traffic Control System Group, Lincoln Laboratory
John W. Crichton, President and CEO, NAV Canada
George L. Donohue, Director, The Center for Air Transportation Systems Research, George Mason University; Former FAA Associate Administrator for Research and Acquisition
John J. Fearnsides, Professor of Public Policy, George Mason University; Chief Strategist and Partner, MJF Strategies
Xavier Fron, Head, Performance Review Commission, EUROCONTROL
Richard Golaszewski, Executive Vice President, Gellman Research Associates (GRA), Inc.
Ian Hall, Director of Operations, National Air Traffic Services, United Kingdom
Thomas Imrich, Chief Pilot, Research, Boeing Commercial Aircraft
Satish C. Mohleji, Principal Engineer, Center for Advanced Aviation System Development, The MITRE Corp.
Robert W. Poole, Jr., Director of Transportation Studies, Reason Foundation
Michael Powderly, President, Airspace Solutions
John A. Sorensen, Chief Executive Officer, Seagull Technology, Inc.

In addition to the individual named above, Elizabeth Eisenstadt, Brandon Haller, Bert Japikse, Maren McAvoy, Beverly Norwood, and Richard Scott made key contributions to this special product.

Air Traffic Control: FAA Needs to Ensure Better Coordination When Approving Air Traffic Control Systems. GAO-05-11. Washington, D.C.: November 17, 2004.
Air Traffic Control: FAA’s Acquisition Management Has Improved, but Policies and Oversight Need Strengthening to Help Ensure Results. GAO-05-23. Washington, D.C.: November 12, 2004.
Air Traffic Control: System Management Capabilities Improved, but More Can Be Done to Institutionalize Improvements. GAO-04-901. Washington, D.C.: August 20, 2004.
Information Technology: FAA Has Many Investment Management Capabilities in Place, but More Oversight of Operational Systems Is Needed. GAO-04-822. Washington, D.C.: August 20, 2004.
Federal Aviation Administration: Plan Still Needed to Meet Challenges to Effectively Managing Air Traffic Controller Workforce. GAO-04-887T. Washington, D.C.: June 15, 2004.
Air Traffic Control: FAA’s Modernization Efforts--Past, Present, and Future. GAO-04-227T. Washington, D.C.: October 30, 2003.
National Airspace System: Current Efforts and Proposed Changes to Improve Performance of FAA’s Air Traffic Control System. GAO-03-542. Washington, D.C.: May 30, 2003.
Human Capital Management: FAA’s Reform Effort Requires a More Strategic Approach. GAO-03-156. Washington, D.C.: February 3, 2003.
National Airspace System: Better Cost Data Could Improve FAA’s Management of the Standard Terminal Automation Replacement System. GAO-03-343. Washington, D.C.: January 31, 2003.
National Airspace System: Status of FAA’s Standard Terminal Automation Replacement System. GAO-02-1071. Washington, D.C.: September 17, 2002.
National Airspace System: FAA’s Approach to Its New Communications System Appears Prudent, but Challenges Remain. GAO-02-710. Washington, D.C.: July 15, 2002.
Air Traffic Control: FAA Needs to Better Prepare for Impending Wave of Controller Attrition. GAO-02-591. Washington, D.C.: June 14, 2002.
Air Traffic Control: Role of FAA’s Modernization Program in Reducing Delays and Congestion. GAO-01-725T. Washington, D.C.: May 10, 2001.
National Airspace System: Problems Plaguing the Wide Area Augmentation System and FAA’s Actions to Address Them. GAO/T-RCED-00-229. Washington, D.C.: June 29, 2000.
Aviation Acquisition: A Comprehensive Strategy Is Needed for Cultural Change at FAA. GAO/RCED-96-159. Washington, D.C.: August 22, 1996.
FAA Budget Policies and Practices. GAO-04-841R. Washington, D.C.: July 2, 2004.
National Airspace System: Reauthorizing FAA Provides Opportunities and Options to Address Challenges. GAO-03-473T. Washington, D.C.: February 12, 2003.
National Airspace System: Free Flight Tools Show Promise, but Implementation Challenges Remain. GAO-01-932. Washington, D.C.: August 31, 2001.
Federal Aviation Administration: Challenges for Transforming Into a High-Performing Organization. GAO-04-770T. Washington, D.C.: May 18, 2004.
In 1981, the Federal Aviation Administration (FAA) began a program to modernize the national airspace system and a primary component, the air traffic control (ATC) system. The ATC component of this program, which is designed to replace aging equipment and accommodate predicted growth in air traffic, has had difficulty for more than two decades in meeting cost, schedule, and performance targets. The performance-based Air Traffic Organization (ATO) was created in February 2004 to improve the management of the modernization effort. On October 7, 2004, GAO hosted a panel to discuss attempts to address the ATC modernization program's persistent problems. Participants discussed the factors that they believed have affected FAA's ability to acquire new ATC systems. Participants also identified steps that FAA's ATO could take in the short term to address these factors, as well as longer term steps that could be taken to improve the modernization program's chances of success and help the ATO achieve its mission. The participants included domestic and foreign aviation experts from industry, government, private think tanks, and academia. They are recognized for their expertise in aviation safety, economics, and engineering; transportation research and policy; and government and private-sector management.

What Participants Said:

Overall, the participants identified cultural, technical, and budgetary factors that, in their view, have affected the progress of ATC modernization. To address these factors, they proposed what one participant termed a "two-pronged" approach--simultaneously taking care of "the here and now" and building a "viable future" for the ATO.

Cultural and Technical Factors Have Impeded ATC Modernization:

According to participants, the key cultural factor impeding modernization has been resistance to change.
Such resistance is a characteristic of FAA personnel at all levels, participants said, and management, in the experience of some, is more resistant than employees, who may fear that new technologies will threaten their jobs. The key technical factor affecting modernization, participants said, has been a shortfall in the technical expertise needed to design, develop, or manage complex air traffic systems. Without the technical proficiency to "scrub" project proposals for potential problems early and to oversee the contractors who implement its modernization projects, they said, FAA has to rely on the contractors, whose interests differ from its own.

Budgetary Factors Have Constrained ATC Modernization:

The most immediate budgetary constraint, participants said, is the multibillion-dollar shortfall that FAA is projecting between available revenues and modernization needs over the next 4 years. Participants also identified features of the federal budget process as constraints, noting, for example, that the federal budget cycle is too long and inflexible to meet the needs of a dynamic ATC system that requires much more managerial freedom and short-term decision making. They further noted that the budget process is influenced by the political process, and that the funding for capital projects is sometimes spread out over so many years that technologies are out of date by the time they are deployed. Annual funding uncertainties discourage strategic and capital planning, they said, and the budget fails to show priorities and relationships among proposed investments.

Short-term and Longer Term Changes Could Promote Success:

Participants suggested that the ATO could facilitate cultural transformation by creating a vision and strategy that would unite stakeholders and by assembling project teams with different skills and interests whose members could forge common organizational interests by working together to solve common technology development problems.
To help offset technical inadequacies, the participants suggested that the ATO could consult an advisory board, identify and consider purchasing needed technologies that other countries have developed, and hire more skilled engineers to provide in-house expertise. To address budgetary constraints, participants suggested, among other short-term steps, reducing spending to match revenues and developing strategies for presenting FAA's budget request more clearly to Congress. Longer term suggestions included giving the ATO the predictable funding and decision-making authority it needs to carry out a "sensible" capital investment plan.
DOD defines its logistics mission, including supply chain management, as supporting the projection and sustainment of a ready, capable force through globally responsive, operationally precise, and cost-effective joint logistics support for America’s warfighters. Supply chain management is the operation of a continuous and comprehensive logistics process, from initial customer order for materiel or services to the ultimate satisfaction of the customer’s requirements. According to DOD, its goal is to have an effective and efficient supply chain, and the department’s current improvement efforts are aimed at improving supply chain processes, synchronizing the supply chain from end to end, and adopting challenging but achievable standards for each element of the supply chain. To this end, DOD has identified the following aspects of the supply chain for ongoing attention: materiel readiness, responsiveness, reliability, planning and precision, and costs. Integral to the supply chain’s responsiveness and reliability is DOD’s global distribution pipeline, which encompasses deploying units and their equipment, such as vehicles and materiel owned by the unit and brought from the home station; delivering sustainment items, which are supplies such as food, water, construction materiel, parts, and fuel that are requisitioned by units already deployed; and executing the retrograde of repairable items to support maintenance activities. DOD policy states that all organizations in the supply chain must recognize and emphasize the importance of time in accomplishing their respective functions and be structured to be responsive to customer requirements during peacetime and war. 
Joint doctrine identifies distribution as a critical element of joint operations that synchronizes all elements of the logistics system to deliver the “right things” to the “right place” at the “right time” to support the geographic combatant commander. Accordingly, DOD mapped out the distribution pipeline to coordinate and synchronize the fulfillment of joint force requirements from the point of origin to the point of need. To measure the timeliness of the logistics system from the point of origin to the point of need, DOD divided the distribution pipeline into four segments—source, supplier, transporter, and theater. DOD further subdivided these four segments into a total of 12 subsegments (see fig. 1). Each subsegment accounts for a specific step—and period—in processing an order, such as container consolidation-point processing and transportation to the point of debarkation. The total time expended by DOD’s distribution pipeline to fulfill an order, from the submission of the order to the receipt of the materiel ordered, is determined by combining the times of all of the subsegments. Within the theater segment of the pipeline, DOD conducts distribution from the points of need (e.g., supply support activities at a major aerial port or seaport of debarkation) to the points of employment. According to DOD, the distribution pipeline between the point of origin and the point of need is under the authority and oversight responsibility of TRANSCOM.
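The total-time computation described above, in which an order's fulfillment time is the sum of the times recorded for each subsegment of the pipeline, can be sketched as follows. The subsegment names and day counts are hypothetical placeholders, since the report does not enumerate all 12 subsegments here:

```python
# Hypothetical pipeline data: four segments, each with three subsegments
# (12 in all), mapping each subsegment to the days it consumed for one order.
pipeline = {
    "source":      [("order processing", 1.0), ("depot processing", 2.0), ("source hold", 0.5)],
    "supplier":    [("pick and pack", 1.5), ("consolidation-point processing", 2.0), ("supplier hold", 0.5)],
    "transporter": [("to port of embarkation", 2.0), ("strategic lift", 6.0), ("to point of debarkation", 1.0)],
    "theater":     [("port clearance", 1.0), ("theater distribution", 2.5), ("receipt processing", 0.5)],
}

# Time per segment, then total pipeline time: the sum over all subsegments.
segment_days = {seg: sum(days for _, days in subs) for seg, subs in pipeline.items()}
total_days = sum(segment_days.values())

print(segment_days)
print(f"order-to-receipt time: {total_days} days")  # 3.5 + 4.0 + 9.0 + 4.0 = 20.5 days
```

Summing per segment first, as above, mirrors how a system like LMARS can both report end-to-end timeliness and isolate which segment of the pipeline is driving delays.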
Furthermore, DOD has stated that, in line with internal guidance and Title 10 of the United States Code, TRANSCOM’s purview ends at the point of need, and the given geographic combatant commander in that theater is responsible for distribution between the point of need and the point of employment. DOD established these authorities and responsibilities because the point of employment is a physical location designated by the commander at the tactical level where force employment and commodity consumption occur or where unit formations come directly into contact with enemy forces. The nominal distance between the point of need and the point of employment is also known as the “last tactical mile.” Unit equipment and sustainment items may subsequently be transported between these two points using a combination of surface and air transportation modes. Many organizations within DOD have important roles and responsibilities regarding the global distribution pipeline, and these responsibilities are spread across multiple entities, each with its separate funding and management of logistics resources and systems. For example, the Under Secretary of Defense for Acquisition, Technology and Logistics serves as the principal staff assistant and advisor to the Secretary of Defense for all matters related to defense logistics, among other duties. The Assistant Secretary of Defense for Logistics and Materiel Readiness, under the authority, direction, and control of the Under Secretary of Defense for Acquisition, Technology and Logistics, serves as the principal logistics official within the senior management of the department. Within the Office of the Assistant Secretary for Logistics and Materiel Readiness, the DASD SCI improves the integration of DOD’s supply chain through policy development and oversees the adoption of metrics.
Subject to the authority, direction, and control of the Secretary of Defense, the Secretaries of the military departments are responsible for, among other things, organizing, training, and equipping their forces. Another important organization in supply chain management is DLA, which purchases and provides nearly all of the consumable items needed by the military, including a majority of the spare parts needed to maintain and ensure the readiness of weapon systems and other equipment. TRANSCOM is designated as the distribution process owner for DOD and is responsible for transporting equipment and supplies in support of military operations. The role of the distribution process owner is, among other things, to oversee the overall effectiveness, efficiency, and alignment of department-wide distribution activities, including force projection and sustainment operations. As DOD’s single manager for transportation (other than for transportation of service-unique or theater-assigned assets), TRANSCOM is responsible for providing common-user and commercial air, land, and sea transportation and terminal management. DLA maintains the Logistics Metric Analysis Reporting System (LMARS), a database and collection of reports that serve as the authoritative source of data on the performance of the logistics pipeline. The information that DLA collects and archives provides managers with the ability to track trends, identify areas requiring improvement, and compare actual performance against established goals. The information collected and archived in LMARS encompasses all orders, beginning with their submission as customer orders and ending with the receipt of the ordered materiel. DLA additionally maintains the Strategic Distribution Database, which combines supplier and transportation data for use by TRANSCOM.
Every month, DLA transmits the latest data to TRANSCOM, which then incorporates data from other information systems to calculate and analyze the distribution pipeline’s performance in fulfilling all orders in a timely manner. The Office of the Assistant Secretary of Defense for Logistics and Materiel Readiness receives scheduled reports on distribution performance from DLA and TRANSCOM throughout the year. The office has a contract with the Logistics Management Institute to maintain an internal repository of received data and to complete various analyses. The Office of the DASD SCI uses this information to update the DOD Performance Management Database quarterly. This information is part of performance budget tracking and is reported to the Office of Management and Budget, which then determines whether to report the information to Congress. To measure the performance of its global distribution pipeline, DOD has established three metrics—logistics response time, customer wait time, and time-definite delivery. However, these three metrics do not provide decision makers with a comprehensive view of performance across the entire global distribution pipeline, as they do not incorporate costs, cover all the military services, or extend to the “last tactical mile.” DOD Manual 4140.01, volume 10, DOD Supply Chain Materiel Management Procedures, and DOD Instruction 5158.06, Distribution Process Owner, define the three metrics and identify the DOD organizations responsible for monitoring them, as shown in table 1. Leading practices state that achieving results in government requires a comprehensive oversight framework that includes metrics for assessing progress, consistent with the framework established in GPRA.
Furthermore, DOD policy requires that all organizations in the supply chain recognize and emphasize the importance of time in accomplishing their respective functions. Accordingly, each of the three DOD metrics measures time expressed in days. All three performance metrics begin with the submission of a customer order and end with the receipt of the ordered materiel by the supply support activity that ordered it. For example, logistics response time measures the entire processing time of the customer order through each of the 12 subsegments in the distribution system, from the date the order is submitted to the date the customer posts the materiel received to the record of inventory at the supply support activity. Logistics response time is the broadest of the three metrics, and DOD has identified it as a key performance measure to monitor the effectiveness of the supply chain. In contrast, customer wait time measures the processing time for a subset of customer orders—specifically, customer orders from organizational maintenance units. If an organizational maintenance unit’s order cannot be fulfilled by the local retail supply system, the unit will then place a new request with the wholesale supply system. Similar to logistics response time, customer wait time measures the total elapsed time between the submission and the receipt of an order. Time-definite delivery measures the entire processing time of an order and determines whether the distribution system is capable of delivering an order to the customer within a given period. In general, we found that each of the three metrics is used to assess performance in terms of time, such as the maximum number of days to complete an order (customer wait time) or the likelihood that a delivery will be received within that number of days (time-definite delivery).
DOD does not measure logistics response time against a standard; according to DOD, decision makers examine the data on logistics response time to determine whether the average number of days it takes to process orders is increasing or decreasing. DOD has established customer wait time standards for the Air Force, Army, and Navy (see table 2); however, the Marine Corps has not established a service-wide customer wait time standard, as discussed later in the report. Standards for time-definite delivery vary according to the mode of transportation used to deliver the shipments and the geographic destination. For example, DOD has set as a time-definite delivery standard that 85 percent of all items ordered from the United States for delivery to Germany by military air transport should be delivered within 18 days. Similarly, DOD has set as a time-definite delivery standard that 85 percent of all items ordered from the United States for delivery to Japan by ocean transport should be delivered within 57 days. DOD’s three distribution performance metrics do not provide decision makers with a comprehensive view of performance across the entire global distribution pipeline. According to leading practices, relying on a set of performance measures that address multiple priorities, such as timeliness, quality, and cost, and that provide useful information for decision making can help leading organizations alert managers and other stakeholders to the existence of problems and respond when problems arise. However, because DOD’s three metrics do not incorporate costs, cover all the military services, or extend to the “last tactical mile,” they do not provide the department with a comprehensive view of the distribution system’s performance. DOD guidance establishing the customer wait time and time-definite delivery performance measures states that organizations in the supply chain must accomplish their respective functions in an efficient and cost-effective manner.
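As a rough illustration of how a time-definite delivery standard would be evaluated, the sketch below checks a set of transit times against the 85-percent-within-18-days standard cited above for military air shipments to Germany. The standard is taken from the report; the individual transit times are invented for illustration only.

```python
# Hypothetical transit times (in days) for ten orders shipped from the
# United States to Germany by military air transport.
transit_days = [12, 15, 17, 18, 20, 14, 16, 19, 11, 13]

# Standard cited in the report: 85 percent delivered within 18 days.
standard_days = 18
required_share = 0.85

# Share of orders delivered on or before the standard.
on_time = sum(1 for d in transit_days if d <= standard_days)
share = on_time / len(transit_days)
meets_standard = share >= required_share
print(round(share, 2), meets_standard)  # prints 0.8 False
```

In this invented sample, 8 of 10 orders arrive within 18 days, so the 80 percent on-time share falls short of the 85 percent standard.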
Furthermore, DOD guidance regarding supply chain materiel management states that corresponding policy should balance risk and total cost. However, DOD’s definitions of its three metrics and its guidance for using them to measure distribution performance do not address cost. Officials from the Office of the DASD SCI explained that although cost is not an element in these three metrics for assessing the performance of the distribution system—which are time-based—it is an element in other metrics, such as customer price change and logistics cost baseline, two metrics that are used to assess other aspects of the supply chain. They told us that they currently consider cost in evaluating the performance of the entire supply chain but not in evaluating distribution performance specifically. Office of the Secretary of Defense officials noted that the department continually attempts to balance cost with the importance of responding to critical orders in a timely fashion. For example, Office of the Secretary of Defense officials stated that the department’s economic movement quality model minimizes total logistics costs by identifying the trade-offs among inventory, transportation, and materiel handling. However, officials from the Office of the DASD SCI and TRANSCOM stated that DOD does not collect information about cost or consider cost when compiling, analyzing, and reporting the data generated by logistics response time, customer wait time, and time-definite delivery. Officials from the Office of the DASD SCI and TRANSCOM acknowledged that cost in the context of distribution could become more important, depending on the fiscal environment. Considering cost as a part of distribution performance is also important as DOD looks to effectively manage all of its distribution operations throughout the world, especially as current wartime efforts are drawing down.
As previously stated, DOD has demonstrated the ability to consider costs in evaluating aspects of the overall supply chain. Since some cost analysis is already available throughout DOD, distribution performance reporting may be able to incorporate cost analyses related to the three distribution performance metrics. For example, according to DOD officials, reviews of distribution performance for the preceding period in terms of time-definite delivery compliance occur on a regularly scheduled basis. Similarly, DOD publicly reports the performance of the services against their customer wait time standards on an annual basis. DOD could help ensure that cost is considered as part of its overall evaluation of distribution performance if it were able to identify and report the corresponding distribution costs for the preceding period when reviewing time-definite delivery compliance or when reporting customer wait time performance. As we found in April 2013, the federal government is facing serious long-term fiscal challenges, and DOD likely will encounter considerable budget pressures over the next decade. Further, under DOD’s Financial Management Regulation, cost information is essential to the department’s compliance with the Government Performance and Results Act (GPRA) of 1993, as cost accounting information, coupled with performance measures, is essential in evaluating and reporting on the efficiency and effectiveness of DOD missions and functions. As of December 2014, customer wait time standards have been established for the Army, the Navy, and the Air Force, but not for the Marine Corps. The DOD guidance establishing the customer wait time performance measure requires that the military departments (e.g., the Departments of the Air Force, the Army, and the Navy) use the customer wait time measurement to assess the performance of the supply chain, but it does not require that each of the military services establish a customer wait time standard to assess its distribution performance.
According to officials from the Office of the DASD SCI, their office and the military services agreed on customer wait time standards after coordinating with each other, but the Marine Corps has “not established a goal at this time.” DOD officials explained that the Marine Corps has not established a service-wide standard because it maintains a different logistics structure than the other services, owing to its expeditionary mission. According to DOD officials, in the course of military operations, Marine Corps units will deploy with their requisite supplies and then become “customers” of whatever service has its distribution system available. For example, according to these officials, when the Marine Corps is deployed and is the customer of another service, only the other service’s distribution operations can be measured. However, when the Marine Corps is not deployed, it uses its own distribution system to operate and sustain units inside and outside the United States. This system, however, does not have a service-wide customer wait time standard against which to measure distribution performance. Marine Corps officials explained that the service has not established a single customer wait time standard at the service level but that standards exist and are applied at the operational and tactical levels. According to Marine Corps Order 4400.16H, current DOD time-definite delivery standards serve as the basis for customer wait time standards at the operational and tactical levels. However, these operational and tactical standards apply only at the level of specific Marine Corps units, not service-wide, and they are not reported as a single customer wait time metric for overall distribution performance as is done for the other three services. Having a service-wide customer wait time standard for the Marine Corps that covers its distribution system would help ensure that DOD has complete visibility over distribution performance across the four services.
Moreover, unless DOD’s guidance is revised to help ensure that the three distribution performance metrics address multiple priorities and provide useful information for decision making on matters such as cost, and unless a service-wide customer wait time standard is established and used for the Marine Corps, it will be difficult for DOD to form a comprehensive view of the performance of its entire global distribution pipeline. In overseeing distribution performance, TRANSCOM and other DOD organizations have limited the reporting of the three time-based metrics up to the “point of need”—the location in the distribution system just prior to the “point of employment.” The nominal distance between the point of need and the point of employment is also known as the “last tactical mile.” As discussed earlier, according to DOD guidance, TRANSCOM and other responsible organizations are responsible for measuring the time between the submission of a customer order and receipt of the materiel by the supply support activity. In its role as the Distribution Process Owner, TRANSCOM interprets its authority and oversight responsibility to extend to the point of need but not to the point of employment. Overseeing distribution performance from the point of need to the point of employment is the responsibility of the given geographic combatant command in that theater. As discussed earlier, DOD established these authorities and responsibilities because the point of employment is a physical location designated by the commander at the tactical level where force employment and commodity consumption occur or where unit formations come directly into contact with enemy forces. However, DOD’s definitions of its three metrics and its guidance for using them to measure distribution performance are silent on whether to measure the time for delivery to the point of employment or the point of need.
Furthermore, officials from the Office of the DASD SCI, TRANSCOM, the Army, and the Marine Corps confirmed that the distribution performance data they report extend to the point of need and not to the point of employment, and therefore do not include the “last tactical mile.” According to combatant-command and military-service officials we spoke with, their oversight omits the last tactical mile because, in some instances, servicemembers responsible for ensuring that the receipt of the ordered materiel is completely and accurately documented may designate that task a lesser priority compared to fulfilling their combat missions. We acknowledge that servicemembers may and, in some cases, should place a higher priority on the unit’s mission, but taking action to ensure that information at this level is collected, to the extent practical, would help provide decision makers with more-accurate and comprehensive data on distribution performance across the entire distribution pipeline. In our October 2011 report, we found that DOD lacked visibility into the last tactical mile in Afghanistan. Specifically, we found that because neither the Distribution Process Owner guidance nor joint doctrine explains clearly how TRANSCOM is to exercise oversight of the entire distribution pipeline, TRANSCOM has focused primarily on overseeing effectiveness only for delivery to the point of need in Afghanistan, while performance up to the point of employment is the responsibility of U.S. Forces–Afghanistan and its subordinate units. However, DOD officials stated that U.S. Forces–Afghanistan did not report this performance assessment to TRANSCOM. Accordingly, we recommended that DOD revise the applicable guidance to clarify how TRANSCOM is to oversee the overall effectiveness, efficiency, and alignment of DOD-wide distribution activities, to include this last leg of distribution between the point of need and the point of employment.
DOD did not concur with the recommendation, stating that TRANSCOM’s authority and oversight responsibility, based on internal guidance and Title 10 of the United States Code, extend to the point of need but not all the way to the point of employment. We acknowledged the department’s response, but stated that DOD’s distribution joint publication, its directive establishing TRANSCOM as the Distribution Process Owner, and the Joint Logistics (Distribution) Joint Integrating Concept suggest that TRANSCOM does have a role in overseeing efficiency DOD-wide, and we found issues concerning the lack of synchronization throughout the global distribution pipeline, including the last tactical mile. Furthermore, in this same report, we noted that DOD and its components have many transportation information systems and processes to track the movement of supplies and equipment to Afghanistan at the tactical level. For example, U.S. Forces–Afghanistan and its subordinate units use many systems and processes, such as the Battle Command Sustainment Support Structure, to track cargo delivery between locations in Afghanistan. However, this type of distribution information is currently not being incorporated into the three distribution metrics DOD uses for measuring performance of the entire distribution pipeline, because the distribution metrics measure performance to the point of need. Incorporating available information at this level into DOD’s distribution metrics would allow decision makers to more accurately and comprehensively measure distribution performance across the entire distribution pipeline. DOD may not have sufficiently reliable data to accurately determine the extent to which it has met the standards it has established for distribution performance, because it has not conducted regular comprehensive assessments of its data collection and reporting processes.
Standards for Internal Control in the Federal Government state that control activities need to be established to monitor performance measures and indicators. These controls call for comparisons and assessments relating different sets of data to one another and state that a variety of control activities can be used in information processing, including edit checks of data. Moreover, internal control activities need to be clearly documented, and the documentation should be readily available for examination. Further, controls should be aimed at validating the propriety and integrity of performance measures. To assess the reliability of DOD’s distribution data, we administered questionnaires consisting of 24 questions regarding the timeliness, completeness, and accuracy of the data used by TRANSCOM and the military services to measure DOD’s performance against established time-definite delivery and customer wait time standards. One of the military services, in responding to our data-reliability questionnaire, indicated that it had not conducted a risk assessment of its data. Questionnaire responses to the same questions from the Navy indicated that it had conducted a risk assessment, but the Army did not answer whether it had conducted a risk assessment of its data. In our past work, we identified several issues that indicate DOD’s distribution data may not be sufficiently reliable for measuring performance against its standards. For example, in our 2011 report on materiel distribution in Afghanistan, we found problems with some deliveries into Afghanistan that had missing delivery dates, which limited the usefulness of DOD’s distribution metrics. Specifically, we found that 42 percent of unit surface shipments and 19 percent of sustainment surface shipments with required delivery dates in 2008 through 2010 did not have a documented delivery date in the database.
DOD concurred with our recommendation to develop an ongoing, systematic approach to identify the reasons why delivery dates for delivered surface shipments are not documented and implement corrective actions to improve the documentation of delivered surface shipments, and to develop an ongoing, systematic approach to investigate cases of undelivered surface shipments to determine their status and update the database with the most-current information. However, DOD did not provide any details as to how and when it would implement our recommendation (GAO-12-138), and based on the results of our current data-reliability questionnaires, it is not clear whether DOD has addressed these prior issues. For example, in questionnaire responses provided to us by the Army, officials stated that there are no controls separate from their data collection system to ensure accuracy and that errors sometimes occur, such as data indicating negative customer wait times (times of less than 0 days). The Navy and Air Force responded that they did have controls separate from their data-collection systems. In addition, officials we spoke with from TRANSCOM, the services, and several other DOD components told us of a number of potential inaccuracies in the data TRANSCOM uses to evaluate distribution performance. DOD officials said that in some cases units in combat zones delay entering records of new deliveries because personnel responsible for this task have other, higher-priority duties. Specifically, DOD officials stated that on forward operating bases the priority was to complete the mission rather than to complete paperwork as soon as a delivery is made. In these cases, the delivery data may be inaccurate because the recorded delivery date may be after the actual delivery was made. However, DOD officials said that delays in logging deliveries also occur in noncombat areas.
Sometimes the logging of deliveries is delayed because the personnel responsible for this task are not present at the time of the deliveries. For example, an employee who teleworks or takes leave on a Friday may not log a delivery made on that Friday until the following Monday. As a result, the recording of the delivery date is delayed by 3 days. Such a delay would have the effect of adding 3 days to the logistics response time and time-definite delivery times recorded for that delivery. Additionally, DOD officials stated that some DOD personnel responsible for logging deliveries wait until several deliveries have been received and log them all at once rather than as they arrive. For example, DOD officials stated that some may set aside a time every week to log deliveries for that week, so that deliveries from earlier in the week are logged later than they were actually received. Setting aside time every week is a reasonable approach; however, in doing so, it is important that the actual date of delivery be captured and collected to ensure the accuracy of the data used to assess the performance of the delivery system. Moreover, we identified several concerns with regard to the data used to measure customer wait time. For example, in 2007, the DOD Inspector General reported that DOD officials lacked uniform results for measuring customer wait times because of differences in how the services measured and reported data. As previously mentioned, in the questionnaire responses provided to us by the Army, Navy, and Air Force, each service lacked at least some of the documentation that would be needed to provide assurance that internal controls were met. Notably, none of the services indicated, as a part of assessing data reliability, that they had documentation to support that they had conducted tests or evaluations of the data systems used to collect and report customer wait time.
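The effect of delayed logging on the time-based metrics can be shown with a short sketch. The dates below are hypothetical, chosen so that a Friday delivery is logged the following Monday, as in the example above; they do not come from DOD data.

```python
from datetime import date

order_submitted = date(2014, 6, 2)   # hypothetical order date (a Monday)
actual_delivery = date(2014, 6, 20)  # delivery arrives on a Friday
logged_delivery = date(2014, 6, 23)  # logged the following Monday

# Days from order submission to delivery, as it actually happened
# versus as the database records it.
actual_response_days = (actual_delivery - order_submitted).days
recorded_response_days = (logged_delivery - order_submitted).days
inflation = recorded_response_days - actual_response_days
print(actual_response_days, recorded_response_days, inflation)  # prints 18 21 3
```

The 3-day inflation appears in every metric computed from the recorded date, which is why capturing the actual delivery date matters even when the logging itself is batched.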
Because DOD does not conduct and document regular comprehensive data-reliability assessments, the extent to which these or other data issues might affect the reliability of DOD distribution performance data is uncertain. Further, without data-reliability assessments, it will be difficult for DOD to fully identify and correct any data gaps by taking appropriate actions to ensure that data supporting its distribution performance metrics are sufficiently reliable. In questionnaire responses, TRANSCOM stated that it relies on the systems that feed data to TRANSCOM to have their own data-quality processes in place. TRANSCOM officials told us that one reason they do not assess the reliability of distribution data is that they have no authority to evaluate and address issues with respect to the military services’ systems and processes. DOD officials also acknowledged this lack of authority, but stated that the Office of the Secretary of Defense did have the necessary authority. However, the Office of the Secretary of Defense has not developed and enforced any policies to require data-reliability assessments to be conducted by DOD organizations involved in the collection and reporting of distribution performance data. Without a policy requiring regular comprehensive data-reliability assessments, DOD lacks reasonable assurance that organizations will conduct such assessments and that data will be sufficiently reliable to effectively measure DOD’s performance in distribution. DOD has taken some actions to address gaps in its distribution performance, including establishing a distribution performance management branch, conducting combatant command performance reviews, and holding various workshops and boards. However, DOD has not developed a comprehensive corrective action plan that identifies and addresses root causes for gaps within its distribution performance.
DOD has experienced a number of challenges in the area of distribution that have contributed to the department not being able to meet its performance standards. However, DOD has taken some actions to address these challenges. As previously mentioned, DOD’s supply chain management area—which includes distribution—has been on our high-risk list since 1990, in part because of issues with distribution performance. DOD has also reported in the past that it has consistently not met the department-wide standards it has established for itself. Reasons DOD cited for being unable to meet these standards include reception delays at supply warehouses and processing delays at aerial ports resulting from limited storage space for incoming cargo and limited personnel available to process the cargo. To address some of these gaps, DOD, specifically TRANSCOM and DLA, has developed and implemented targeted efforts that focus on improving specific areas of distribution. These include establishing a distribution performance management branch, conducting combatant command performance reviews, and holding various workshops and boards. To address gaps in distribution, TRANSCOM has established several efforts. In August 2010, TRANSCOM issued guidance for a Distribution Performance Management Branch within its Strategy, Policy, Programs, and Logistics Directorate.
The Distribution Performance Management Branch’s responsibilities include assessing global distribution performance and working with national partners to resolve problems; measuring and evaluating the effectiveness of distribution-process improvements; participating in the combatant command distribution conferences to assess distribution performance and collaborate to address and resolve problems; being the lead for negotiating distribution performance standards with the combatant commands; maintaining and monitoring performance reviews; providing analyses for TRANSCOM and DOD performance reviews; being the focal point for development of strategic metrics to be used by TRANSCOM, the Joint Staff, and components; and maintaining visibility of TRANSCOM Distribution Strategic Metrics. The Distribution Performance Management Branch is to perform the above responsibilities specifically for DOD’s time-definite delivery distribution metric. Since the collection and analysis of distribution data are focused primarily on this distribution metric, the identification of distribution gaps and associated solutions is also primarily supported by analysis of performance data related to the time-definite delivery distribution metric.

Distribution Performance Reviews and Workshops

TRANSCOM also conducts monthly and quarterly reviews—with officials from the combatant commands and other stakeholders—of the combatant commands’ performance against the time-definite delivery standards. TRANSCOM holds monthly meetings with U.S. Central Command and quarterly meetings with each of the other geographic combatant commands. TRANSCOM collects and assesses the distribution performance of each geographic combatant command area of operation by segment (i.e., source, supplier, transporter, and theater), type (military or commercial), and mode of transportation (i.e., air, land, or sea) against the established time-definite delivery standards.
According to TRANSCOM, this performance review aims to determine root causes for issues in performance, promote process improvement, explain variations within the system, and make any necessary changes to the business rules for distribution, rather than to provide a comprehensive assessment of all capability gaps, as discussed later in this report. In addition, TRANSCOM conducts time-definite delivery standards workshops with DOD distribution stakeholders to review past time-definite delivery performance and standards and to develop revised standards. These workshops are attended by officials from the Office of the Secretary of Defense, the military services, the combatant commands, DLA, and other stakeholders; TRANSCOM serves as the focal point. Based on process improvements that were identified at the time-definite delivery workshop held in June 2014, officials informed us that DOD recently approved four distribution performance process improvements. These process improvement areas are (1) analyzing extended theater performance, (2) understanding the continental United States group small package process, (3) aligning Marine Corps afloat units with Navy afloat time-definite delivery standards, and (4) analyzing extended direct vendor delivery performance. Although these performance reviews and workshops are intended to improve distribution performance, they are focused on time-definite delivery performance and standards. As a result, the outcomes of these efforts, such as decisions made regarding standards, identification of root causes, and process improvements, are primarily based on, and limited to, data and information collected related to the time-definite delivery metric.
Distribution Process Owner Strategic Opportunities Program

In its role as the Distribution Process Owner, TRANSCOM also continues to implement the Distribution Process Owner Strategic Opportunities program, which began in 2008 as an effort to identify opportunities to significantly improve the performance of distribution processes DOD-wide. This effort was intended to identify an actionable set of opportunities—approximately five—that would generate substantial cost avoidances and significant improvements in DOD’s supply chain. In 2008, a Distribution Process Owner Strategic Opportunities project team began a process for identifying potential opportunities to pursue. The team first developed criteria for defining a potential “strategic opportunity.” Some of these criteria included falling within the scope of authority granted to the Distribution Process Owner, being based on strategies and processes proven to generate results in leading supply chains and applicable in the DOD environment, having a plausible path to implementation, and being able to produce measurable improvements. The project team identified over 38 possible strategic opportunities and, by September 2008, had narrowed the list down to five actionable efforts. In March 2009, the Distribution Process Owner Executive Board approved the five Distribution Process Owner Strategic Opportunities for implementation. According to TRANSCOM officials in November 2014, these efforts had resulted in $1 billion in cost avoidances through April 2013. However, although TRANSCOM officials cite significant cost avoidances, these avoidances are based on improvements made to capabilities and authorities that TRANSCOM has as the Distribution Process Owner. In this role, TRANSCOM is focused on a portion of distribution, not the entire distribution pipeline. DOD has also established multiple boards and groups at various levels for addressing distribution issues.
The activities of these boards and groups include conducting discussions regarding distribution metrics and performance. The Distribution Steering Group is a working level group cochaired by TRANSCOM and DLA that comprises representatives from TRANSCOM, the Office of the Secretary of Defense, DLA, the military services, and the combatant commands. The group meets quarterly, or as deemed necessary by its membership, to discuss distribution topics and issues. The Distribution Oversight Council is an oversight body for distribution that meets at least twice a year, or as necessary, and is one level above the Distribution Steering Group. It comprises representatives from the same organizations as the Distribution Steering Group. The Distribution Process Owner Executive Board is a senior-level group chaired by the TRANSCOM Commander that is above the Distribution Oversight Council, with representatives from the same organizations as the two lower-level groups. Although these boards and groups meet annually, or as necessary, to discuss specific issues related to distribution, there is no focal point within DOD that oversees all three of DOD’s distribution metrics for the entire distribution pipeline. In our October 2011 report, we noted the importance of having a focal point in order to effectively provide oversight for distribution. We recommended that TRANSCOM, as DOD’s Distribution Process Owner, serve as that focal point to oversee the overall effectiveness, efficiency, and alignment of DOD-wide distribution activities. DOD did not agree with our recommendation and stated that the Distribution Process Owner’s authority and oversight responsibility extends to the point of need, not to the point of employment. 
However, we continue to maintain that language in DOD’s doctrine and policy documents suggests a role for TRANSCOM, as Distribution Process Owner or more broadly under its mission as a combatant command, to oversee activities within the DOD-wide global distribution pipeline, and we continue to believe that DOD should implement the recommendation. DOD also has established a senior-level governance body for logistics called the Joint Logistics Board. The Joint Logistics Board reviews the status of the logistics portfolio and the effectiveness of the defense-wide logistics chain in providing support to the warfighter. The Joint Logistics Board is cochaired by the Assistant Secretary of Defense for Logistics and Materiel Readiness and the Joint Staff Director of Logistics, and has senior-level participants from the military services, combatant commands, and DLA. In an effort to reduce transportation costs to improve distribution, DLA began, in fiscal year 2014, implementation of Phase 1 of its Distribution Effectiveness effort, formerly known as the Strategic Network Optimization project, in collaboration with the military services and TRANSCOM. The project’s purpose is to optimize the global distribution network supporting the warfighter. The Distribution Effectiveness effort has three phases: network, inventory, and infrastructure. According to DLA, implementation of Phase 2 is underway as of November 2014. The program’s current goal is to achieve a total savings of $402 million in fiscal years 2014 through 2019, to include savings in infrastructure, inventory, and transportation. Other goals include increasing the utilization of dedicated truck routes and maintaining/improving customer service levels. In July 2011, we recommended, among other things, that DOD develop and implement a corrective action plan to address challenges in materiel distribution.
Specifically, we stated that the corrective action plan should (1) identify the scope and root causes of capability gaps and other problems, effective solutions, and actions to be taken to implement the solutions; (2) include the characteristics of effective strategic planning, including a mission statement; goals and related strategies (for example, objectives and activities); performance measures and associated milestones, benchmarks, and targets for improvement; resources and investments required for implementation; key external factors that could affect the achievement of goals; and the involvement of all key stakeholders in a collaborative process to develop and implement the plan; and (3) document how the department will integrate these plans with its other decision-making processes; delineate organizational roles and responsibilities; and support department-wide priorities identified in higher-level strategic guidance. DOD disagreed with our recommendation and stated that the department is already engaged in major efforts to improve materiel distribution. In our July 2011 report, we responded that while DOD for many years has had improvement initiatives for certain challenges within these areas, these challenges continue to plague DOD. Thus, developing and implementing a corrective action plan is critical to resolving supply chain management problems with a systemic, integrated, and enterprisewide approach. Our criteria for removing the high-risk designation—for supply chain management and other programs—specifically call for corrective action plans that identify the root causes of problems, solutions to these problems, and steps to achieve these solutions. Moreover, an effective strategic planning process that results in a high-quality corrective action plan can provide clear direction for addressing DOD’s weaknesses in supply chain management.
DOD further commented that its involvement in major efforts to improve materiel distribution negates the need for a corrective action plan. DOD specifically referred to three efforts—(1) the Distribution Strategic Opportunities initiative, (2) the Strategic Network Optimization initiative, and (3) the Comprehensive Inventory Management Improvement Plan. DOD stated that each of these efforts has specific goals, milestones, and targets, and involves key stakeholders. However, the 2010 Logistics Strategic Plan, which was, at the time, the department’s most recent high-level strategy for addressing supply chain management issues, as well as other logistics issues, described the Distribution Strategic Opportunities initiative as an effort “to improve distribution across the enterprise” and included it among several other initiatives the department has to improve supply chain processes. The Logistics Strategic Plan provided no other explanation of this initiative; provided no goals, milestones, or targets associated with the initiative; and did not show how this initiative would enable the department to achieve high-level outcomes such as operating supply chains more effectively and efficiently. The plan, moreover, made no specific mention of the second effort—the Strategic Network Optimization initiative—although information provided separately by the department indicated it was a subinitiative under the Distribution Strategic Opportunities initiative. We have previously concluded that without a strategic planning process that examines root problems and capability gaps and results in a corrective action plan, it was unclear whether these initiatives alone would be sufficient for addressing all major challenges in materiel distribution. We further stated that DOD had demonstrated an ability to carry out a collaborative strategic planning process resulting in the issuance of its Comprehensive Inventory Management Improvement Plan.
That plan identified corrective actions that could, when implemented, effectively address the requirements-forecasting focus area and other aspects of inventory management. We stated that following a similar collaborative approach that results in a corrective action plan for materiel distribution would result in significant progress in addressing remaining challenges in the supply chain management high-risk area. Although DOD has taken several actions to address its distribution challenges and improve distribution processes, these efforts to improve distribution are focused on a specific portion or segment of the process and are not based on an assessment of the entire distribution pipeline. Many of these efforts, such as the Distribution Process Owner Strategic Opportunities program and the Distribution Effectiveness effort, began in response to various issues or opportunities for improvement in distribution where solutions were developed without a strategy or plan for the distribution pipeline as a whole. Individual efforts to address identified gaps in distribution may lead to additional costs and other unanticipated results that may also affect DOD’s ability to effectively manage its distribution operations. Implementing our previous recommendation that DOD develop a comprehensive corrective action plan for distribution would help to identify and address root causes of distribution challenges and better position DOD to address distribution performance. DOD continues to make improvements in the area of distribution. The department has established metrics and standards, gathered data to measure its performance, and developed efforts to make improvements and address gaps in distribution. However, without revised guidance to help ensure the three distribution performance metrics address multiple priorities and provide useful information for decision making on matters such as cost, and without establishing and using a customer wait time standard for the U.S. 
Marine Corps, it will be difficult for DOD to form a complete picture of the performance of its entire global distribution pipeline. Further, without incorporating available distribution information at the last tactical mile into the distribution metrics, DOD may not have all the information it needs to effectively manage distribution. Moreover, without assurance that the data being gathered are reliable, DOD is not fully aware of how its distribution pipeline is performing against established standards. Until these issues are addressed, DOD is likely to continue to face challenges in effectively and efficiently managing its distribution pipeline. To help improve the management of DOD’s distribution performance, we recommend that the Secretary of Defense take the following four actions. To address the limitations of existing distribution performance metrics, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics, in conjunction with TRANSCOM, to revise guidance to ensure that the three distribution performance metrics incorporate cost; and a customer wait time standard is established and used for the Marine Corps. To address the limitations of existing distribution performance metrics and to begin gaining visibility over the last tactical mile, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics and TRANSCOM, in collaboration with the geographic combatant commands, to incorporate available distribution performance information at the last tactical mile level into the three key distribution metrics of logistics response time, time-definite delivery, and customer wait time. 
To ensure the reliability of DOD’s distribution performance data, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to develop and enforce policies to require data-reliability assessments to be conducted by DOD organizations involved in the collection and reporting of distribution performance data, such as TRANSCOM and the military services, to evaluate and address any gaps in its distribution performance data. We provided a draft of this report to DOD for review and comment. In its written comments, which are summarized below and reprinted in appendix II, DOD concurred with two of the four recommendations, partially concurred with one recommendation, and did not concur with one recommendation. DOD also provided technical comments, which we incorporated as appropriate. DOD partially concurred with the recommendation to revise guidance to ensure that the three distribution metrics incorporate cost. Specifically, DOD agreed that two of the three distribution performance metrics— logistics response time and customer wait time—should incorporate cost. DOD stated that the Assistant Secretary of Defense for Logistics and Materiel Readiness is identifying and capturing defense transportation data sources, supporting cost and performance metrics. DOD also stated that TRANSCOM fully supports these efforts, especially as cost might pertain to or be influenced by logistics response time and customer wait time. However, DOD did not agree that there would be value in any parallel effort to incorporate cost into the third distribution performance metric—time-definite delivery—because it maintains that this metric provides the standards to measure whether logistics response time performance is meeting expectations. DOD stated that it will instead use cost as a function of logistics response time to inform future assessments of and goals for time-definite delivery. 
According to DOD, this would better synchronize efforts to facilitate consistency in metrics reporting. Moreover, DOD stated that TRANSCOM has published policy and guidance reflecting the strategic requirement to understand cost and that current data and systems are often not conducive to cost analysis down to the level of individual shipments. DOD stated TRANSCOM is currently pursuing a major initiative to restructure and consolidate data systems to include a Common Record Movement which will, regardless of the mode of transportation, include cost estimates for each cargo movement. The effort also includes the development of an automated tool leveraging existing data systems that, once completed, should enable a better understanding of cost. We acknowledge that DOD’s readiness to incorporate cost into logistics response time and customer wait time will help address limitations in the measurement of distribution performance. However, we believe that incorporating cost into the time-definite delivery metric would be of value because the time-definite delivery metric is a distinct measure that is managed and reported separately from logistics response time. Specifically, as discussed in the report, logistics response time is monitored by the DASD SCI and time-definite delivery is monitored by TRANSCOM. Furthermore, according to the draft Supply Chain Metrics Guide used to evaluate DLA's Distribution Effectiveness initiative, the two metrics have different definitions, business values, goals, and computations. Since these two measures are separate, cost considerations should be included in both time-definite delivery and logistics response time. Until DOD’s guidance is revised to help ensure each of the three distribution performance metrics provide useful information for decision making on cost, it will be difficult for DOD to effectively manage and improve the performance of its entire global distribution pipeline. 
DOD concurred with the recommendation to revise guidance to ensure that a customer wait time standard is established and used for the Marine Corps. DOD stated that the Marine Corps has a service-wide customer wait time standard and, according to DOD, the average executed customer wait time is 15 days, based on the priority of the maintenance unit's request. DOD stated that this standard is published in Marine Corps Order 4400.16H, Uniform Materiel Movement and Issue Priority System. As of February 2015, the order does not state a set standard but estimates 15 days as the amount of time for delivery within the continental United States of an item that a unit requires for immediate use and without which the unit could not perform its mission. DOD stated that the Marine Corps will change the order within 180 days to more accurately reflect the definition and standard contained in DOD policy. We believe that this action, if fully implemented, would address the recommendation. DOD did not concur with the recommendation to incorporate available distribution performance information at the last tactical mile level into the three key distribution metrics. DOD cited its previous response to a similar recommendation in the October 2011 report, GAO-12-138, Warfighter Support: DOD Has Made Progress, but Supply and Distribution Challenges Remain in Afghanistan, stating that the Distribution Process Owner's (e.g., TRANSCOM’s) authority and oversight extend to the point of need, not the point of employment. DOD also stated that this distinction is made in DOD guidance, doctrine, and policy, and that the responsibility for the last tactical mile resides with the geographic combatant commander in the operational area. 
We acknowledge DOD’s position on the matter, but we continue to believe that this interpretation of the roles and responsibilities of the Distribution Process Owner results in fragmentation, because no single DOD entity has visibility into the performance of the global distribution pipeline as a whole. As we noted in the report, DOD and its components have many transportation information systems and processes to track the movement of supplies and equipment to Afghanistan at the tactical level. However, this type of distribution information is currently not being incorporated into the three distribution metrics used by DOD for measuring performance of the entire distribution pipeline, because the distribution metrics measure performance only to the point of need. The point of need, however, is not always the final destination, and materiel may require transportation beyond the point of need to customers in more remote locations. We continue to believe that incorporating available information at this level into DOD’s distribution metrics would allow DOD to more accurately and comprehensively measure distribution performance across the entire distribution pipeline. DOD concurred with the recommendation to develop and enforce policies to require that data reliability assessments be conducted by DOD organizations involved in the collection and reporting of distribution performance data. To further improve distribution performance, DOD stated that it will develop a comprehensive, integrated approach to address systematic issues across the distribution network. DOD stated that this approach will include an assessment of distribution performance metrics data along with associated policy and guidance. We believe that these actions, if fully implemented, would address the recommendation.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Acquisition, Technology and Logistics, the Secretary of the Air Force, the Commandant of the Marine Corps, and the TRANSCOM Commander. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To determine the extent to which the Department of Defense (DOD) has established metrics to measure its distribution performance, we reviewed DOD guidance identifying distribution policies and priorities, such as DOD Instruction 4140.01, DOD Supply Chain Materiel Management Policy, and DOD Instruction 5158.06, Distribution Process Owner. We additionally reviewed the Government Performance and Results Act (GPRA) as amended by the GPRA Modernization Act of 2010 and our prior work that identifies elements that constitute a comprehensive oversight framework. We identified the definition and scope of DOD’s distribution performance measures and compared them to leading practices for achieving results in government and the successful attributes of performance measures. We also interviewed officials from the Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration (DASD SCI), U.S. Transportation Command (TRANSCOM), the Defense Logistics Agency (DLA), and each of the four military services to determine how they measure distribution performance and what data they collect and report.
To determine the extent to which DOD is able to accurately measure its performance against its distribution standards, we obtained documentation on DOD data systems, such as TRANSCOM’s Strategic Distribution Database. We also sent data-reliability questionnaires to the military services and TRANSCOM. The standard set of questions we circulated asked detailed and technical questions about the relevant systems, such as the corresponding system architecture, the scope of user access, data-quality controls and limitations, and the respondents’ perceptions of data quality and limitations. We reviewed TRANSCOM’s 2012 annual report and spoke with agency officials from the Office of the DASD SCI, the services, TRANSCOM, and DLA to better understand these data. We compared the responses to standards for internal control within the federal government. We also reviewed prior GAO reports related to distribution performance. To determine the extent to which DOD has taken actions to identify causes and develop solutions for any gaps in distribution, we reviewed documents provided by TRANSCOM, including from TRANSCOM’s Distribution Performance Management Branch within its Strategy, Policy, Programs, and Logistics Directorate. Documents we reviewed to assess DOD distribution improvement efforts include TRANSCOM’s 2012 Annual Report and DOD’s Comprehensive Inventory Management Improvement Plan. We also observed TRANSCOM’s 2014 time-definite delivery standards workshop where TRANSCOM reviewed distribution performance and standards by working with officials from the Office of the Secretary of Defense, the military services, combatant commands, DLA, and other stakeholders. We spoke with officials from DLA, TRANSCOM, and the Office of the DASD SCI, Army, Navy, Air Force, and Marine Corps to discuss DLA’s Distribution Effectiveness effort. 
We met with officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, the Office of the DASD SCI, the Joint Staff J-4 Logistics Directorate, U.S. Central Command, and each of the four military services to discuss DOD’s planning, policy, and the degree to which DOD has taken actions to identify causes and develop solutions for any gaps in distribution performance. We conducted this performance audit from November 2013 to February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Kimberly Seay (Assistant Director), Mitchell Karpman, Joanne Landesman, Ricardo A. Marquez, Christopher Miller, Mike Silver, Yong Song, Amie Steele, and Sabrina Streagle made key contributions to this report.

High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.

High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.

Defense Logistics: DOD Has Taken Actions to Improve Some Segments of the Materiel Distribution System. GAO-12-883R. Washington, D.C.: August 3, 2012.

Warfighter Support: DOD Has Made Progress, but Supply and Distribution Challenges Remain in Afghanistan. GAO-12-138. Washington, D.C.: October 7, 2011.

DOD’s High-Risk Areas: Observations on DOD’s Progress and Challenges in Strategic Planning for Supply Chain Management. GAO-10-929T. Washington, D.C.: July 27, 2010.

Warfighter Support: Preliminary Observations on DOD’s Progress and Challenges in Distributing Supplies and Equipment to Afghanistan. GAO-10-842T. Washington, D.C.: June 25, 2010.
Defense Logistics: Lack of Key Information May Impede DOD’s Ability to Improve Supply Chain Management. GAO-09-150. Washington, D.C.: January 12, 2009.
DOD operates a complex, multibillion-dollar distribution system for delivering supplies and equipment to U.S. forces globally. DOD's goal in operating this global distribution pipeline is to deliver the right item to the right place at the right time, at the right cost. GAO has reported on weaknesses in DOD's distribution performance and has identified management of DOD's entire supply chain as a high-risk area. This review assesses the extent to which DOD (1) has established metrics for its distribution performance, (2) is able to accurately measure its performance against distribution standards, and (3) has taken actions to identify causes and develop solutions for any gaps in distribution. GAO analyzed DOD's distribution metrics, DOD's responses to data-reliability questionnaires, and corrective actions, and interviewed DOD officials. To measure the performance of its global distribution pipeline, the Department of Defense (DOD) has established three metrics: (1) logistics response time—number of days between the time a customer submits an order and receives it, (2) customer wait time—number of days between the time a maintenance unit, a subset of customers, submits an order and receives it, and (3) time-definite delivery—a measure of the probability (e.g., 85 percent) that a customer will receive an order within an established logistics response time. However, these metrics do not provide decision makers with a complete representation of performance across the entire global distribution pipeline. DOD's definitions of its metrics and guidance for using them do not address cost, although DOD officials stated that cost is included in metrics used to assess other aspects of the supply chain, and the Marine Corps has not established a customer wait time metric. Further, although joint doctrine has set efficient and effective distribution “from the factory to the foxhole” as a priority, these metrics do not always include performance for the final destination.
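The metric definitions above can be sketched in a few lines of code. The order records, field names, and 15-day standard below are illustrative assumptions, not DOD data or actual standards:

```python
from datetime import date

# Hypothetical order records; field names are illustrative, not DOD system fields.
orders = [
    {"submitted": date(2014, 3, 1), "received": date(2014, 3, 9)},
    {"submitted": date(2014, 3, 2), "received": date(2014, 3, 14)},
    {"submitted": date(2014, 3, 5), "received": date(2014, 3, 15)},
    {"submitted": date(2014, 3, 6), "received": date(2014, 3, 30)},
]

# Logistics response time: days from order submission to receipt, per order.
response_times = [(o["received"] - o["submitted"]).days for o in orders]

# Time-definite delivery: share of orders delivered within an established
# standard (here, an assumed 15-day standard), compared against a target
# probability such as 85 percent.
STANDARD_DAYS = 15
tdd = sum(t <= STANDARD_DAYS for t in response_times) / len(response_times)

print(response_times)  # [8, 12, 10, 24]
print(f"{tdd:.0%}")    # 75%
```

Customer wait time would be computed the same way as logistics response time, restricted to the subset of orders submitted by maintenance units.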
Unless DOD's guidance is revised to ensure the three distribution performance metrics include cost information for decision making and the Marine Corps establishes a customer wait time metric, and DOD incorporates metric performance to the final destination, it will be difficult for DOD to achieve a comprehensive view of the performance of its entire global distribution pipeline. DOD may not have sufficiently reliable data to accurately determine the extent to which it has met the standards it has established for distribution performance, because it has not developed policy for requiring regular comprehensive assessments to be conducted of its distribution data-collection and reporting processes. Several DOD organizations indicated that they had not conducted this type of review that would be consistent with standards for internal control in the federal government. Specifically, the Air Force indicated that it had not conducted a risk assessment of its data, a part of assessing data reliability. Officials GAO spoke with from U.S. Transportation Command (TRANSCOM), the services, and other DOD components described a number of potential inaccuracies, such as delivery dates recorded after deliveries were actually made, in the data TRANSCOM uses to evaluate distribution performance. Without a policy requiring regular comprehensive data-reliability assessments, DOD lacks reasonable assurance that organizations will conduct such assessments and that data will be sufficiently reliable to effectively measure DOD's performance in distribution. 
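Some of the inaccuracies described above are internally inconsistent records that an automated data-reliability screen could flag (errors such as late-recorded delivery dates would instead require comparison against external records). The checks, thresholds, and field names below are illustrative assumptions, not actual TRANSCOM or service edit checks:

```python
from datetime import date

# Illustrative reliability checks on hypothetical shipment records.
def flag_suspect(record):
    issues = []
    if record.get("received") is None or record.get("submitted") is None:
        issues.append("missing date")
    elif record["received"] < record["submitted"]:
        issues.append("received before submitted")
    elif (record["received"] - record["submitted"]).days > 365:
        issues.append("implausibly long response time")
    return issues

records = [
    {"submitted": date(2014, 1, 10), "received": date(2014, 1, 20)},  # clean
    {"submitted": date(2014, 1, 12), "received": date(2014, 1, 5)},   # out of order
    {"submitted": date(2014, 1, 15), "received": None},               # missing
]

suspect = [(i, flag_suspect(r)) for i, r in enumerate(records) if flag_suspect(r)]
print(suspect)  # [(1, ['received before submitted']), (2, ['missing date'])]
```

A periodic assessment along these lines, applied to the data feeding the distribution metrics, is one way such a policy requirement could be operationalized.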
Although DOD has taken several actions to address gaps in its distribution performance, including conducting performance reviews, and holding workshops to assess problems and develop solutions, these efforts focus on specific areas of distribution, and DOD has not developed a comprehensive corrective action plan for the entire distribution pipeline that identifies the scope and root causes of capability gaps and other problems, solutions, and actions to be taken. In July 2011, GAO recommended DOD develop such a corrective action plan. DOD did not concur, citing several ongoing efforts. However, these efforts do not address gaps across all distribution operations. Thus, implementing GAO's prior recommendation would help identify root causes of and solutions to distribution challenges and better position DOD to address distribution performance. GAO recommends that DOD (1) revise guidance to ensure its metrics incorporate cost, (2) revise guidance to ensure the Marine Corps establishes a customer wait time metric, (3) incorporate performance information from the final destination, and (4) develop policy requiring data-reliability assessments. DOD concurred with the second and fourth recommendations and partially concurred with the first, stating that there would be no value in affixing cost to time-definite delivery. DOD did not concur with the third recommendation, stating that data to the final destination should not be incorporated into DOD's performance metrics. GAO continues to believe the recommendations are valid, as discussed in the report.
In crafting the Results Act, Congress drew on the experiences of foreign governments and state and local governments in the United States and recognized that the results-oriented goal setting and performance measurement requirements of the Results Act would constitute a new way of doing business for many agencies. Congress also realized that the effective implementation of the Results Act may take several years. To advance this effort, the Results Act provided for a series of pilot projects so that agencies could gain experience and share lessons learned in implementing the key provisions of the Results Act before its governmentwide implementation. One set of these pilot projects covered the Act’s annual performance planning and reporting provisions. Over 70 federal organizations participated in this pilot phase, which covered fiscal year 1994 through fiscal year 1996. To further help agencies, several Members of Congress asked us to develop—on the basis of the experiences of leading foreign, state, and federal organizations—a guide for agency managers to use to effectively implement the Act. We observed in our June 1997 report on the implementation of the Results Act and related performance-based management initiatives that despite the rich body of experience the pilots provided, initial governmentwide implementation of the Results Act would be highly uneven. We identified a series of daunting implementation challenges and predicted that the initial set of agency strategic and annual performance plans would not be of consistently high quality or as useful for congressional and executive branch decisionmakers as they could be. At the request of several members of the congressional leadership, in May 1997 we issued a guide for congressional staff to use in assessing agencies’ strategic plans. We subsequently reviewed draft and September 30, 1997, strategic plans that agencies submitted to Congress. 
In our January 1998 summary report on our reviews of the September plans, we highlighted three difficult planning challenges that especially needed continued progress: setting a strategic direction, including establishing clear, results-oriented goals and performance measures; coordinating crosscutting programs; and ensuring the capacity to gather and use performance information. We suggested that agencies’ annual performance plans could help address these challenges. As a next step in our efforts to assist Congress and agencies in effectively implementing the Results Act, we issued two related guides—one for congressional decisionmakers and one for evaluators and others interested in more detailed assessments—on assessing annual performance plans. These guides, developed with the assistance of congressional staff, senior officials in agencies, members of the CFO Council, and others, integrated criteria from the Results Act, its legislative history, OMB’s guidance for developing the plans (OMB Circular A-11, part 2), and our work on implementation of the Results Act. The guides organize the Results Act’s criteria under three core questions that are aimed at ensuring that performance plans are useful for decisionmaking. The three core questions are: (1) To what extent does the agency’s performance plan provide a clear picture of intended performance across the agency? (2) How well does the agency’s performance plan discuss the strategies and resources the agency will use to achieve its performance goals? (3) To what extent does the agency’s performance plan provide confidence that its performance information will be credible? We noted that as agencies and Congress gain experience in developing and using annual performance plans, additional issues and questions will emerge. 
We therefore have committed to issuing a combined, updated version of our congressional and evaluators’ performance plan guides reflecting those experiences and providing examples drawn from the agencies’ plans illustrating useful presentations. An exposure draft of that guide will be issued this fall. At the most basic level, an annual performance plan is to provide a clear picture of intended performance across the agency. Such information is important to Congress, agency managers, and others for understanding what the agency is trying to achieve, identifying subsequent opportunities for improvement, and assigning accountability. We found that the plans did not consistently provide the succinct and concrete statements of intended performance that are needed to help guide decisions and subsequently assess actual performance. The plans generally were successful in showing how an agency’s mission and strategic goals were related to its performance goals. This is a very positive development because it provides a basis for using the performance plans to track progress toward the achievement of agencies’ long-term strategic goals. The plans were much less successful in providing assurance that crosscutting program efforts were sufficiently coordinated. Agencies appear to be taking the first step of identifying crosscutting efforts, with some including helpful listings of other agencies with which they share common goals. However, few plans provided any descriptions of how the agency will coordinate with other agencies regarding national issues for which they share responsibility or reflected other substantive coordination. Almost all of the annual performance plans that we reviewed contained at least some objective, quantifiable, and measurable annual performance goals—a key expectation of Congress in enacting the Results Act. 
Overall, however, the annual performance goals and accompanying measures in the plans would need significant development to improve the usefulness of the plans to congressional and other decisionmakers. Specifically, we found that the goals in the annual performance plans often were not as results-oriented as they could be; the relationship between goals and performance measures at times was either neglected or obscured; and the plans did not consistently set goals to address major management problems, as suggested in OMB guidance. On the other hand, some of the plans provided very helpful baseline and trend data for performance goals. Such information allows users of plans to judge whether performance targets are appropriate and reasonable based on past performance. Goals in the performance plans that we reviewed typically focused on program outputs, such as the number of products and services delivered by the agency. The Results Act allows agencies to include output goals in their plans, and such goals can provide important information for agency managers to use in managing programs. However, the Act envisions that agencies’ plans would contain goals that focus on the results that programs are intended to achieve, which is particularly important for policymakers. We found that the annual performance plans did not consistently contain such results-oriented goals. For example, the Social Security Administration (SSA), which was responsible for expenditures of about $400 billion in 1997—constituting one-fourth of the federal budget—did not consistently have results-oriented performance goals. For its high-risk Supplemental Security Income program, SSA’s plan included output goals on the number of claims processed and the number of nondisability redeterminations, but it did not include results-oriented goals. 
Likewise, for SSA’s Old Age and Survivors Insurance (OASI) and Disability Insurance (DI) programs, the performance plan had output goals related to the number of beneficiaries served, but it did not contain results-oriented goals related to the services these beneficiaries receive. However, the governmentwide performance plan’s chapter on Social Security included a results-oriented discussion of the effect of Social Security on reducing poverty among the elderly in addition to addressing the number of beneficiaries served by the OASI and DI programs. Having output-oriented goals will provide decisionmakers with important information but will not directly offer a perspective on the degree to which the program is accomplishing the results it is intended to achieve. In crafting the Results Act, Congress recognized that for some types of federal programs it may not be feasible for an agency to express its performance goals in an objective, quantifiable, and measurable form. The Results Act therefore allows an agency to propose, and OMB to authorize, that a goal be expressed in an alternative form, such as by describing a minimally effective program and a successful program. Although few agencies used the alternative form of measurement for fiscal year 1999, the experiences of the National Science Foundation (NSF) suggested how alternative forms of measurement could be employed. The agency’s performance plan used such alternative descriptions to establish annual performance goals for its scientific research and educational activities. For example, NSF’s plan described annual success in addressing the agency’s strategic goal of promoting scientific discovery as occurring when the agency’s awards lead to important discoveries and new knowledge within and across traditional disciplinary boundaries. NSF’s plan described corresponding minimal effectiveness as occurring when there is a steady stream of outputs of good scientific quality.
By establishing definitions for successful and minimally effective levels of performance, NSF’s descriptions allowed the agency’s performance to be assessed, both by congressional and executive branch decisionmakers and by expert reviewers that NSF plans to use. NSF could build on its approach to measurement by better explaining what it means by such phrases as “important discoveries” and “steady stream of outputs of good scientific quality.” One way to do this would be to provide examples of past discoveries that illustrate each of the descriptive statements. In contrast to many agencies, the Department of Health and Human Services’ (HHS) Centers for Disease Control and Prevention (CDC) had numerous concrete measurable results-oriented performance goals. For example, CDC had a results-oriented goal and measure to reduce the incidence of congenital syphilis in the general population from the 1995 rate of 39 per 100,000 live births to less than 30 in fiscal year 1999. CDC’s program outputs related to its goal, such as targeted prenatal screenings for congenital syphilis, were presented as the strategies CDC will use to achieve its end result rather than as the programmatic end in itself. Such a presentation suggests a clear understanding of the relationships and differences between the activities an agency undertakes and the results it hopes to achieve. In addition, when CDC used output-oriented performance measures in some cases, it explained why it used such measures rather than more results-oriented measures. The section on chronic disease prevention, for example, noted that health outcome measures for these programs have been difficult to define for a number of reasons, including the long latency period of chronic diseases like cancer and heart disease. 
One of the major challenges that agencies face in moving from a focus on the activities they undertake to results they are trying to achieve is to develop performance measures that clearly and sufficiently relate to the performance they are meant to assess. At CDC, for example, the performance goal to reduce the incidence of congenital syphilis was clearly and sufficiently represented by the performance measure of reducing the occurrence of congenital syphilis from 39 per 100,000 live births to less than 30 per 100,000 live births. Far more typical were situations in which the relationship between what was being measured and the desired results was not sufficiently clear. For example, three of the Department of Labor’s (Labor) performance goals used the number of complaints received as measures of compliance with worker protection and civil rights laws. In one case, a decrease in the number of discrimination complaints filed by federal grant recipients and persons with disabilities in state and local governments was used to indicate progress towards the goal of ensuring that workplaces are fair for these groups. Used alone, such a measure is a questionable indicator of fairness in the workplace. A decrease in the number of complaints could instead be a function of lack of information, fear, a complainant’s lack of confidence in Labor’s enforcement, or a tendency of agency managers to discourage the filing of otherwise meritorious complaints. An expanded or improved enforcement program could produce an increase in complaints as workers gain confidence in the enforcement agency. Without other independent measures that also demonstrate the existence of a fair workplace, measuring the decrease in complaints may be insufficient. Also, programs often must achieve multiple goals or goals with several dimensions that reflect such competing demands or priorities as quality, timeliness, program cost, and outcomes.
Annual performance plans that do not contain balanced sets of measures may not sufficiently assess all aspects of a goal or multiple goals for the agencies’ programs. One of the key priorities missed in many plans was cost. For example, the Office of Personnel Management’s (OPM) plan did not appear to have cost-based performance measures to show how efficiently it performs certain businesslike operations, such as the administration of health and retirement programs. On the other hand, the Department of Veterans Affairs’ (VA) plan provided both financial and nonfinancial measures for some of its program areas. For example, VA’s performance plan contained measures that addressed various program priorities, such as the accuracy and timeliness of claims processing, unit costs of providing benefits and services, and customer satisfaction with VA services. In addition, agencies’ performance measures did not always have a clearly apparent or commonly accepted relationship to the performance goals. At the Department of the Interior’s (Interior) National Park Service (NPS), some performance measures did not provide clear definitions of the criteria that would be used to accurately assess the performance. For example, one of NPS’ performance goals was to ensure that 50 percent of the cultural landscapes on its Cultural Landscapes Inventory were in good condition. However, the plan did not define “good condition” or make reference to where such a definition could be found. Without such a definition, neither the precise relationship between the measure and the desired result nor whether performance is being measured consistently from year to year can be determined. According to OMB guidance, an agency’s annual performance plan should also set performance goals to address major management problems that are mission-critical or impede the agency’s ability to meet its programmatic goals. 
We found, however, that agencies did not consistently set goals to address major management problems. Most significantly, the governmentwide performance plan’s first priority objective is to ensure that agencies’ business processes and supporting systems function successfully in the year 2000 and beyond. Although some agencies’ plans acknowledged this issue, most neglected to include steps to address it. For example, in the Small Business Administration’s (SBA) case, although its performance plan discussed actions SBA planned to take to help small businesses deal with the Year 2000 problem, the plan did not discuss or provide information on SBA’s efforts to resolve the agency’s own Year 2000 problems. In addition, the Interior Departmental Overview section of its performance plan listed ensuring that the Department’s critical information systems and processes are Year 2000-compliant by March 31, 1999, as a strategic goal for the Department. However, six of the eight subagency plans did not address the problem. On the other hand, some agencies’ plans included performance goals and measures to show how the Year 2000 issue would be addressed. OPM had a fiscal year 1999 annual performance goal to ensure that OPM’s information technology systems would operate properly on and after January 1, 2000. One of OPM’s measures for this goal was that the agency’s systems would meet or improve on the OMB-established governmentwide target dates for Year 2000 compliance in that all systems would be renovated by September 1998 and would be implemented in a Year 2000-compliant environment by December 1998. Agencies that go beyond the requirements of the Results Act and include baseline or trend data for their performance goals in their annual performance plans provide a more informative basis for assessment of expected performance.
Reliable baseline and trend data help provide congressional and other decisionmakers with a context for assessing whether performance goals are reasonable and appropriate and for identifying questions about how planned performance improvements will be achieved. For example, the Department of Commerce’s (Commerce) annual performance plan generally provided performance data from fiscal year 1997 when available, the fiscal year 1998 goal, and the fiscal year 1999 goal. As an illustration, the performance plan showed that although the fiscal year 1998 goals for the lead times and the accuracy of flash flood warnings were the same as the actual levels achieved in fiscal year 1997, performance improvements were planned in both areas for fiscal year 1999. The annual performance plans that we examined were generally successful in showing how the agencies’ missions and strategic goals were related to their performance goals. Indicating such relationships is important for showing how an agency will chart annual progress toward the achievement of its long-term strategic goals. The Environmental Protection Agency’s (EPA) performance plan included one of the most effective presentations in this regard. This plan included the mission statement and goals from the strategic plan and a section on the relationship between the two plans, including changes in the strategic goals and objectives since the strategic plan was issued. In addition, the plan was primarily organized by the strategic goals and objectives, with performance goals, resources, strategies, and performance measures grouped by strategic goal and objective. As required by the Results Act, agencies generally developed performance plans that covered all program activities in their budget requests—with many agencies establishing for the first time a direct connection among plans, budgets, performance information, and the related congressional resource allocation and oversight processes.
Agencies used various approaches to make this connection. Some provided descriptions or tables that associated performance goals with their existing program activities, and others took advantage of the flexibility provided by the Results Act to aggregate, disaggregate, and consolidate program activities to indicate coverage. For example, EPA’s and the Department of Transportation’s performance plans showed a relationship between performance goals and program activities by linking them to strategic goals and/or objectives. Whether the agencies used existing program activities or aggregated, disaggregated, or consolidated program activities, the most useful linkages indicated how funding from the agency’s program activities would be allocated to a discrete set of performance goals. In contrast, the performance plans that associated one or more performance goals with one or more program activities were less informative in this regard. This is because such associations made it difficult to determine whether all activities were substantively covered or to understand how specific program activities were intended to contribute to the agency’s results. For example, the SBA plan did not convey which performance goals covered which program activities or whether all of SBA’s program activities were covered by performance goals. The Interior plan contained some goals for NPS that were not associated with any program activities, even though the goals apparently required some funding. For example, NPS’ goal of improving the quality of its employee housing through removing, replacing, or upgrading units was not related to any program activity. Although the Department of Housing and Urban Development’s (HUD) plan related some of the agency’s program activities to its goals, it did not cover all of HUD’s program activities or explain whether those activities were aggregated, disaggregated, or consolidated.
For example, HUD did not explain which performance goals covered the $310 million drug elimination grants for its low-income housing program. Over the last several years, we have produced a body of work pointing to mission fragmentation and overlap in a wide variety of federal program areas. Our work has shown that uncoordinated program efforts can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. We have suggested that agencies’ efforts under the Results Act provide a potentially effective vehicle for ensuring that crosscutting program goals are consistent; strategies are mutually reinforcing; and, as appropriate, progress is assessed through the use of common performance measures. Last fall, when we reviewed agencies’ strategic plans, we stressed that coordinating crosscutting programs can be a difficult and time-consuming process. To underscore our concern, we highlighted the issue as one of the most difficult planning challenges requiring continued progress. In our review of agencies’ September 1997 strategic plans, we found that those plans provided better descriptions of crosscutting programs and coordination efforts than the agencies’ draft strategic plans. The most useful of the strategic plans contained presentations that listed other agencies involved in crosscutting program areas and outlined approaches to coordinating such areas with those agencies. We noted that such presentations illustrated the magnitude of, and provided a foundation for, the much more difficult work that lies ahead—undertaking the substantive coordination that is needed to ensure that crosscutting programs are effectively managed. Since then, agencies appear to have made uneven progress. 
Our review of agencies’ annual performance plans suggested that the needed first step is now being taken more consistently—the plans often identified crosscutting efforts, and some included helpful listings of other agencies with which responsibility for addressing similar national issues is shared. However, few plans attempted the more challenging description of how the agencies expected to coordinate their efforts with those of other agencies or reflected the existence of substantive coordination. As an illustration, Commerce identified a number of other federal agencies’ programs that are related to Commerce’s three strategic themes and its bureaus’ activities. However, the agency’s performance plan did not indicate how Commerce would work with these other agencies in addressing shared activities. For example, the plan associated 12 other federal agencies with Commerce’s International Trade Administration through the Trade Promotion Coordinating Committee. However, neither the plan nor the supporting congressional budget justification documents explained how Commerce can use its key role in chairing the Committee to accomplish Commerce’s strategic goal of implementing the national export strategy. Similarly, the National Aeronautics and Space Administration’s (NASA) performance plan also took the first step of identifying other agency or international partners involved in specific efforts related to NASA’s work. However, it did not discuss the extent to which NASA had coordinated with other agencies in establishing the goals, objectives, and associated performance targets. For example, in describing the objective of developing next-generation computational design tools, the plan indicated that NASA’s efforts were part of the Federal High Performance Computing and Communications initiative.
However, there was no discussion about whether NASA coordinated its performance target of a 200-fold improvement with other federal partners; nor was there an explanation of how NASA’s effort will contribute to the overall federal initiative, separately from the contributions that other agencies will make. A few performance plans were more useful in that they discussed how agencies expected to coordinate efforts with other agencies that have similar responsibilities. Similar to the most useful strategic plans, such discussions underscored the magnitude of the coordination work that lies ahead. For example, Education’s plan contained not only an extensive list of other agencies with which the Department shares a common result but also a general discussion of its coordination efforts and plans. For its strategic objective that every state have a school-to-work system that increases student achievement, improves technical skills, and broadens career opportunities for all, Education’s plan indicated that the agency plans a coordination effort with Labor to jointly administer the National School-to-Work Office Program and improve the management of that program by aligning grant-making, audit, technical assistance, and performance reporting functions. Education can build on its foundation by identifying (1) performance goals that reflect its crosscutting programs, (2) how Education and other agencies will work to ensure that program strategies are mutually reinforcing, and (3) whether any common performance measures are to be used. In general, the annual performance plans did not provide sufficiently complete discussions of the strategies and resources that agencies will use to achieve their performance goals. 
Discussions in the plans of agency strategies, which can include program initiatives, partnerships, and operational processes, frequently did not yield a clear understanding of how the strategies would lead to improved agency performance and the achievement of annual performance goals. Moreover, the plans often lacked complete discussions of the capital, human, financial, and other resources that the agencies needed to achieve their goals. The absence of fully developed discussions relating strategies and resources to goals undermined the usefulness of the plans. As a result, congressional and other decisionmakers do not have complete information on which to judge the reasonableness of an agency’s proposed strategies and resources needed to achieve its goals. Most agencies’ annual performance plans did not clearly describe how the performance goals would be achieved. The performance plans often provided listings of the agencies’ current array of programs and initiatives but provided only limited perspective on how these programs and initiatives were necessary to or helpful for achieving results. For example, the General Services Administration (GSA) performance plan provided descriptive information on GSA’s activities as opposed to specific strategies for achieving performance goals. One of GSA’s performance goals was to increase market share for its vehicle fleet program. Although the plan contained measures and target levels for fiscal years 1998 and 1999, the accompanying narrative gave little indication of how GSA intended to achieve the target levels. Instead, the plan provided general statements about leveraging GSA’s competitive pricing with broad market penetration and government downsizing. The plan offered no information on a specific approach or strategy for how GSA would achieve broad market penetration or take advantage of downsizing to meet the market share target levels for its vehicle fleet program.
Far less typical, but far more useful in our view, was the approach taken by the Federal Emergency Management Agency (FEMA), which generally presented strategies that were clear and appeared logically related to annual performance goals. For example, to achieve its strategic goal of protecting lives and preventing the loss of property, FEMA’s performance plan contained a performance goal to increase the number of communities in each state participating in its Project Impact program, which is designed to promote predisaster mitigation. Strategies for achieving this goal included working with states and federal agencies to identify candidate communities, providing grants as seed funding, providing technical information, and monitoring progress. We also observed that most agencies did not build on the work they had done last fall in developing their strategic plans, in which they were to identify factors external to the agency that would affect the degree to which they achieve their strategic goals. Such factors could include economic, demographic, social, technological, or environmental factors. Assessments of external factors help agencies and Congress judge the likelihood of an agency achieving its strategic goals and the actions needed to better meet those goals. Similar to the situation with strategic goals, discussions of the influence external factors can have on annual performance goals, although not required by the Results Act, can provide important context for understanding both the factors other than agency performance that can affect whether goals are achieved and the adequacy of the agencies’ plans for mitigating negative factors and taking advantage of positive ones. The value of including such an analysis in annual performance plans is shown by SBA’s plan.
SBA provided an informative discussion of external factors, such as the economy and continued support from stakeholder and program partners, that might affect the agency’s ability to achieve performance goals related to its strategic goal to become a 21st century leading-edge financial institution. Within this context, the plan also included a discussion of actions SBA can take to mitigate these factors. Most agencies’ annual performance plans did not adequately describe capital, human, information, and financial resources and relate them to achieving performance goals. Even in cases where the achievement of performance goals seems to depend on increased staffing levels or capital expenditures, such increases were not always described in the plans. For example, VA included a discussion concerning the expansion and construction of its cemetery system, but it did not identify the additional dollars, people, or equipment necessary to achieve this goal. Addressing information technology issues in annual performance plans is important because of technology’s critical role in achieving results, the sizable investment the federal government makes in information technology (about $145 billion between 1992 and 1997), and the long-standing weaknesses in virtually every agency in successfully employing technology to further mission accomplishment. The vital role that information technology can play in helping agencies achieve their goals was not clearly described in agency plans. In the absence of such discussions, the plans also generally did not reference other appropriate documents that might contain information on the agencies’ technology plans.
The failure to recognize the central role of technology in achieving results is a cause of significant concern because, under the Paperwork Reduction and Clinger-Cohen Acts, Congress put in place clear statutory requirements for agencies to better link their technology plans and information technology use to their missions and programmatic goals. Without at least some discussion of information technology, agency plans are not complete, and their usefulness to congressional and other decisionmakers is accordingly undermined. The Department of State’s (State) and Interior’s performance plans were fairly typical of all agencies’ plans in terms of the lack of attention to technology issues. Although the State plan discussed upgrading the information technology infrastructure, along with other resources, it did not address how such an upgrade would be used to improve performance or help achieve specific performance goals. At Interior, with the exception of the United States Geological Survey’s plan, the subagency plans generally did not discuss how information technology will be used to help achieve annual performance goals or improve performance for long-term objectives. As discussed earlier in the report, most agencies developed performance goals that cover the program activities in their budget requests. In addition, OMB Circular A-11 states that agencies should display, by program activity, the funding level being applied to achieve performance goals. However, most agencies did not clearly convey in their annual performance plans the amount of funding needed to achieve a discrete set of performance goals. For example, as illustrated in figure 1, VA aggregated program activities under specific budget accounts and consolidated these activities across those budget accounts to align those accounts (and the underlying program activities) with groupings of performance goals for two of its business lines—(1) Education and (2) Vocational Rehabilitation and Counseling.
Although this association established a relationship between numerous program activities and numerous performance goals, it did not identify the funding levels that are needed to achieve discrete sets of goals. As a result, the plan did not convey how requested funds under the related program activities will be allocated to achieve performance. In contrast, by identifying how much funding is needed to support discrete sets of performance goals and showing where that funding was included in the agency’s budget request, an agency’s annual performance plan can give Congress and other decisionmakers information needed to relate decisions about desired performance with decisions about funding levels. A few agencies, such as EPA and some components of the Department of Treasury (Treasury), proposed changes to their program activity structures to better facilitate such an allocation and establish a clearer connection between budgetary decisionmaking and performance. For example, EPA proposed a uniform program activity structure across all of its budget accounts in which each program activity represents one of the agency’s strategic goals. Using this proposed program activity structure in its performance plan, EPA showed by account the funding it is requesting to achieve each strategic objective and the supporting annual performance goals. Figure 2 illustrates this relationship. Similarly, Treasury’s Bureau of Alcohol, Tobacco and Firearms (ATF) has changed its program activities to align them with its strategic goals. Because ATF’s revised program activities reflect its strategic goals, its performance plan clearly showed how ATF would allocate its resources, and the bureau’s performance goals could be readily and logically related to the program activity structure. Figure 3 shows the relationship described in ATF’s plan between performance goals and program activities.
Credible performance information is essential for accurately assessing agencies’ progress toward the achievement of their goals and, in cases where goals are not met, for identifying opportunities for improvement or determining whether goals need to be adjusted. Under the Results Act, agencies’ annual performance plans are to describe the means that will be used to verify and validate performance data. However, the majority of the plans we reviewed provided only limited confidence that performance information would be credible. Specifically, although most of the plans describe procedures for verifying and validating performance information, these plans lack specific details on the actual procedures the agencies will use. In addition, few plans include a discussion of the known limitations in the agencies’ existing data systems. Of those plans that did include such a discussion, almost none discussed the agency’s strategies to address the known limitations. In our report on agencies’ September 30 strategic plans, we noted that many agencies had long-standing and serious shortcomings in their ability to generate reliable and timely performance data. We suggested that the annual performance plans provided an opportunity to articulate how these shortcomings will be addressed. Performance plans can help do this by including a discussion intended to provide confidence that the means agencies will use to verify and validate performance information, such as audits, program evaluations, independent external reviews, and internal controls, will yield performance data of sufficient quality to support decisionmaking. Some performance plans, such as NASA’s and SSA’s, did not appear to describe any verification and validation procedures that these agencies used or expected to use to ensure that performance data are sufficiently complete, accurate, and consistent. 
Most of the plans provided descriptions of procedures for verifying and validating performance data, but these descriptions were often superficial. For example, although SBA’s plan included a general discussion of verification and validation procedures, it did not specify how the agency would ensure that its performance data are credible. Specific verification and validation systems, related measures, and the milestones for verification and validation generally were not cited in the plans. The Nuclear Regulatory Commission’s (NRC) plan discussed validating the list of primary systems and measuring levels of satisfaction with the accuracy and availability of information in the systems, but it did not discuss how NRC intends to actually validate these data to ensure that they are accurate and complete. The performance plans of some other agencies, such as Labor, mentioned only that their Inspectors General (IGs) would be responsible for auditing the agencies’ data systems. Agencies that expected to use the IGs generally suggested that the review of performance data would be done as part of the annual financial audit of the agency. The IGs and external auditors can make important contributions toward ensuring that performance data are valid, but these contributions cannot substitute for management actions to ensure that the data are sound. Moreover, these plans generally did not discuss whether agencies had coordinated with their IGs to perform this work. Agencies and IGs need to jointly determine how the IGs’ resources could best support verification and validation efforts, given the IGs’ continuing audit responsibilities. In making this determination, agencies need to carefully consider the most appropriate means for verifying and validating performance information. Education and HHS’ Indian Health Service (IHS) proved to be exceptions regarding the verification and validation of performance data. 
Their performance plans included a variety of specific and credible procedures to ensure that Congress, agency managers, and other decisionmakers will have performance information of sufficient credibility to support decisions. For example, Education’s plan included such procedures as a mix of audits, independent external reviews, and program evaluations, as well as the scope and timing of what its IG would be undertaking. IHS’ plan included such procedures as performing editing checks, monitoring the reasonableness of data, and developing software to allow for the transmission of data to a centralized database. Separate from the issues associated with the need to ensure that performance data are verified and valid, we continue to be concerned about the lack of capacity in many federal agencies to undertake the program evaluations that will be vital to the success of the Results Act. In reviewing agencies’ strategic plans, we found that many agencies had not given sufficient attention to how program evaluation will be used in implementing the Results Act and improving performance. More recently, we reported that agencies’ program evaluation capabilities would be challenged to meet the new demands for information on program results. We found that the resources allocated to conducting program evaluations were small and unevenly distributed across the 13 departments and 10 independent agencies we surveyed for that report. The findings of that report are a major concern because a federal environment that focuses on results—where federal efforts are often but one factor among many that determine whether and to what extent goals are achieved—depends on program evaluations to provide vital information about the contribution of federal efforts. For example, the success of the United States Agency for International Development (USAID) in achieving its intended results is affected by many factors and programs beyond USAID’s control. 
Development programs of the international donor community and the governments and institutions within the developing countries themselves all can have greater or lesser influences on advancing social and economic development. USAID, as well as other agencies, can use program evaluations to help isolate the degree to which its efforts are contributing to results and what actions it specifically can take to better meet its goals. In general, agencies’ annual performance plans did not include discussions of known data limitations and strategies to address them. Such limitations can be a significant challenge to performance measurement. Over the years, we and others have identified problems with the financial and information systems at several agencies. The recent governmentwide financial statement audit further raised concerns about the reliability of data. The amount of progress still needed to obtain high-quality financial data suggests the types of challenges that agencies will face in obtaining high-quality performance data. Agencies face particular challenges when they must rely on other organizations to provide important performance information. For example, State’s performance plan acknowledged that the agency would rely on data from external sources to measure performance. However, the plan did not describe how limitations in the quality of that data would affect efforts to assess and improve performance. These data limitations included inconsistencies in data collection from location to location; from year to year; or from one data source to another, especially when data from more than one source must be combined to measure performance. The plan would be more useful if it recognized and identified significant data limitations and their implications for assessing performance. Education’s performance plan contained a good example of an agency’s recognition of data limitations. 
In the quality of performance data section of its plan, Education stated that ensuring the accurate and efficient collection of its student loan data is vital to achieving one of its strategic goals. However, we and Education’s IG have previously reported on the inadequacy of Education’s student loan data. Because of these inadequacies, Education had been unable to report on the Department’s financial position in a complete, accurate, and reliable manner. In its performance plan, Education acknowledged that its student aid delivery system has suffered from significant data quality problems. The plan outlined several steps the Department plans to take to address these problems, including improving data accuracy by establishing industrywide standards for data exchanges, receiving individual student loan data directly from lenders, expanding efforts to verify data reported to the National Student Loan Data System, and preparing a systems architecture for the delivery of federal student aid. The fiscal year 1999 annual performance plans represent the first attempt across all executive agencies to carry out the annual performance planning called for under the Results Act. Although these plans collectively suggested that annual performance planning, as established under the Act, can be a powerful device for better informing congressional and executive branch decisionmaking, substantial further development is needed before the plans will consistently be able to support that goal. In crafting the Results Act, Congress understood—and similar foreign experiences confirmed—that effectively implementing management changes of the magnitude envisioned under the Act would take several cycles, although each cycle should see marked improvements over the preceding ones. Both OMB’s fiscal year 1999 annual performance plan and the fiscal year 1999 governmentwide performance plan contain commitments to implement the Results Act. 
The OMB annual performance plan includes a goal to improve the performance of government programs by meeting the statutory requirements of the Act. The governmentwide performance plan includes Results Act implementation as 1 of 22 priority management objectives that the administration will focus on in 1999. According to a senior OMB official, OMB incorporates lessons learned from performance plans in the guidance it provides agencies on developing future plans. OMB has also committed to reviewing agencies’ subsequent plans to ensure that improvements and appropriate changes are made. On the basis of our review of the agency plans, it appears that OMB can build on its commitment by designing and implementing a broad, aggressive performance planning improvement effort. Specifically, our work suggests that giving priority attention to the following key opportunities for improvement will lead to the greatest increases in usefulness: Better articulating a results-orientation. Agency performance plans could be more useful if they more consistently incorporated results-oriented goals and showed more direct relationships among goals and measures. More results-oriented agency goals will also facilitate understanding of the relationships among agencies’ efforts, their planned contributions, and the goals included in the governmentwide performance plan. The value of agencies’ annual performance plans also could increase if the plans consistently included useful and informative baseline and trend data, as some agencies did in their 1999 performance plans. Such data provide decisionmakers with a context for assessing whether performance targets are appropriate and reasonable. The value of the performance plans could also be augmented if they more fully included goals that addressed mission-critical management issues (for example, historic problems in maximizing the use of information technology). 
Precise and measurable goals for resolving mission-critical management problems are important to ensuring that the agencies have the institutional capacity to achieve their more results-oriented programmatic goals. Consistently including goals in individual agency plans to address mission-critical management issues also will facilitate the integration of governmentwide and agency performance planning processes. Section IV of the governmentwide annual performance plan is devoted to improving performance through better management and lists the administration’s priority management objectives. We found in our review of the governmentwide plan that the clarity and effectiveness of OMB’s discussion of the objectives in that plan could be improved by a more integrated and focused discussion of the strategies associated with the objectives. Augmented agency performance plans can be helpful in this regard by showing that agencies, where appropriate, are positioned to address governmentwide priority management objectives, such as the Year 2000 computing crisis. To facilitate this effort, we recommended in our review of the governmentwide performance plan that OMB ensure that the governmentwide management priorities and performance goals contained in that plan be reflected in relevant agency performance plans. Coordinating crosscutting programs. Our work has suggested that program overlap and mission fragmentation are important issues that need to be addressed. We also have noted, consistent with OMB’s guidance, that a focus on results implies that crosscutting programs will be coordinated. At the time of our reviews, many agencies’ annual performance plans identified crosscutting efforts, with some listing other agencies with which they shared the same or similar result, but the substantive work of coordination was not yet apparent. 
Specifically, few of the plans showed evidence of the work necessary to ensure that crosscutting programs have mutually reinforcing goals; complementary strategies; and, as appropriate, common performance measures. Not surprisingly, given the amount of coordination that still needs to take place, in our review of the fiscal year 1999 governmentwide performance plan we found that substantial opportunities exist for enhancing the discussion of crosscutting efforts in that plan as well. By building on the initial progress that some agencies have made, the usefulness of performance plans could be enhanced if all agencies more consistently identified the results-oriented annual performance goals that involve other federal agencies and set intermediate goals that clarify the agency’s specific contribution to the common result. Moreover, because of the still early state of coordination of crosscutting programs, the more useful plans will continue to describe relevant interagency coordination efforts. Clearly showing how strategies will be used to achieve goals. Although not explicitly required by the Results Act, the more useful annual performance plans discussed how the strategies and approaches would lead to results. The listings of current programs and initiatives that often were included in agencies’ plans are useful in providing an understanding of what agencies do. Presentations that more directly explain how programs and initiatives achieve goals will be most helpful to Congress as it assesses the degree to which strategies are appropriate and reasonable. Discussions of external factors and how different governing tools (for example, intergovernmental partnerships, performance-based contracts, financial credits) will be, or can be, used in achieving goals could further enhance the plans. 
Such discussions could also assist in the development of a base of governmentwide information on the strengths and weaknesses of various tools in addressing differing public policy issues. Showing performance consequences of budget decisions. The Results Act was intended to help Congress develop a clearer understanding of what is being achieved in relation to the money being spent. In the fiscal year 1999 performance plans, agencies generally covered all of the program activities in their budget requests. However, most plans did not clearly convey the requested funding level associated with achieving a discrete set of performance goals and clearly identify where that funding was included in the structure of agencies’ budget requests. Agencies, OMB, and Congress can take advantage of three initiatives—budget and program activity changes, the implementation of cost accounting, and the initiation of performance budgeting pilots—to help ensure that performance plans better convey the performance consequences of budget decisions. Congress and OMB have clearly expressed a willingness to consider changes in agencies’ budget account and/or program activity structures in future years to more clearly and readily relate expected performance to funding requests. For example, in its fiscal year 1998 appropriations reports, the House Appropriations Committee stated that it would consider any requests for program activity changes that ensure that budget submissions display amounts requested against program activity structures for which annual performance goals and measures have been established. Similarly, OMB’s Circular A-11 encouraged agencies to consider proposing changes to the budget account structure to facilitate an understanding of performance. 
In addition to more closely linking expected performance to agency budget requests, Congress, in crafting the Results Act, expected that agencies, whenever possible, would develop performance measures that correlated the level of program activity with program costs, such as costs per unit of result, costs per unit of service, or costs per unit of output. Agencies were expected to assign a high priority to developing these types of unit cost measures. The successful implementation of the managerial cost accounting standards recommended by the Federal Accounting Standards Advisory Board and issued by OMB and GAO is vital to providing agencies the program cost information needed to develop such performance measures. The Results Act also demonstrates Congress’ interest in determining the extent to which performance can be related to changes in funding levels. The Act requires the Director of OMB to designate at least five federal agencies to participate in a 2-year pilot in performance budgeting in which the budgets of those agencies will display varying levels of performance that would result from different budgeted amounts for one or more of an agency’s major functions or operations. Agencies’ progress in establishing reliable cost accounting systems and allocating resources to performance goals, as well as progress in defining goals and measuring performance, may affect how OMB designs and determines participation in the performance budgeting pilots. More broadly, as agencies continue to define relationships between performance planning and budget structures, Congress, OMB, and agencies can explore whether changes in budget presentations can provide agencies with needed flexibility and accountability while ensuring appropriate congressional oversight and control. Building capacity within agencies to gather and use performance information. 
Our work suggests that few agencies have adequate procedures in place to ensure that the performance data generated will be of sufficient quality to confidently make decisions. The financial audits under the CFO Act—where only 10 of the 24 CFO Act agencies have been able to obtain an unqualified opinion from independent auditors—have shown how far most agencies have to go to be able to generate reliable year-end financial information. It is important that agencies continue to make progress on developing financial systems that can produce timely financial information throughout the year and work with their IGs to explore how the IGs can contribute to improving the credibility of performance data. Moreover, our recent work continues to show that many agencies are not well-positioned to undertake the program evaluations that will be critical to identifying why goals are not met and determining the best improvement strategies. The relatively limited level of agencies’ evaluation capabilities suggests that evaluation resources will need to be carefully targeted and coordinated with nonfederal evaluation efforts to ensure that key questions about program results are adequately addressed. In our Executive Guide, we noted that leading results-oriented organizations consistently strive to ensure that their day-to-day activities support their organizational missions and move them closer to accomplishing their strategic goals. We reported that in practice, these organizations see the production of a strategic plan—that is, a particular document issued on a particular day—as one of the least important parts of the planning process. Annual performance plans should be viewed the same way. The performance improvements expected under the Results Act will not occur because an agency has issued strategic and annual performance plans. 
Rather, performance improvements occur when agency managers and external decisionmakers use those documents and the planning and management processes that underpin them. Because fiscal year 1999 is to mark the first year of governmentwide implementation of the Results Act’s annual performance planning requirements, Congress and the agencies lack a common base of experience for how the performance-based approach to management envisioned under the Results Act can best be used to support congressional and executive branch decisionmaking. Building this base of experience will require ensuring that performance-based management is integrated into the way programs are managed and decisions are made. The importance of building this base of experience also suggests that any successful effort to improve the usefulness of agency performance plans will require the active partnership of Congress, OMB, and the agencies because of the potentially broad use of such plans both within Congress and the executive branch. We found that, on the whole, future annual performance plans would be more useful if they provided clearer pictures of intended performance across an agency, more fully articulated what strategies and resources will be used to achieve goals and how those strategies and resources will lead to improved performance, and provided much greater confidence that performance information will be credible and useful for decisionmaking. OMB’s efforts to identify lessons learned from the agencies’ fiscal year 1999 performance plans and its commitment in the fiscal year 1999 governmentwide performance plan to review agencies’ subsequent plans to ensure improvements are made are a first step. However, the need for progress across all agencies and the range of annual planning issues that need to be addressed underscore the scope of the effort that lies ahead and suggest that a more concerted, active, and specific improvement agenda needs to be developed and put in place. 
Beyond improving the quality of agencies’ written plans, experience is needed in using those plans to inform congressional and executive branch decisionmaking. In this regard, it is vital that all agencies begin implementing their fiscal year 1999 annual performance plans in October 1998 and seek to prepare improved plans for fiscal year 2000. However, the limited experience with the use of the Results Act at the federal level thus far suggests that targeting key program areas for special congressional and executive branch attention can help agencies develop a common base of experience in using Results Act principles and processes to drive performance and management improvements. A coordinated OMB, congressional, and agency effort could be helpful in three ways: First, the effort could develop a body of specific examples that demonstrate where congressional and executive branch use of the performance-based approach to management and accountability contained in the Results Act helped to inform decisionmaking. Initially, the evidence for this use will be seen in such areas as improved and more focused program management within agencies; more informed executive branch and congressional budget decisions, including the better alignment of performance planning and budgeting processes; and better information available and used as part of congressional authorization and oversight efforts. Most importantly, over time, the use of performance plans should lead to substantial improvements in program performance. Second, a common OMB, congressional, and agency focus on selected program areas also will aid in the development of a set of agreed-upon “best practices” in performance planning and the integration of results-oriented performance information into decisionmaking and management. 
Including programs that represent a cross-section of service delivery mechanisms will provide insights into how the various tools of government (for example, regulation, direct service, intergovernmental funding, tax expenditures, loans, or loan guarantees) can be used individually and together to address public policy issues. This will aid in building an understanding of how implementation of the Results Act may differ—such as in the nature of federal goals—depending on the specific characteristics of different tools. Third, a common focus on using the Results Act to make decisions for selected program areas can also assist in the identification of experience-based similarities and differences in the congressional and executive branch needs for the content of annual performance plans and any changes to OMB guidance and Results Act statutory requirements that may be necessary to better meet those needs. To fully implement OMB’s commitment to evaluate its and agencies’ experience in developing the fiscal year 1999 performance plans and to improve agencies’ performance plans for the future, we recommend that the Director, OMB, implement a concrete agenda aimed at substantially enhancing the usefulness of agencies’ performance plans for congressional and executive branch decisionmaking. The five key opportunities for improvement that we identified—better articulating a results orientation; coordinating crosscutting programs; clearly showing how strategies will be used to achieve goals; showing performance consequences of budget decisions; and building capacity within agencies to gather and use performance information, including program evaluation—can serve as core elements of the improvement effort. For example, OMB could work with agencies to ensure that annual performance plans include presentations that more directly explain how agency programs and initiatives will achieve goals. 
Similarly, discussions of external factors and how different governing tools (e.g., intergovernmental partnerships, performance-based contracts, financial credits) will be, or can be, used in achieving goals would help enhance the usefulness of agencies’ plans. To go beyond the formal requirements of the Results Act to issue annual performance plans and performance reports, and to build a base of experience for how the performance-based approach to management envisioned under the Results Act can be used to improve program results and support congressional and executive branch decisionmaking, we also recommend that the Director of OMB work with Congress and the agencies to identify specific program areas that can serve as examples of best practices. This would help demonstrate the use and benefits of results-oriented management and where concrete information about program results contributes directly to executive branch and congressional decisionmaking. This effort could also assist Congress in identifying and considering opportunities to integrate results-oriented performance information into existing authorization, oversight, and appropriations processes. For the effort to be most effective, several criteria should be used to identify specific program areas, such as those program areas where agreement exists between Congress and the administration that the areas likely will be on legislative and oversight agendas; those programs that have the most direct influence on meeting the central Results Act purpose of improving citizens’ confidence in government; and programs needing priority management attention, such as those listed in the fiscal year 1999 governmentwide performance plan. On July 17, 1998, we provided a draft of this report to the Director of OMB for comment. 
We did not provide a draft to individual agencies discussed in this report because the reports we prepared on individual agency plans in response to your request were provided to the relevant agencies for comment. Those comments were reflected, as appropriate, in the final versions of those reports. On August 19, 1998, we received OMB’s written comments; see appendix IV for a copy of the letter from the Acting Deputy Director for Management. On August 20, 1998, we received additional technical comments from a senior OMB official; we have incorporated these comments where appropriate. OMB’s August 19, 1998, letter includes comments on both this report and our companion report on the fiscal year 1999 governmentwide performance plan. Our evaluation of OMB’s comments on this report is provided below; OMB’s comments on our assessment of the governmentwide performance plan are discussed in our companion report. OMB generally agreed with our observations and said that the report was an expansive portrait of the fiscal year 1999 plans that contained many useful suggestions. OMB did, however, raise two related issues about the report. OMB commented that the report predominantly focuses on what was included or lacking in the annual performance plans rather than on how the plans would be used to provide better services and products to the American public and to improve the quality and nature of programming, funding, and management decisions made within the executive branch and by Congress. OMB also commented that the report does not clearly distinguish between major and secondary elements of an annual plan, such as between factors that are and are not required by statute. We agree with OMB that the use of annual performance plans by congressional and executive branch decisionmakers is the essential indicator of the effective implementation of the Results Act. 
As we said in our report, performance improvements will not occur because an agency issues a plan, but rather when plans—and the planning and management processes that underpin them—are used by agency managers and other decisionmakers. However, as our report shows, significant improvements are needed in agencies’ plans before they will be broadly useful to congressional and executive branch decisionmakers. Thus, we reported on the presence or absence of characteristics that affect the usefulness of annual performance plans and identified the key opportunities for improving those plans. Similarly, because we focused our review on the elements that are most important to developing useful plans, we disagree with OMB’s comment that we did not distinguish between major and secondary elements of performance plans. Moreover, as we note in the report, the major elements of our evaluation—goals and performance measures, program strategies, and the existence of valid performance data—are based on specific criteria set forth in the Results Act, the Act’s legislative history, or OMB’s guidance to agencies on preparing performance plans. We are sending copies of this report to the Minority Leader of the House; the Ranking Minority Members of your Committees; Committee Chairmen who requested our review of the fiscal year 1999 governmentwide annual plan and the Ranking Minority Members of their respective Committees; other appropriate congressional committees; and the Director, Office of Management and Budget. We will also make copies available to others on request. The major contributors to this report are listed in appendix V. Please contact L. Nye Stevens on (202) 512-8676 or Paul L. Posner on (202) 512-9573 if you or your staff have any questions. 
To summarize our observations on agencies’ fiscal year 1999 annual performance plans and to help us identify opportunities for agencies to improve future performance plans, we analyzed the information contained in our reviews of the annual performance plans of the 24 CFO Act agencies. Our review of each of the agencies’ performance plans and our summary analysis of all 24 plans were based on our guides for congressional and evaluator review of annual performance plans. For purposes of assessing the plans, we collapsed the Results Act’s requirements for annual performance plans into the three core questions that structure those guides. The three questions are: To what extent does the agency’s performance plan provide a clear picture of intended performance across the agency? How well does the performance plan discuss the strategies and resources the agency will use to achieve its performance goals? To what extent does the agency’s performance plan provide confidence that its performance information will be credible? We used the questions and associated issues contained in the guides to help us identify strengths and weaknesses in the performance plans, with a particular focus on assessing the overall usefulness of the plans for congressional and other decisionmakers. In doing our summary analysis, we examined and classified our reviews of the individual agency plans as related to the questions, issues, and criteria in the guides to discern any themes or trends and then to develop an overall, descriptive characterization of our observations and judgments about the agencies’ plans. We also reviewed parts of selected agencies’ annual performance plans, as needed, to supplement our analysis of our individual agency reviews and to elaborate further on particular issues. To further help us identify opportunities for agencies to improve future performance plans, we also drew on other related work, including our recent reports on Results Act implementation. 
We reviewed agency performance plans from February through June 1998 and did our work according to generally accepted government auditing standards. We requested comments from the Director of OMB on a draft of this report. OMB’s comments are discussed in the “Agency Comments and Our Evaluation” section in this report. In addition, we provided drafts of our individual reviews on agencies’ plans to the relevant agencies for comment. The agencies’ comments are reflected, as appropriate, in our products on their respective plans. Results Act: Observations on the U.S. Department of Agriculture’s Annual Performance Plan for Fiscal Year 1999 (GAO/RCED-98R, June 11, 1998). Results Act: Observations on the Department of Commerce’s Annual Performance Plan for Fiscal Year 1999 (GAO/GGD-98-135R, June 24, 1998). Results Act: DOD’s Annual Performance Plan for Fiscal Year 1999 (GAO/NSIAD-98-188R, June 5, 1998). The Results Act: Observations on the Department of Education’s Fiscal Year 1999 Annual Performance Plan (GAO/HEHS-98-172R, June 8, 1998). Results Act: Observations on DOE’s Annual Performance Plan for Fiscal Year 1999 (GAO/RCED-98-194R, May 28, 1998). The Results Act: Observations on the Department of Health and Human Services’ Fiscal Year 1999 Annual Performance Plan (GAO/HEHS-98-180R, June 17, 1998). Results Act: Observations on the Department of Housing and Urban Development’s Fiscal Year 1999 Annual Performance Plan (GAO/RCED-98-159R, June 5, 1998). Results Act: Department of the Interior’s Annual Performance Plan for Fiscal Year 1999 (GAO/RCED-98-206R, May 28, 1998). Observations on the Department of Justice’s Fiscal Year 1999 Performance Plan (GAO/GGD-98-134R, May 29, 1998). Results Act: Observations on Labor’s Fiscal Year 1999 Performance Plan (GAO/HEHS-98-175R, June 4, 1998). The Results Act: Observations on the Department of State’s Fiscal Year 1999 Annual Performance Plan (GAO/NSIAD-98-210R, June 17, 1998). 
Results Act: Observations on the Department of Transportation’s Annual Performance Plan for Fiscal Year 1999 (GAO/RCED-98-180R, May 12, 1998). Results Act: Observations on Treasury’s Fiscal Year 1999 Annual Performance Plan (GAO/GGD-98-149, June 30, 1998). Results Act: Observations on VA’s Fiscal Year 1999 Performance Plan (GAO/HEHS-98-181R, June 10, 1998). Results Act: EPA’s Annual Performance Plan for Fiscal Year 1999 (GAO/RCED-98-166R, Apr. 28, 1998). Results Act: Observations on the Federal Emergency Management Agency’s Fiscal Year 1999 Annual Performance Plan (GAO/RCED-98-207R, June 1, 1998). Results Act: Observations on the General Services Administration’s Annual Performance Plan (GAO/GGD-98-110, May 11, 1998). Managing for Results: Observations on NASA’s Fiscal Year 1999 Performance Plan (GAO/NSIAD-98-181, June 5, 1998). Results Act: NSF’s Annual Performance Plan for Fiscal Year 1999 (GAO/RCED-98-192R, May 19, 1998). Results Act: NRC’s Annual Performance Plan for Fiscal Year 1999 (GAO/RCED-98-195R, May 27, 1998). Results Act: Observations on the Office of Personnel Management’s Annual Performance Plan (GAO/GGD-98-130, July 28, 1998). Results Act: Observations on the Small Business Administration’s Fiscal Year 1999 Annual Performance Plan (GAO/RCED-98-200R, May 28, 1998). The Results Act: Observations on the Social Security Administration’s Fiscal Year 1999 Performance Plan (GAO/HEHS-98-178R, June 9, 1998). The Results Act: Observations on USAID’s Fiscal Year 1999 Annual Performance Plan (GAO/NSIAD-98-194R, June 25, 1998). The examples used in this report are drawn from the assessments of the individual agency annual performance plans that were done by staff across GAO. Thus, in addition to the individuals noted above, the staff who worked on the individual agency plan assessments also made important contributions to this report. These individuals are identified in the separate reports on agency plans. 
Pursuant to a congressional request, GAO summarized its reviews of individual federal agency performance plans, focusing on opportunities to improve the usefulness of future performance plans for decisionmakers. GAO noted that: (1) the agencies' first annual performance plans showed the potential for doing performance planning and measurement as envisioned by the Government Performance and Results Act to provide decisionmakers with valuable perspective and useful information for improving program performance; (2) however, overall, substantial further development is needed for these plans to be useful in a significant way to congressional and other decisionmakers; (3) most of the plans that GAO reviewed contained major weaknesses that undermined their usefulness in that they: (a) did not consistently provide clear pictures of agencies' intended performance; (b) generally did not relate strategies and resources to performance; and (c) provided limited confidence that agencies' performance data will be sufficiently credible; (4) GAO believes that Congress, the Office of Management and Budget (OMB), and the agencies need to build on the experiences of the first round of annual performance planning by working together and targeting key performance issues that will help to make future plans more useful; (5) most of the performance plans had at least some objective, quantifiable, and measurable goals, but few plans consistently included a comprehensive set of goals that focused on the results that programs were intended to achieve; (6) the plans generally did not go further to describe how agencies expected to coordinate their efforts with those of other agencies; (7) most agencies' performance plans did not provide clear strategies that described how performance goals would be achieved; (8) the performance plans generally provided listings of the agencies' current array of programs and initiatives but provided limited perspective on how these programs and initiatives 
were necessary or helpful for achieving results; (9) most of the plans did not adequately describe the resources needed to achieve their agencies' performance goals; (10) most annual performance plans provided only superficial descriptions of procedures that agencies intended to use to verify and validate performance data; and (11) the absence of program evaluation capacity is a major concern, because a federal environment that focuses on results depends on program evaluation to provide vital information about the contribution of the federal effort.
In 1983, Congress passed the Radio Broadcasting to Cuba Act to provide the people of Cuba, through Radio Martí, with information they would not ordinarily receive due to the censorship practices of the Cuban government. Subsequently, in 1990, Congress authorized U.S. television broadcasting to Cuba. The objectives of Radio and TV Martí are to (1) support the right of the Cuban people to seek, receive, and impart information and ideas through any media and regardless of frontiers; (2) be effective in furthering the open communication of information and ideas through the use of radio and television broadcasting to Cuba; (3) serve as a consistently reliable and authoritative source of accurate, objective, and comprehensive news; and (4) provide news, commentary, and other information about events in Cuba and elsewhere to promote the cause of freedom in Cuba. OCB is a federal entity and is a part of BBG, which is an independent federal agency responsible for overseeing all U.S. government-sponsored, nonmilitary, international broadcasting programs. In addition to OCB, BBG also oversees the operations of IBB, which in turn oversees Voice of America (VOA). BBG also provides funding and oversight to three independent grantees: Middle East Broadcasting Networks, Inc.; Radio Free Europe/Radio Liberty; and Radio Free Asia (see table 1). In October 2003, the President established the Commission for Assistance to a Free Cuba (CAFC) to identify measures to help bring about an end to the Castro government and support U.S. programs that could assist in an ensuing transition. 
This commission published two interagency policy frameworks—the 2004 and 2006 Commission for Assistance to a Free Cuba reports—which identify measures to (1) empower Cuban civil society, (2) break the Cuban government’s information blockade, (3) deny resources to the Cuban dictatorship, (4) illuminate the reality of Castro’s Cuba, (5) encourage international efforts to support Cuban civil society, and (6) undermine the regime’s “succession strategy.” The CAFC reports make recommendations in a variety of areas, including measures to intensify efforts to break the Cuban government’s information blockade, such as utilizing new methods to broadcast TV Martí. These reports also indicate that Radio and TV Martí are vehicles for facilitating the transition to democracy in Cuba, supporting Cuban democratic opposition, and empowering Cuban civil society. In addition, State and OCB officials indicate that Radio and TV Martí will be important platforms for providing information to Cubans during any future government transition. OCB’s role is to provide Cuba with the Spanish-language programming that one could access in an open society, including news and entertainment. In 2004, Radio Martí changed its programming from entertainment and news to an all-news format, and currently broadcasts news and information programming 6 days a week, 24 hours per day, and 1 day per week for 18 hours. Radio Martí’s daily programming consists of 70 percent live news broadcasts and 30 percent recorded programming with the ability to go live as needed. TV Martí broadcasts news (including two live newscasts), sports and entertainment, and special programming. OCB has 167 authorized direct-hire positions and approximately 120 talent contractors. OCB’s fiscal year 2008 budget was approximately $34 million, including about $18 million for salaries, $7 million for general operating expenses, and almost $9 million for transmissions. Figure 1 shows a breakdown of OCB’s budget. 
OCB broadcasts Radio and TV Martí to Cuba through multiple transmission delivery methods to overcome the Cuban government’s jamming of certain signals, with a recent focus on devoting more of its resources to TV transmissions. Due to the U.S. government’s lack of access to Cuba, OCB has difficulty in obtaining nationally representative data on its audience size. The best available research (from IBB telephone surveys) indicates that Radio and TV Martí’s audience size is small, due in part to signal jamming by the Cuban government. IBB and OCB have made some efforts to gain information on the extent and impact of jamming; however, they still lack data on the number, type, and effectiveness of the jammers. In addition, Radio and TV Martí broadcasts face the challenge of competition from domestic and international media, which OCB could do more to address. Furthermore, coordination with other relevant U.S. agencies to share audience research on Cuba is minimal. Finally, OCB has conducted some strategic planning exercises, but lacks a strategic plan that BBG has approved. OCB broadcasts Radio and TV Martí through multiple transmission delivery methods in an effort to overcome the Cuban government’s attempt to block, or jam, these broadcasts, thereby preventing them from reaching a Cuban audience. OCB broadcasts radio through shortwave, AM, two subchannels on Hispasat satellite television, and the Internet. Figure 2 shows the cost, broadcast schedule, and projected coverage (in the absence of Cuban jamming or counter-broadcasting) of Radio Martí. OCB broadcasts TV Martí through satellite television (Hispasat and DirecTV), an over-the-air transmission via an airplane (AeroMartí), and the Internet. Figure 3 shows the cost, broadcast schedule, and projected coverage (in the absence of Cuban jamming) of TV Martí. Over the past 3 years, OCB added more transmission delivery methods and devoted more resources for TV Martí than for Radio Martí (see fig. 4). 
The 2004 and 2006 CAFC reports recommended that OCB explore additional transmission methods, including the use of airborne platforms and satellite television, to further efforts to break the information blockade in Cuba. In October 2006, OCB launched AeroMartí, which consists of two Gulfstream propeller airplanes that OCB leases to broadcast television signals to Cuba. In December 2006, IBB leased airtime on TV Azteca, a commercial television station in Miami that is carried on the DirecTV satellite. Due in large part to the launch of AeroMartí, most of OCB’s budget for transmission costs is spent on TV Martí. In fiscal year 2008, OCB spent over $6 million on AeroMartí, which includes about $5 million for fuel, operation, and maintenance of the airplanes and about $1 million to equip one airplane with the ability to broadcast on channel 13. Additional OCB resources were focused on TV Martí transmissions because BBG and OCB felt there were more opportunities to expand the size of the audience of TV Martí than that of Radio Martí. Prior to its use of AeroMartí, OCB transmitted TV Martí through an aerostat (blimp) in the Florida Keys. The aerostat was destroyed by a hurricane in 2005. BBG, IBB, and OCB officials believe that AeroMartí is more effective than the aerostat due to its technological capabilities. In December 2006, IBB began leasing 1 hour of airtime from 12:00 midnight to 1:00 a.m. on weeknights on a commercial AM radio station in Miami (Radio Mambi), at a cost of about $183,000 for a 6-month period. However, due to budget constraints, IBB canceled its contract with this station in February 2008. In addition to investing in new transmission methods for TV Martí, OCB has taken steps to improve the production quality of its television programming. For example, instead of broadcasting taped newscasts, in October 2006, OCB began airing a live news broadcast at 6:00 p.m., with updates at 10:00 p.m. 
According to IBB officials, the production quality of TV Martí programming has also improved through OCB’s use of more original programming, well-designed graphics, and upgraded sets. In anticipation of greater Internet availability and use in Cuba, OCB’s Director said that OCB is beginning to focus more attention on improving its Web site. For example, OCB officials said they are in the process of redesigning OCB’s Web site and have trained staff on digital journalism. However, Cubans’ ownership of personal computers is limited, and the Cuban government tightly restricts Internet access to Cubans. According to OCB officials, some Cubans access OCB’s Web site using foreign Internet service providers, and, as a result, OCB is unable to determine the number of hits on its Web site that originate from Cuba. BBG; IBB; OCB; and U.S. Interests Section, Havana (USINT) officials emphasized that they face significant challenges in conducting valid audience research due to the closed nature of Cuban society. For example, U.S. government officials stationed in Havana are prohibited by the Cuban government from traveling outside of Havana. Also, IBB researchers believe that the Cuban government would not permit U.S. government-funded organizations to conduct audience research on Radio and TV Martí in Cuba. According to State, it is difficult to travel to Cuba for the purpose of conducting audience research. In addition, the Department of the Treasury (Treasury) prohibits BBG from conducting in-person audience research surveys in Cuba. BBG also notes that the threat of Cuban government surveillance and reprisals for interviewers and respondents raises concerns, such as whether respondents are willing to answer sensitive questions frankly. Despite these limitations, IBB, OCB, and USINT conduct a variety of research efforts to obtain information on Radio and TV Martí’s audience size, characteristics, reaction to programming, and preferences. 
To measure audience size, IBB periodically commissions international telephone surveys. IBB also periodically commissions monitoring panels and focus groups in Miami with recent Cuban arrivals to the United States to solicit their feedback on the content and production quality of OCB programming and to obtain information about their radio and television use, preferences, and experiences in Cuba. OCB contracts with a local Miami market research firm that conducts monitoring panels once a month and conducts surveys twice a year to solicit recent Cuban arrivals’ feedback on the quality of TV Martí programming and to obtain information about their media habits and perceptions of Radio and TV Martí programming. In addition, USINT has occasionally administered informal surveys of Cubans visiting USINT, which asked, among other things, whether visitors listened to and watched Radio and TV Martí. BBG, IBB, and OCB officials indicated that research on Radio and TV Martí’s audience size faces significant limitations; for example, none of these data are representative of the entire Cuban population. IBB’s telephone surveys are IBB’s only random data collection effort in Cuba, but these data might not be representative of Cubans’ media habits for two main reasons: (1) Only adults in homes with published telephone numbers are surveyed, and, according to BBG documents, approximately 17 percent of Cuban adults live in households with published telephone numbers; and (2) BBG and OCB officials noted that, because individuals in Cuba are discouraged or prohibited by their government from listening to and watching U.S. international broadcasts, they might be fearful of responding to media surveys and disclosing their media habits, and thus actual audience size might be larger than survey results indicate. The various research efforts that IBB, OCB, and USINT have undertaken provide decisionmakers with limited information to help assess the relative success or return on investment from U.S. 
broadcasting to Cuba. For example, at a strategic level, documents produced as a part of BBG’s annual Language Service Review process contain data on the cost per listener. However, we found that although documents from the 2004 and 2005 Language Service Reviews of OCB included such data, documents from the 2006, 2007, and 2008 Language Service Reviews of OCB listed this information as “not available.” This is because the news and programming operations and budgets for Radio and TV Martí were merged in fiscal year 2005, thus making it impossible to separate the budgets (and, therefore, the cost per listener) for Radio and TV Martí. In addition, the research efforts provide decisionmakers with limited information on the relative return on investment from each of the individual transmission methods OCB uses. For example, the IBB telephone surveys do not include questions on the transmission method— such as shortwave or medium-wave radio, satellite television, AeroMartí, or the Internet—that respondents used to listen to or watch Radio and TV Martí. As a result, it is impossible to determine from the telephone surveys whether TV Martí’s audience is due to AeroMartí (which costs about $5.0 million annually) or the DirecTV transmission (which costs about $0.5 million annually). Furthermore, other officials have suggested that the current methods used to broadcast to Cuba may not be the most cost-effective way to reach a Cuban audience. For example, a USINT official stated that the most successful distribution of TV Martí has been via DVD (rather than satellite or over-the-air AeroMartí broadcasts) and suggested that there could be avenues for others to increase the distribution of DVDs throughout Cuban society. Despite the lack of reliable nationally representative data, BBG has determined that telephone surveys conducted from outside Cuba are among the best available and most cost-effective methods of estimating audience size for Radio and TV Martí. 
These surveys indicate that Radio and TV Martí’s audience size is small. Regarding radio broadcasting, less than 2 percent of respondents to IBB’s telephone surveys in 2003, 2005, and 2006 said they listened to Radio Martí during the past week. In 2008, less than 1 percent of respondents said they listened to Radio Martí during the past week. Regarding television broadcasting, IBB audience research indicates that TV Martí’s audience size is small. All of IBB’s telephone surveys since 2003 show that less than 1 percent of respondents said they watched TV Martí during the past week. Notably, results from the 2006 and 2008 telephone surveys show no increase in reported TV Martí viewership following the launch of AeroMartí and DirecTV broadcasting in 2006. Similarly, very few participants in IBB-commissioned focus groups said that they had seen TV Martí in Cuba. Despite the small number of Cubans who reported listening to or viewing Radio or TV Martí in IBB telephone surveys, OCB officials told us that other information suggests that Radio and TV Martí have a larger audience in Cuba. For example, a 2007 survey that OCB commissioned, intended to obtain information on programming preferences and media habits, also contained data on Radio and TV Martí’s audience size. While the survey was not intended to measure listening rates or project audience size, this nonrandom survey of 382 Cubans who had recently arrived in the United States found that 45 percent of respondents reported listening to Radio Martí and that 21 percent reported watching TV Martí within the last 6 months before leaving Cuba. However, these results may not represent the actual size of Radio and TV Martí’s audience because (1) according to BBG officials, higher viewing and listening rates are expected among recent arrivals and (2) the demographic characteristics of the respondents to this survey did not reflect the Cuban population in all aspects. 
In addition, OCB receives anecdotal information about its audience. BBG’s Executive Director said that, in the case of a closed society, such anecdotal and testimonial reports of reception are evidence that a broadcast has a significant audience. (See fig. 5 for an example of reported reception of TV Martí via AeroMartí in Cuba.) As an illustration, OCB reported that Radio Martí’s coverage of Hurricane Ike, which struck Cuba in September 2008, was widely heard in Cuba, with callers from all over Cuba providing updated information on the situation to OCB. We also reviewed letters and records of telephone calls from Cubans to OCB. After we observed that OCB was not tracking this information systematically, OCB began doing so in August 2008. The Cuban government jams Radio Martí’s shortwave signals and interferes with Radio Martí’s AM signals by counter-broadcasting at a higher power level on the same frequency. OCB tries to overcome jamming of its shortwave signals by broadcasting on three different frequencies per hour until 12:00 midnight and on two different frequencies per hour from 12:00 midnight to 6:00 a.m., while also changing its shortwave frequencies several times throughout the day. To overcome Cuban government counter-broadcasting of its AM broadcasts, OCB increases signal power during daylight hours. According to OCB, the Cuban government’s counter-broadcasting is largely effective in and around Havana and several other large cities, but probably has little impact outside these areas. Recently arrived Cubans who participated in IBB-commissioned focus groups reported that signal jamming and counter-broadcasting by the Cuban government made it difficult for them to listen to Radio Martí. The Cuban government also jams TV Martí’s signals from AeroMartí. According to OCB engineers, the jamming attempts to disrupt the signal reaching televisions in Cuba (rather than at the transmitter). 
OCB engineers said that because AeroMartí’s signal is transmitted from a high-altitude, constantly moving platform, they believe jamming is less effective, but this has not been confirmed. A February 2008 OCB assessment of Cuban jamming states that “Cuba would need many thousands of additional jammers to totally block TV Martí.” However, according to IBB’s research contractor, none of the 533 respondents to IBB’s 2008 telephone survey living in Havana reported watching TV Martí broadcasts during the past 12 months. In addition, recently arrived Cubans who participated in IBB-commissioned focus groups reported that signal jamming of TV Martí’s over-the-air broadcast via AeroMartí made it difficult for them to view TV Martí. USINT officials also said that Cuban government jamming of AeroMartí prevented them from viewing over-the-air TV Martí broadcasts. In recent years, IBB and OCB have attempted to better understand and quantify the extent of Cuban jamming and its impact on the technical reception of Radio and TV Martí broadcasts. Despite their efforts, IBB and OCB still lack reliable data on the number, location, type, and effectiveness of Cuban jamming equipment. As a result, it is unclear how much of the radio and television signals can be heard and seen in Cuba. For example, OCB recently asked AeroMartí’s contractor to study AeroMartí’s capabilities and effectiveness in the presence and absence of jamming. The contractor developed a model and estimated that AeroMartí’s broadcasts had a potential viewing audience of about 40 percent of the Cuban population in the absence of jamming and at least 20 percent of the population in the presence of Cuban jamming. This estimate, however, assumed that the Cuban government uses four jammers in fixed locations in the Havana area. 
OCB’s Director of Engineering said that the assumption that Cuba has four fixed jammers is based on observations made in the 1990s by a USINT public affairs officer and defecting Cuban jamming technicians. Given the dated nature of the assumption, the estimates regarding AeroMartí’s potential viewing audience might be unreliable, and, therefore, the validity of the study’s conclusions is uncertain. The contractor’s study also does not address or account for other potential variables, including jamming outside of the Havana area or the effect of mobile jammers on AeroMartí broadcasts. In addition, according to OCB officials, Hurricane Ike may have reduced Cuba’s jamming capabilities. IBB Office of Engineering officials also said that they have provided equipment to monitor the quality of Radio and TV Martí’s technical reception in Cuba. According to an IBB Office of Engineering official, these systems are not yet operational due to technical problems and other State priorities. Once operational, the equipment will provide IBB (and others, through a public Web site) with access to the Radio and TV Martí signal received in Cuba. IBB will be able to listen to and view OCB broadcasts and analyze when, how often, and to what extent broadcasts are jammed or interfered with. Officials noted that a major limitation of the systems is that they would only provide data on the quality of technical reception at the location where the equipment is operating. OCB’s Director emphasized that the competitive media environment in Cuba is a key challenge for OCB in attracting and maintaining an audience for Radio and TV Martí. To identify what Cuban media are reporting and to understand the situation in Cuba, OCB staff monitor Cuban government broadcasts. In addition, IBB and OCB surveys and focus groups provide some information regarding competing stations. 
Recent IBB-commissioned telephone surveys indicate that Radio and TV Martí broadcasts face competition from Cuban and international broadcasters. For example, about 60 to 70 percent of respondents in the 2006 telephone survey reported listening to three national Cuban radio stations during the past week. IBB and OCB senior officials said that Cuban radio attracts listeners because of its high-quality music programming. The 2006 telephone survey results indicate that Radio Martí and Radio Exterior de España (Spain’s foreign radio) have the largest audience among international radio broadcasters to Cuba, with similar past week listenership rates of about 1 percent. In recent years, over 90 percent of telephone survey respondents said they watched Cuba’s national television broadcasts during the past week. IBB and OCB officials said that the quality of Cuban television programming has recently improved and includes popular U.S. programming (such as The Sopranos and Grey’s Anatomy). Telephone surveys indicate that TV Martí has a smaller audience than other international television broadcasts. For example, about 30 percent of respondents in 2005 and 2006 said they watched CNN during the past week. Telemundo’s and Univision’s (which are broadcast only on satellite television) past week viewership rates in 2006 were about 3 percent, while TV Martí’s was less than 1 percent. According to IBB research, international radio and television broadcasts, including VOA broadcasts to Cuba, are not jammed at all or not as heavily jammed as Radio and TV Martí. While OCB and IBB have gathered information relating to OCB’s competitors, OCB has not compiled comprehensive information regarding the number, nature, and quality of other radio and television programming available to Cuban listeners and viewers. We have previously reported on how assessments of broadcasting competitors can be used in the strategic planning process to improve operations. 
For example, we reported that the Middle East Broadcasting Networks conducts ongoing assessments of its competitors and uses this information to make adjustments to its programming. IBB officials said that IBB does not have the resources to catalog all of the different types of programming available to Cubans. BBG staff are responsible for coordinating with other agencies—such as State and the U.S. Agency for International Development—that are involved in efforts to provide uncensored information to Cuba. However, BBG coordination with other relevant U.S. agencies regarding audience research is minimal. The 2006 CAFC report recommended the establishment of quarterly meetings of the appropriate U.S. government agencies to coordinate strategy on broadcasting and communications to Cuba. BBG officials reported that they have participated in significant coordination activities regarding U.S. policy toward Cuba. For example, BBG’s Executive Director reported attending seven high-level interagency meetings on Cuba in 2008. However, such coordination has not consistently occurred on a quarterly basis and does not address operational challenges, such as the lack of audience research data or data on Cuba’s jamming capabilities. We found several examples of ways in which additional coordination could have enhanced OCB’s understanding of its Cuban audience. For example: OCB and the U.S. Agency for International Development and State grantees do not regularly share relevant audience research with each other. For example, State’s Bureau of Democracy, Human Rights, and Labor provides a $700,000 grant to a nongovernmental organization near Miami that also broadcasts radio programming to Cuba 7 days per week. While OCB and the nongovernmental organization have shared some program content and coordinated with some of the same independent journalists in Cuba, OCB was unaware of a significant amount of audience research that the organization had gathered. 
For example, the director of this nongovernmental organization reported that in 2007 it made international telephone calls to 35,000 Cubans to obtain information about their media preferences. The director said his organization would be willing to share the audience research with OCB. BBG and IBB officials were unaware of this organization’s broadcasting efforts or its audience research activities. OCB and USINT conduct separate audience research activities and do not always share relevant research data with one another. For example, USINT recently administered a survey that included data on Radio and TV Martí’s audience reach; however, OCB was unaware of these data. Despite several significant changes in OCB’s operations, such as additional transmission methods, OCB lacks a formal strategic plan approved by BBG to guide decision making regarding such changes. Strategic planning, including the development of a strategic plan, is a good management practice for all organizations. A strategic plan serves the purposes of articulating the fundamental mission of an organization and laying out the long-term goals for implementing that mission, including the resources needed to achieve those goals. We have reported that organizations should make management decisions in the context of a strategic plan, with clearly articulated goals and objectives that identify resource issues and internal and external threats, or challenges, that could impede the organization from efficiently and effectively accomplishing its objectives. Additionally, Office of Management and Budget guidance suggests that strategies state the organization’s long-term goals and objectives; define approaches or strategies to achieve goals and objectives; and identify the various resources needed and the key factors, risks, or challenges that could significantly affect the achievement of the strategic goals. 
A June 2007 State OIG inspection of OCB recommended that OCB prepare a long-term strategic plan, including contingency planning for a time when uncensored broadcasts are allowed in Cuba. This recommendation has not yet been fully implemented. OCB developed a draft strategic plan with assistance from BBG staff and submitted its draft strategic plan to BBG in July 2007. BBG management said the plan that OCB submitted was more of a crisis broadcasting plan than a strategic plan, and asked OCB to resubmit a strategic plan that was not predicated on Fidel Castro’s death, but rather laid out a longer-term vision for OCB operations. At the end of 2007, BBG approved and made publicly available its BBG-wide strategic plan for 2008-2013. According to BBG staff, the Board of Governors then directed BBG staff to work with BBG’s broadcast entities to ensure that their individual strategic plans were in line with BBG’s strategic plan. OCB subsequently resubmitted its strategic plan to IBB for review and approval. IBB management is currently reviewing the plan. In October 2008, an IBB official and a BBG official suggested that it might take an additional 3 to 6 months for the board to review and approve OCB’s draft strategic plan. Without a formal, approved strategic plan, BBG and OCB lack an agreed-upon approach to guide decision making regarding OCB funding and operations. IBB’s annual program review process is the main mechanism used to assess Radio and TV Martí broadcasts’ compliance with VOA journalistic standards. IBB’s analyses and external reviews of broadcast content frequently identified problems with the broadcasts’ adherence to journalistic standards such as balance and objectivity. IBB has consistently made recommendations to OCB to improve its adherence to certain aspects of journalistic standards; however, OCB has not ensured the full implementation of IBB program review recommendations. 
While this process provides some useful information, we identified several weaknesses in the process. The Radio Broadcasting to Cuba Act and the TV Broadcasting to Cuba Act require Radio Martí and TV Martí, respectively, to adhere to VOA journalistic standards to ensure that their programming is accurate, objective, and balanced and presents a variety of views. VOA journalistic standards are set out in the VOA Charter and the VOA Programming Handbook. In addition to the VOA Charter, OCB has its own set of editorial guidelines that establish OCB’s policy on radio and television broadcasts to Cuba, and that are intended to assist broadcast personnel in making day-to-day editorial decisions. The editorial guidelines provide guidance on how to ensure balance, proper sourcing, and proper tone in broadcasts. The guidelines also discuss several proscribed actions in broadcasts, such as the insertion of personal opinion, use of broad generalizations, reporting of unsubstantiated information, and incitement to revolt or other violence. The main mechanism for assessing broadcasts’ compliance with journalistic standards is IBB’s program review process, which is designed to improve the content and production value of programming and ensure quality control. IBB officials told us that this is intended to be an iterative process for identifying areas for improvement focused on continuous improvement from year to year, with the broadcast entity having primary responsibility for making such improvements. IBB’s Office of Performance Review is responsible for managing the program review process. It conducts annual reviews of VOA’s 45 language services and OCB’s Radio and TV Martí broadcasts. Office of Performance Review program analysts and external reviewers assess the content and production quality against a standard set of criteria. IBB program analysts write reviews assessing broadcast content and production quality. 
IBB program review coordinators and OCB management then discuss these inputs at a program review meeting at OCB. Within 2 weeks after the program review meeting, IBB’s Office of Performance Review staff directs the formulation of an action plan with suggestions and recommendations for improvement for OCB. The action items are intended to be the result of consensus between IBB and OCB. There is a 3-month follow-up period after the program review meeting during which IBB Office of Performance Review program analysts monitor OCB’s implementation of the action plan. IBB and OCB then hold a follow-up meeting to discuss OCB’s implementation of the action plan. IBB also assigns performance scores (on a scale of 0 to 4) for each of the individual content and production criteria. The scores from IBB’s content and production reviews are then combined with the scores assigned by external experts and monitoring panels of people from the target audience to develop an overall performance score. While IBB officials report that the quality of OCB programming has improved in recent years, IBB’s internal as well as external reviews identified problems with OCB broadcasts’ adherence to certain journalistic standards, particularly in the area of balance and objectivity. IBB program analysts’ reviews from 2003 through 2008 repeatedly cite several specific problems with the broadcasts, such as the presentation of individual views as news, editorializing, and the use of inappropriate guests whose viewpoints represented a narrow segment of opinion. IBB reviews of Radio and TV Martí’s content identified other problems, including placement of unsubstantiated reports coming from Cuba with news stories that had been verified by at least two reputable sources; the use of offensive and incendiary language in broadcasts, which is explicitly prohibited by OCB’s editorial guidelines; and a lack of timeliness in news and current affairs reporting. 
External reviews of Radio and TV Martí’s broadcast content also identified problems regarding the broadcast’s adherence to certain journalistic standards, particularly balance and objectivity. For example, the results of IBB monitoring panels from 2003 through 2007 showed that the majority (9 of 13) of expert control listeners and viewers, as well as approximately one-third (16 of 49) of recent Cuban arrival panelists, expressed concerns about the broadcasts’ balance and objectivity. In addition, an OCB-commissioned survey of recent Cuban arrivals in 2007 showed that 38 percent felt that TV Martí programming was “objective,” and 13 percent felt the programming was “biased.” Furthermore, 29 percent of respondents believed that Radio Martí’s news was “objective,” and 18 percent felt the news broadcasts were “exaggerated.” To help improve adherence to journalistic standards, in 2007, the Director of OCB issued a memorandum to managers requiring them to certify that they have provided employees and contractors with a copy of both OCB’s editorial guidelines and the VOA Charter. OCB has also taken recent steps to improve training for OCB employees that could, over time, address concerns regarding adherence to journalistic standards. For example, OCB has selected a staff person to serve as a training coordinator and established a designated space for training classes. However, BBG’s Manual of Administration establishes additional responsibilities for providing training that OCB has not yet fulfilled. For example, while the manual requires managers to review employees’ training needs annually, OCB officials reported that they have made no recent efforts to identify staff training needs. Although there has been recent training related to writing for the Internet, over the past 5 years, OCB has provided little training to its broadcasting staff on how to comply with journalistic standards. 
OCB management has acknowledged the importance of training staff, but stated that budget limitations in recent years have precluded such training. Action plans that IBB program review coordinators and OCB management have developed consistently recommended that OCB address problems regarding its adherence to certain journalistic standards; however, OCB has not ensured the implementation of some IBB program review recommendations. For example, IBB action plans from 2003 through 2008 recommended that OCB separate news from opinion in broadcasts, ensure balanced and comprehensive selection of viewpoints, avoid sweeping generalizations and editorializing, use guests who are informed on program topics, and separate unsubstantiated reports from Cuba from newscasts. Senior officials in IBB’s Office of Performance Review said that OCB management is to decide how to handle the recommendations, and noted that the current OCB management has been more responsive to IBB program review recommendations than previous OCB management. In response to a recommendation by the State OIG regarding the lack of implementation of some program review recommendations, BBG agreed to develop a process to help ensure additional oversight of the implementation of such recommendations. Specifically, BBG agreed that the Office of Performance Review should make quarterly reports to the Deputy Director of IBB regarding the most significant outstanding action items. OCB senior managers acknowledged that IBB’s action plans make some of the same recommendations from year to year, and that OCB has not implemented all of the IBB recommendations. For example, OCB senior officials acknowledged that, on occasion, newscasters insert their opinions into newscasts, but said that this is difficult to prevent during live newscasts. 
We observed that, in cases in which OCB management agreed with IBB program review recommendations, OCB attempted to address specific examples of noncompliance cited in IBB’s report, but did not address the broader factors underlying its lack of adherence to journalistic standards. An OCB senior official also said that OCB does not implement certain program review recommendations when it disagrees with IBB over the substance of the criticism. We observed the meetings held between IBB and OCB officials to discuss the results of IBB’s reviews of Radio and TV Martí in June and September 2008, and found that this process provides useful information for OCB regarding the content and production quality of its broadcasts. For example, we observed that during the September 2008 Radio Martí program review meeting, IBB analysts provided several specific examples of poor sound quality, editorializing, and long monologues, each of which OCB management agreed to address with relevant staff. We also found that IBB program analysts present constructive recommendations for improvement in these and other areas. However, our analysis of 5 years’ worth of IBB’s qualitative reviews of Radio and TV Martí’s content identified several weaknesses in the reviews: IBB content reviews of Radio and TV Martí did not clearly indicate whether the broadcasts are in full compliance with journalistic standards or the extent of compliance. These reviews frequently identified problems with the broadcasts’ adherence to certain journalistic standards, but did not attempt to indicate the severity or frequency of an identified problem with the broadcasts. When discussing a particular journalistic standard, IBB reviews sometimes cited both positive and negative examples, making it difficult to determine the reviews’ overall assessment. We also noted many instances in which the reviews did not make any overall conclusion regarding the broadcasts’ adherence to a particular journalistic standard. 
IBB’s qualitative reviews of the broadcasts’ content sometimes did not clearly support the quantitative score that IBB’s analysts assigned to the broadcasts for a particular journalistic standard. In some cases, IBB’s content review criticized OCB adherence to a particular journalistic standard, but provided a relatively positive quantitative score. For example, in a recent IBB review of TV Martí’s content, the review cited one negative observation regarding the broadcast’s relevance to the audience; however, the reviewer assigned TV Martí a high score under the “relevance to audience” content criterion. In other cases, IBB’s content reviews contained both positive and negative observations, but provided a relatively negative score. IBB’s content reviews lack consistency in the ways that they are conducted and reported. For example, while the qualitative reviews state the general time period of the review, they did not specify the number of hours that the reviewer spent listening to or viewing programming or clearly indicate the programs that were listened to or viewed. Moreover, the time period varied greatly, from about 1 week to 1 year. The lack of consistency in the reviews from year to year makes it difficult to systematically assess Radio and TV Martí’s content and production quality across years. BBG officials stated that the reviews are not intended for systematic comparison across years, but to evaluate program quality at a particular point in time, based on a subjectively selected sample of programming chosen by the program analyst. While IBB’s Office of Performance Review has guidance describing the purpose and steps in the program review process, there is no specific operational guidance for analysts explaining how to conduct content and production reviews. For example, IBB does not provide analysts with any guidance to help them determine how to assign a specific quantitative score on the basis of their observations of programming. 
BBG and IBB officials said they refer IBB analysts to the BBG’s strategic plan and OCB’s editorial guidelines for guidance. Moreover, while program analysts receive training regarding language, regional expertise, and technical production, they have received limited training regarding skills, such as program evaluation, to assist them in conducting program reviews. The Director of IBB’s Office of Performance Review said that program analysts could benefit from additional training in these areas to further enhance the quality of program reviews, but the IBB training budget is limited and priority is given to broadcasters. U.S. law generally prohibits the domestic dissemination of public diplomacy information intended for foreign audiences. Some domestic dissemination of OCB programming is authorized by law, and IBB and OCB have taken a variety of steps to minimize U.S. audiences’ access to such material. However, both Radio and TV Martí broadcasts reach U.S. audiences in several ways. In addition, some commercials shown by a Miami television station contracted to air TV Martí programming were not consistent with IBB guidance. Furthermore, the Cuban government has complained that U.S. broadcasting to Cuba violates international broadcasting standards, and the international body that serves as a forum for such disputes—the ITU—has found that U.S. television broadcasts (but not radio broadcasts) cause harmful interference with Cuban broadcasts. State indicated that no action has been taken in response to the ITU’s determinations that U.S. broadcasts cause harmful interference. Officials from State indicated that the ITU’s determinations were based on information provided solely by the Cuban government and that the United States has not independently verified that the broadcasting is causing harmful interference. Since 1948, U.S. law has prohibited the domestic dissemination of public diplomacy material intended for foreign audiences. 
In enacting the legislation, Congress intended, among other things, to prevent the U.S. government from engaging in domestic propaganda. However, legislation authorizing U.S. radio and television broadcasting to Cuba permits domestic dissemination of such broadcasts under certain circumstances. The Radio Broadcasting to Cuba Act directs that radio broadcasting to Cuba utilize broadcasting facilities located in Marathon, Florida, and the 1180 AM frequency, which is available to U.S. listeners. Moreover, if the broadcasts on the 1180 AM frequency are jammed by the Cuban government, the Radio Broadcasting to Cuba Act authorizes the leasing of time on other commercial or noncommercial educational AM band radio stations. Since these broadcasts originate from U.S. territory, they would be available to a domestic audience. The Television Broadcasting to Cuba Act permits some domestic dissemination of U.S. government information prepared for dissemination abroad, as long as the dissemination is “inadvertent.” While the term “inadvertent” is not defined, the statute’s legislative history indicates that under certain circumstances, some domestic reception would be unavoidable and, therefore, permitted, as long as transmission signals would not be intentionally or deliberately targeted to domestic audiences. OCB has taken a variety of steps to minimize the domestic dissemination of U.S. broadcasting to Cuba. For example, the three radio antennas used for OCB’s radio broadcasting on the 1180 AM frequency from Marathon are arrayed in a line so that the signal is directed toward Cuba and away from the United States. In addition, in deciding which local Miami television station to contract with to place TV Martí programming on DirecTV, IBB officials told us that they evaluated the geographic coverage of each station’s broadcasting, with a view toward minimizing domestic dissemination. Despite efforts to minimize domestic dissemination, U.S. 
broadcasting to Cuba can be accessed domestically through several means. Both the shortwave and AM radio broadcasts can be heard in parts of Florida. In addition, TV Martí programming on TV Azteca can be seen in Miami by those with local cable or DirecTV subscriptions. Furthermore, streaming video from TV Martí and audio from Radio Martí can be retrieved from OCB’s Web site. BBG lacks a formal, written policy for determining whether commercials aired during or after BBG broadcasts are appropriate. However, IBB’s standard practice is to include standard language regarding advertisements aired during BBG broadcasts (in this case, TV Martí) in its contracts with other broadcasters. That standard language explicitly prohibits “political advertising immediately before, after, or during the BBG provided programming.” Other than political advertisements, no other content is explicitly prohibited. In December 2006, IBB contracted with a Miami-based television station, TV Azteca, to broadcast two nightly TV Martí newscasts. The contract provided TV Martí with two 26-minute windows of airtime that would be broadcast locally in Miami and be viewable in Cuba to those who subscribe to DirecTV and purchase the local Miami programming package. The remaining 4 minutes of the half hour are used by TV Azteca to air commercials. The following concerns have been raised regarding these commercials: First, some critics believe that the mere existence of these commercials is inappropriate. They believe that a U.S. government-funded broadcast should carry no advertisements for commercial products or services. However, we found, consistent with BBG’s legal assessment, that no U.S. law, regulation, or BBG policy or practice prohibits the airing of advertisements during TV Martí broadcasts. Second, some OCB employees complained that the content of some commercials shown during the TV Martí programming is inappropriate. 
For example, they reported viewing political advertisements and commercials for a 1-900 phone sex service during TV Martí programming on TV Azteca. We subsequently confirmed that advertisements for a U.S. presidential candidate aired in September 2008. We also viewed an advertisement for a “Love Calculator,” which aired in April 2008. The contract with TV Azteca did not include the standard language prohibiting political advertising during TV Martí broadcasts. A BBG official suggested that this error could have occurred as a result of staff turnover in the final phase of the negotiation and drafting of the contract. As we have previously reported, this contract was awarded with limited involvement of contracting officials. According to BBG, OCB requested, in October 2008, that TV Azteca air the TV Martí broadcasts for 26 consecutive minutes and that any advertisements be shown after the TV Martí programming. BBG’s Acting General Counsel indicated that the contract would be modified to reflect this change. After informing BBG staff of our findings related to the content of some commercials aired during TV Martí programming on TV Azteca, BBG officials acknowledged that the airing of political advertisements is inappropriate. In October 2008, BBG requested that TV Azteca stop airing political advertisements during TV Martí programming. In response, TV Azteca agreed to cease airing political advertisements during TV Martí programming. According to State records, since 2003, the Cuban government has filed more than 300 specific complaints that U.S. broadcasting to Cuba violates international broadcasting regulations. The ITU, which is the leading United Nations organization for information and communication technologies, develops these regulations. The Cuban government has consistently objected to U.S. television broadcasts to Cuba. The FCC has authorized OCB to broadcast on television channels 13 and 20. Cuba alleges that this U.S. 
broadcasting causes harmful interference to its own broadcasting on television channels 13 and 20, which it has registered with the ITU. In 2004 and 2006, the ITU determined that U.S. broadcasting on channels 13 and 20, respectively, was causing harmful interference and encouraged the United States and Cuba to cooperate and find a solution for solving the harmful interference. State indicated that no action has been taken in response to the ITU’s determinations that U.S. broadcasts cause harmful interference. Officials from State indicated that the ITU’s determinations were based on information provided solely by the Cuban government, and that the United States has not independently verified that the broadcasting is causing harmful interference. The Cuban government also has complained to the ITU about U.S. radio broadcasts to Cuba. Recently, Cuba has filed complaints regarding U.S. broadcasting on the 530 AM frequency. However, the ITU determined in December 2004 that since Cuba has not registered a station on that AM frequency, it cannot complain about harmful interference on that frequency. The Cuban government has further argued that U.S. broadcasting from an airborne platform violates ITU regulations. Following Cuban complaints, at the ITU World Radiocommunication Conference in November 2007, a report was adopted that stated that broadcasting from an aircraft for the purpose of transmitting solely to the territory of another country without its permission was not in conformity with ITU regulations. The U.S. government disassociated from that statement in the report as not accurately representing the ITU Radio Regulations and reiterated its policy of broadcasting information to the Cuban people. Several groups, including BBG, IBB, and the State OIG, provide oversight of OCB operations. 
Oversight efforts by these various groups have identified three categories of concerns in recent years: poor communication by OCB management, low employee morale, and allegations of fraud and abuse. In responding to recent audit reports, BBG and OCB have taken steps to address nearly all of the audit recommendations. Several groups perform oversight of OCB operations. BBG and its staff perform oversight in multiple ways. BBG holds a monthly meeting at which the head of each broadcast entity (including the Director of OCB) updates the BBG Governors on the key efforts of their entity. BBG also conducts a statutorily mandated annual review of the effectiveness of its broadcasts. According to BBG staff, this process (called Language Service Review) is a comparative review designed to evaluate the need for adding or deleting language services and strategically allocating funds to the language services on the basis of priority and impact. To facilitate this process, BBG staff prepare summary data and narrative for each language service, covering such issues as audience reach, budget, and program quality rating. BBG staff also oversee OCB through unscheduled but regular communication on various issues, such as budget and finances. IBB’s efforts to oversee OCB take three main forms. First, OCB participates in a daily editorial meeting with VOA and IBB staff to discuss what news stories each entity will be covering that day. According to IBB’s Deputy Director, participation in such meetings can help coordinate entities’ coverage of stories and ensure that each entity is covering all of the relevant news events. Second, as we have previously discussed, IBB performs annual program reviews of Radio and TV Martí. According to IBB’s Deputy Director, the program review process is intended to provide quality control by objectively evaluating OCB’s broadcasting services once a year and recommending improvements in their broadcasting. 
Third, IBB participates in and oversees OCB’s handling of strategic issues, such as using an aircraft to broadcast TV Martí programming. The State OIG has performed three reviews of OCB since 1999. These reviews have covered a variety of issues—including strategic planning, security, audience research, and contracting—and have resulted in multiple recommendations for improvement. In addition to the inspections and audits focused on OCB operations, the State OIG has also conducted reviews of BBG and IBB operations that affect OCB. For example, in May 2006, the State OIG issued a report related to IBB’s Office of Performance Review, which conducts the annual program review process for OCB and VOA. In addition, in July 2007, the State OIG released the results of its inspection of USINT, which sometimes assists U.S. broadcasting to Cuba. In addition, OCB employees have multiple outlets to raise concerns regarding management and personnel issues. OCB employees can seek assistance from their employee union to address concerns regarding working conditions. The union has two stewards who work at OCB headquarters in Miami. OCB employees can also raise concerns about equal employment opportunity issues with IBB’s Office of Civil Rights. Two OCB employees serve as liaisons between OCB employees and the Office of Civil Rights by receiving and working to address employee concerns. IBB’s Office of Human Resources also has a full-time staff person at OCB who, in addition to other administrative responsibilities, receives employee complaints regarding mismanagement. BBG, IBB, and OCB staffs have mixed views regarding whether OCB’s location in Miami inhibits effective oversight of OCB operations. BBG and IBB management reported that OCB’s location does not inhibit their efforts to oversee it. They noted that they are in regular contact with OCB management by telephone and e-mail. 
They also noted that the monthly BBG board meetings (one of which is held in Miami each year) provide sufficient personal contact with OCB management. Some OCB employees, however, expressed concern regarding what they perceive as a lack of oversight or involvement by BBG and IBB. One employee commented that OCB seemed to be “out of sight and out of the minds” of BBG and IBB. Other OCB employees suggested that more regular visits by BBG or IBB staff to OCB would enhance their understanding of OCB’s operations and management. In recent years, three categories of problems have been raised regularly regarding OCB operations. First, some OCB employees reported poor communication from senior OCB management. Prior GAO work has shown the benefits of maintaining continuous dialogue between management and employees to share information and address workplace issues. However, in responding to the Office of Personnel Management’s 2007 annual employee survey, more than half of OCB employees responding disagreed or strongly disagreed with the statement that they are satisfied with the information they received from management on what is going on in the organization. Several OCB employees expressed concern to us specifically regarding the lack of any formal systems for disseminating information from management to staff or for staff to provide input into management decisions. They expressed frustration with the lack of regular staff meetings and absence of an employee newsletter to improve communication. However, despite an informal recommendation from the State OIG, OCB management has not established any formal or regular mechanisms for communicating with staff, such as regular staff meetings or newsletters. In response, OCB senior management noted that there are frequent meetings between the OCB Director and senior managers to discuss various issues, but that it is the responsibility of managers to brief their staff on current issues and hold regular staff meetings. 
Second, employee morale has been a concern at OCB. For example, a majority of OCB employees responding to the Office of Personnel Management’s 2007 annual employee survey either disagreed or strongly disagreed with the statement that they are satisfied with their involvement in decisions that affect their work. Our interviews with some employees in Miami also confirmed that employee morale is a concern. Relating to the issue of employee morale, BBG management and OCB employees expressed differing views regarding the current director’s management of OCB. BBG and IBB management praised his leadership style and told us that he has made numerous improvements in OCB’s organization and broadcast quality. In 2007, the State OIG praised the director as a “hands-on manager and an assertive, inspiring leader.” At the same time, the State OIG acknowledged that his management style has intimidated some employees. Similarly, we spoke with some OCB employees who view him as a “micromanager” with excessive involvement in the editorial content of OCB programming. Third, a variety of allegations regarding fraud and abuse have been raised. For example, according to BBG officials, they referred one case of suspected fraud to the State OIG. As a result, in 2007, an OCB employee was sentenced to serve 27 months in prison and required to pay a monetary fine for taking kickbacks from a production company doing business with OCB. Other allegations, however, have not been substantiated. From November 2007 through May 2008, our Office of Forensic Audits and Special Investigations interviewed former and current employees alleging mismanagement at OCB. Employee allegations included, among other things, time and attendance abuse, improper hiring practices, contracting improprieties, and excessive travel by OCB managers. Our investigators requested documentation from employees that would support their allegations. 
Although investigators received some documentation, it was insufficient to support further investigation. Thus, while investigators found some indications of mismanagement, much of the evidence was anecdotal or hearsay and did not provide a sufficient basis to continue the investigation. Data from BBG's Office of Civil Rights show that the number of complaints that OCB employees have filed recently averages fewer than 3 per year. Staff from the Office of Civil Rights suggested that this represents an improvement from previous time periods when a larger number of complaints were filed, and attributed this improvement to the management style of the current OCB Director. Since 2003, the Office of Civil Rights has received 15 formal complaints from OCB employees. The most frequently cited reasons for complaints were reprisal and discrimination on the basis of gender or national origin. According to the Director of the Office of Civil Rights, of the 12 cases that have been completed, a few were settled and managers prevailed in the remainder. In placing these most recent complaints in context, staff from the Office of Civil Rights indicated the following: A small number of OCB employees accounts for a majority of the complaints; since 2003, 4 employees have been responsible for 9 of the 15 complaints filed by OCB employees. The number of equal employment opportunity complaints filed by OCB employees was substantially higher during the tenure of other OCB Directors. In their experience, other BBG broadcast entities have more frequent equal employment opportunity complaints than OCB. However, some OCB employees told us that the current outlets for expressing concerns are ineffective. For example, an OCB employee union representative indicated that in numerous cases, OCB management has ignored or insufficiently addressed union members' concerns. In addition, some employees expressed fear of reprisal by managers if they raise concerns. 
As we have previously discussed, external auditors have conducted several reviews in recent years related to U.S. broadcasting to Cuba. Those reviews have led to numerous recommendations for improvement to BBG, IBB, and OCB. Most notably, in its 2003 and 2007 inspection reports of OCB, the State OIG made 20 formal recommendations to improve OCB operations. These recommendations addressed a variety of issues related to OCB operations, including audience research, contracting, adherence to journalistic standards, and strategic planning. Of those, the State OIG considers 17 to be implemented. OCB officials indicated that the other recommendations, which relate to physical security at OCB headquarters, will also be addressed soon. IBB staff are responsible for tracking the status of ongoing and completed audits related to all BBG entities and providing monthly reports to ensure that IBB management and BBG staff are aware of such ongoing activities. According to BBG officials, this is performed mainly to ensure that BBG staff are aware of auditors' ongoing inquiries. IBB staff maintain a paper file for each audit; if a report is published and contains recommendations, the file also contains the report and any follow-up documentation related to compliance. BBG officials stated that they are developing a database that can be used to easily access information regarding the compliance status of various audit recommendations. Once this is completed, the database will contain information that can be used by BBG staff and the Board of Governors to perform their oversight responsibilities. Broadcasting to Cuba has been an important part of U.S. foreign policy toward Cuba for more than two decades. Despite OCB's recent efforts to broadcast Radio and TV Martí using additional transmission methods at a significant cost, the best available research indicates that OCB's audience size is small. 
However, OCB believes that these results do not reflect the true size of its audience in Cuba, citing the challenges to conducting valid audience research in Cuba and anecdotal reports it receives from Cubans. With a new President and Congress, the United States has a fresh opportunity to reassess the purpose and effectiveness of U.S. radio and television broadcasting to Cuba. To assist decisionmakers in formulating the U.S. broadcasting strategy and making funding decisions, BBG and OCB need to ensure that they have articulated a clear strategy and assembled data to help decisionmakers assess the effectiveness and return on investment of OCB's various transmission methods. In addition to the need for a clear strategy to guide current and future policy direction, which OCB and BBG are developing, it is important to have systems and processes in place to enable the efficient and effective operation of OCB. To help ensure that U.S. broadcasting to Cuba is informed by all available audience research, it is important to enhance coordination among U.S. agencies and grantees that perform such research. Additionally, to better ensure that U.S. broadcasting to Cuba complies with journalistic standards, the lack of training for OCB staff needs to be addressed, and guidance and training for the IBB program analysts who review OCB's adherence to those standards should be enhanced. Furthermore, to improve morale within the organization, OCB management should take steps to address persistent concerns with its communication and interaction with OCB staff. Finally, to avoid diminishing the reputation of U.S. government-funded broadcasting, it is important that advertisements containing inappropriate material not be shown during OCB broadcasts. 
To assist decisionmakers and improve OCB's strategy, we recommend that the Broadcasting Board of Governors take the following two steps:

Conduct an analysis of the relative success and return on investment of broadcasting to Cuba, showing the cost, nature of the audience, and challenges (such as jamming and competition) related to each of OCB's transmission methods. The analysis should also include comprehensive information regarding the media environment in Cuba to better understand the extent to which OCB broadcasts are attractive to Cubans.

Coordinate the sharing of information among U.S. agencies and grantees regarding audience research relating to Radio and TV Martí.

To improve OCB operations, we recommend that the Broadcasting Board of Governors take the following four actions:

Direct IBB to enhance guidance and training for analysts performing program reviews.

Direct OCB to provide training to OCB staff regarding journalistic standards.

Direct IBB to develop guidance and take steps to ensure that political and other inappropriate advertisements are not shown during OCB broadcasts.

Direct OCB to establish formal mechanisms for disseminating information to and obtaining views from employees to help improve communication and morale.

We provided a draft of this report to the Broadcasting Board of Governors and the Department of State. Their technical comments are incorporated in this report as appropriate. In addition, BBG provided formal comments, which are reprinted in appendix II. BBG indicated that it is in general agreement with all of the recommendations and will move to implement them, to the degree practicable. BBG also suggested that the draft report at times did not fully reflect the difficulties in broadcasting to a closed society or in evaluating the reach of broadcasts to a closed society. We believe the report addresses both issues appropriately. 
Regarding the difficulties in broadcasting to a closed society, the report has separate sections (in which BBG, IBB, and OCB officials are frequently cited) that discuss the challenges posed by Cuban government jamming and competitors in the Cuban media environment. Regarding the difficulties in evaluating the reach of broadcasts to Cuba, the report clearly acknowledges that significant challenges exist to conducting valid audience research in Cuba. For example, the report discusses the prohibition on conducting in-person audience research in Cuba and the lack of nationally representative data from telephone surveys. BBG also suggested that the draft report's discussion of a lack of a strategic plan was somewhat misleading. While the report acknowledges that coordination has occurred on some strategic issues, OCB's draft strategic plan (which was first presented in July 2007) has yet to be approved. We believe an approved strategic plan would be particularly valuable to decisionmakers as the new Congress and Administration seek to formulate the U.S. broadcasting strategy and make funding decisions. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of State, and the Broadcasting Board of Governors. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or FordJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix III. 
To examine the Office of Cuba Broadcasting's (OCB) approach for broadcasting to Cuba and what is known about the size of its audience, we reviewed and analyzed strategic, programmatic, budget, and audience research documents from the Broadcasting Board of Governors (BBG), International Broadcasting Bureau (IBB), OCB, and Department of State (State). To describe OCB's approach, we reviewed BBG's strategic plan for 2008-2013 and OCB's draft strategic plan and interviewed officials at BBG, IBB, and OCB regarding strategic planning exercises. To analyze OCB's approach for broadcasting to Cuba, we reviewed relevant documentation—including OCB and IBB data on the cost, broadcast schedule, geographic coverage, and effectiveness of Radio and TV Martí's various transmission methods—and interviewed OCB and IBB officials. We also visited some of the sites from which OCB broadcasts Radio and TV Martí, including OCB's medium-wave radio station in Marathon, Florida, and AeroMartí's station in Key West, Florida, and interviewed OCB staff and contractors based at those locations. To describe the makeup of OCB's budget, we obtained OCB data regarding its fiscal year 2008 budget. We determined that these data were sufficiently reliable for the purpose of identifying the main categories and general budget levels for each category. To identify the available information regarding the size of OCB's audience, we analyzed IBB and OCB audience research from 2003 through 2008, including telephone surveys, focus group studies, and anecdotal reports of reception. To assess the reliability of these data, we interviewed BBG, IBB, and OCB officials, as well as IBB and OCB audience research contractors, regarding the methodology for collecting the data. We also observed an OCB-commissioned monitoring panel and a Radio Martí program review meeting with IBB and OCB officials to review and analyze the results of audience research. 
In addition, we analyzed IBB documents explaining the methodology for conducting various audience research efforts. We determined that these data were sufficiently reliable for the purpose of characterizing the size of Radio and TV Martí's audience in very broad terms for the populations the surveys reached. However, the fall in reported audience size in the 2008 IBB telephone survey does raise some questions about the accuracy of that survey. To analyze the impact of Cuban government jamming on OCB's broadcasts, we reviewed OCB documents—including an assessment of Cuban jamming capabilities and a study conducted by AeroMartí's primary contractor on the airplane's capabilities—and interviewed IBB and OCB engineers and AeroMartí's contractor. To analyze the effect of competition on OCB broadcasts, we reviewed IBB telephone surveys and interviewed OCB and IBB officials. To assess the extent of interagency coordination, we reviewed relevant documentation, including the Commission for Assistance to a Free Cuba reports, and interviewed BBG, OCB, U.S. Agency for International Development, and State officials. To review how BBG and OCB ensure compliance with journalistic principles, we reviewed documentation on journalistic standards, including Voice of America's (VOA) Charter and OCB's editorial guidelines, as well as IBB's qualitative and quantitative assessments of Radio and TV Martí's broadcast content. To understand IBB's process for assessing OCB broadcast content, we observed a June 2008 TV Martí follow-up meeting and a September 2008 Radio Martí program review meeting and interviewed BBG, IBB, and OCB officials. To assess OCB compliance with journalistic standards, we analyzed IBB program review documentation from 2003 to 2008, including IBB's qualitative reviews of OCB's broadcast content, IBB's content and production performance scores for OCB and VOA broadcasts, and IBB action plans. 
We also interviewed IBB officials responsible for overseeing the performance review process and the IBB program analyst who performed the reviews of Radio and TV Martí. To assess the quality of IBB reviews of OCB broadcast content, we systematically analyzed IBB reviews of Radio and TV Martí broadcast content from 2003 to 2008. For each review, we determined whether and to what extent the review report identified information, such as the scope of the review, overall judgments regarding compliance with journalistic standards, and the frequency or severity of problems cited. In addition, we reviewed the results of prior audit work regarding the program review process. To identify the amount of training on journalistic standards offered to OCB employees, we reviewed OCB training records and interviewed OCB staff. To describe the efforts taken to ensure that U.S. broadcasting to Cuba complies with relevant domestic and international broadcasting standards, we reviewed legislation authorizing U.S. radio and television broadcasting to Cuba and legislation prohibiting domestic dissemination of public diplomacy information intended for foreign audiences. We also interviewed BBG officials regarding the steps taken to minimize domestic dissemination of Radio and TV Martí programming. In addition, we interviewed a representative of TV Azteca and obtained documents related to political advertisements and commercials aired during September 2008. Furthermore, we interviewed and obtained video clips from OCB employees regarding commercials aired by TV Azteca during TV Martí broadcasts. We also reviewed documents from the U.S. government, Cuban government, and International Telecommunication Union (ITU) regarding U.S. broadcasting to Cuba’s adherence to ITU regulations. Finally, we interviewed officials from State and the Federal Communications Commission about the history of U.S.-Cuban disputes regarding international broadcasting and the current U.S. 
position regarding broadcasting to Cuba. To identify oversight and management challenges related to OCB and analyze the efforts undertaken to address those challenges, we reviewed prior audit reports by GAO and the State Office of Inspector General. We also interviewed BBG staff and reviewed BBG documentation regarding the steps taken to implement prior audit recommendations. Additionally, we analyzed BBG data regarding official complaints by OCB employees since 2003 to describe the nature of the complaints. Furthermore, we interviewed BBG, IBB, and OCB officials regarding oversight and management challenges and the steps taken to address those challenges. Finally, we interviewed OCB staff regarding current and historical management and oversight challenges. We conducted this performance audit from March 2008 to January 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, John Brummet (Assistant Director), Jason Bair, Emily Gupta, Natalie Sirois, Etana Finkler, Martin de Alteriis, Ernie Jackson, and Adrienne Spahr made key contributions to this report. Joseph Carney, John Hutton, Timothy DiNapoli, Katherine Trimble, Justin Jaynes, Leigh Ann Nally, Bruce Causseaux, Gary Bianchi, Ryan Geach, Madhav Panwar, R. Gifford Howland, Jennifer Young, Charlotte Moore, Armetha Liles, and Colleen Miller also provided assistance.
For more than two decades, the U.S. government has been broadcasting to Cuba to break the Cuban government's information blockade and promote democracy in Cuba. Over this period, questions have been raised regarding the quality and effectiveness of these broadcasts. GAO was asked to examine (1) the Office of Cuba Broadcasting's (OCB) broadcasting approach and what is known about its audience; (2) how the Broadcasting Board of Governors (BBG)--which oversees U.S. government broadcasting--and OCB ensure compliance with journalistic principles; (3) steps taken to ensure adherence to domestic and international broadcasting laws, agreements, and standards; and (4) steps BBG and OCB have taken to address management challenges. GAO analyzed documentation related to strategic planning, audience research, oversight, and operations and interviewed officials from BBG, BBG's International Broadcasting Bureau (IBB), OCB, State, and other agencies. OCB broadcasts Radio and TV Martí through multiple transmission methods that face varying levels of jamming by the Cuban government. While there are no nationally representative data and some surveys of recent Cuban émigrés suggest a larger audience, the best available research suggests that Radio and TV Martí's audience is small. Specifically, less than 2 percent of respondents to telephone surveys since 2003 reported tuning in to Radio or TV Martí during the past week. Despite the importance of audience research, we found minimal sharing of such research among available sources. Because of limitations in the audience research data, decisionmakers lack basic information to help assess the relative success or return on investment from each of OCB's transmission methods. BBG's IBB--which directly oversees OCB--has established an annual program review process that serves as the main mechanism for assessing OCB's compliance with journalistic standards. 
While IBB officials report that the quality of OCB programming has improved in recent years, IBB reviews since 2003 have recommended improving adherence to certain journalistic standards, particularly in the areas of balance and objectivity. IBB's process provides useful feedback, but we found weaknesses such as limited training and operational guidance for staff conducting the reviews. OCB and IBB have taken steps to ensure that U.S. broadcasting adheres to relevant laws and standards, but some concerns remain. To comply with U.S. law, they have taken steps to minimize the domestic dissemination of OCB programming; however, OCB broadcasts reach U.S. audiences in several ways, such as through the Internet. In addition, a commercial TV station contracted to broadcast OCB programming showed some inappropriate advertisements during OCB programs. Furthermore, an international body found that OCB's TV broadcasts cause harmful interference to Cuban broadcasts, but the U.S. government has not taken steps to address this issue. Despite some efforts by BBG and OCB, oversight entities have identified problems such as poor communication by OCB management and low employee morale. For example, OCB lacks formal mechanisms for communicating with or obtaining information from employees.
The Corps’ Civil Works program is responsible for investigating, developing, and maintaining the nation’s water and related environmental resources. In addition, the Civil Works program provides disaster response as well as engineering and technical services. The Corps’ headquarters is located in Washington, D.C., with eight regional divisions and 38 districts that carry out its domestic civil works responsibilities. Each year, the Corps’ Civil Works program receives funding through the Energy and Water Development Appropriations Act. The act normally specifies a total sum for several different appropriation accounts, including investigations, construction, and operation and maintenance, to fund projects related to the nation’s water resources. The funds appropriated to the Corps are “no year” funds, which means that they remain available to the Corps until spent. The conference report accompanying the Energy and Water Development Appropriations Act specifically lists individual investigations, construction, and operation and maintenance projects and the amount of funds designated for each project. In effect, the conference report provides the Corps with its priorities for accomplishing its water resource projects. In general, the Corps becomes involved in water resource projects when a local community perceives a need and contacts the Corps for assistance. If the Corps does not have the statutory authority required for the project, the Congress must provide authorization. After receiving authorization, generally through a committee resolution or legislation and an appropriation, a Corps district office conducts a preliminary study on how the problem could be addressed and whether further study is warranted. When further study is warranted, the Corps typically seeks agreement from the local sponsor to share costs for a feasibility study. 
The Congress may appropriate funds for the feasibility study, which includes an economic analysis that examines the costs and benefits of the project or action. The local Corps district office conducts the feasibility study, which is subject to review by the Corps' division and headquarters offices. The feasibility study makes recommendations on whether the project is worth pursuing and how the problem should be addressed. The Corps also conducts needed environmental studies and obtains public comment on them. After those are considered, the Chief of Engineers transmits the final feasibility and environmental studies to the Congress through the Assistant Secretary of the Army for Civil Works and the Office of Management and Budget. The Congress may authorize the project's construction in a Water Resources Development Act or other legislation. Once the project has been authorized and after the Congress appropriates funds, construction can begin. Figure 1 shows the major steps in developing a civil works project. Reprogramming is the shifting of funds from one project or program to another within an appropriation or fund account for purposes other than those contemplated at the time of appropriation. A reprogramming transaction changes the amount of funds provided to at least two projects: the donor project and the recipient project. However, more than two projects are often involved in a single reprogramming action. For example, in an effort to make effective use of available funding, the Corps may move funds from a construction project that has slipped due to inclement weather and reprogram the funds to one or more construction projects that are ahead of schedule or experiencing cost overruns. The authority to reprogram funds is implicit in an agency's responsibility to manage its funds; no specific additional statutory authority is necessary. 
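The donor-and-recipient mechanics described above can be illustrated with a short sketch. The project names, dollar amounts, and the `reprogram` function are hypothetical; the sketch simply models a reprogramming action that shifts funds among projects while leaving the appropriation account total unchanged, as the definition above requires.

```python
# Purely illustrative model of a reprogramming action as described above:
# funds move from one or more donor projects to one or more recipient
# projects within the same appropriation account, so the account total is
# unchanged. All project names and amounts are hypothetical.

def reprogram(balances, transfers):
    """Apply a reprogramming action expressed as {project: signed_amount}.

    Donor projects carry negative amounts, recipients positive; the
    amounts must net to zero because no new funds are appropriated."""
    assert sum(transfers.values()) == 0, "account total must not change"
    for project, amount in transfers.items():
        balances[project] += amount
    return balances

construction = {
    "Project A (slipped due to weather)": 5_000_000,
    "Project B (ahead of schedule)": 2_000_000,
    "Project C (cost overrun)": 1_000_000,
}

# Weather delays Project A, so its surplus funds go to B and C.
reprogram(construction, {
    "Project A (slipped due to weather)": -1_500_000,
    "Project B (ahead of schedule)": 1_000_000,
    "Project C (cost overrun)": 500_000,
})

print(construction["Project A (slipped due to weather)"])  # 3500000
print(sum(construction.values()))                          # 8000000
```

Note that a single action can touch more than two projects, as the example shows, and that the account-level total is invariant; only the project-level designations change.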
While there are no government-wide reprogramming guidelines, the Congress exercises control over an agency's spending flexibility by providing guidelines, or non-statutory instructions, on reprogramming in a variety of ways. For example, some reprogramming and reporting guidelines have evolved from informal agreements between various agencies and their congressional oversight committees. Our review of four Civil Works projects or actions found that the cost and benefit analyses the Corps used to support these actions were fraught with errors and miscalculations and relied on invalid assumptions and outdated data. The Corps' analyses often understated costs and overstated benefits. As such, we concluded that they did not provide a reasonable basis for decision-making. In two instances, we also found that the Corps' three-tiered review process, consisting of district, division, and headquarters reviews, did not detect the problems we uncovered. These instances raised concerns about the adequacy of the Corps' internal reviews. Our review of the Corps' cost and benefit analysis of the Delaware River channel-deepening project found that it contained a number of material errors. For example, the Corps misapplied commodity rate projections, miscalculated trade route distances, and included benefits for some import and export traffic that had seriously declined over the last decade. As a result, the Corps' estimate of project benefits was substantially overstated. We found that project benefits for which there was credible support were about $13.3 million a year, compared with the $40.1 million a year claimed in the Corps' 1998 report. Specifically, we found that the Corps significantly overestimated the growth in oil import traffic for 1992 through 2005 because it used an incorrect commodity growth rate for part of the period. Use of this rate resulted in the Corps overestimating benefits by about $4.4 million. 
Additionally, the Corps’ estimate contained a computer error that overestimated this same benefit by another $4.7 million. Finally, the Corps’ project benefits attributed to the import and export of commodities such as scrap metal, iron ore, and coal were overstated by about $2.7 million. Conversely, the Corps’ cost estimate for the project contained a number of positive and negative errors that in aggregate would have reduced project costs slightly but not enough to make up for the significant decrease in project benefits. We found that the Corps’ three-tiered quality control process, consisting of district, division, and headquarters offices, was ineffective in detecting or correcting the significant miscalculations, invalid assumptions, and outdated information in the cost and benefit analysis that our review revealed. In response to our report, the Corps conducted a reanalysis of the project with updated, more complete information. This reanalysis asserted that the project could be built for $56 million less than the Corps had previously estimated. As we recommended, the Corps also had its reanalysis reviewed by an external party. Our review of the Oregon Inlet Jetty project found that the Corps’ most recent cost benefit analysis of the project, issued in 2001, had several limitations and, as a result, did not provide a reliable basis for deciding whether to proceed with the project. The Corps’ analysis did not consider all alternatives to the project, used outdated data to estimate benefits to fishing trawlers, did not account for the effects on smaller fishing vessels, and used some incorrect and outdated data to estimate damage and losses to fishing vessels. For example, the Corps did not evaluate alternatives to the jetty project and 20-foot deep channel that it proposed, although many vessels that currently use the inlet could have benefited from a shallower and less costly channel-deepening project. 
Further, the Corps used outdated data to estimate benefits of the project to larger (75-foot long) fishing trawlers, which resulted in a significant overestimate of benefits. We determined that if the Corps had incorporated more current data on the actual number of trawlers that used the inlet in its analysis, benefits would have been reduced by about 90 percent, from over $2 million annually to less than $300,000. Conversely, the Corps did not estimate the benefit to the smaller fishing vessels that use the inlet. However, since these vessels could have a shallower draft than the large vessels, they might not have benefited from the deeper channel and jetty that was proposed to benefit larger vessels. Additionally, the Corps miscalculated the benefits attributable to reduced damage to trawlers from accidents caused by conditions in the inlet. The Corps overestimated these benefits because it assumed, based on anecdotal evidence, that all of the 56 commercial fishing vessels regularly using the inlet would be damaged during the year and would incur about $7,000 each in damages. Our review of Coast Guard data showed that only about 10 commercial fishing vessels actually reported damages during the time frame the Corps considered, and these damages averaged about $1,700 per year. Because of the concerns raised by our report, the Corps, the Council on Environmental Quality, and the Departments of Interior and Commerce mutually agreed not to proceed with this project. Our review of the Corps' Common Features project, which is intended to provide flood protection to the Sacramento area, found that the Corps did not fully analyze likely cost increases or report them to the Congress in a timely manner. The Corps also incorrectly calculated project benefits because it overstated the number of properties protected by about 20 percent and used an inappropriate methodology to calculate the value of protected properties. 
After a 1997 storm demonstrated vulnerabilities in the project, the Corps substantially changed the design of the project but did not analyze likely cost increases. Some of the design changes led to substantial cost increases. For example, in some areas the Corps tripled the depth of the cutoff walls designed to prevent seepage beneath the levees, from almost 20 feet to almost 60 feet. The Corps also decided to close gaps in the cutoff walls in areas where bridges or other factors caused gaps. These changes added $24 million and $52 million, respectively, to a project that was originally, in 1996, estimated to cost $44 million. By the time the Corps reported these cost increases to the Congress in 2002, it had already spent or planned to spend more than double its original estimated cost of the project. The Corps also made mistakes in estimating the benefits from this project because in 1996 it overcounted the number of properties protected by the project by almost 20 percent and incorrectly valued these protected properties. Although the Corps updated its benefit estimate in 2002 to reflect new levee improvements authorized in 1999, we found that even this reanalysis contained mistakes in estimating the number of properties that would be protected and therefore continued to estimate higher benefits from the project than would be warranted. As with the Delaware River Deepening study, we found that all three organizational review levels within the Corps reviewed and approved the benefit analyses for this project, but these reviews did not identify the mistakes that we found. The Corps concurred with our report's recommendations and is working on a General Reevaluation Report for the uncompleted portions of the project that is due in the spring of 2007. In a 2000 report to the Congress, the Corps recommended that one of its dredges remain in a reserve status and that another be added to that status. 
However, we found that the Corps could not provide support for these conclusions and that its cost and benefit analyses supporting these conclusions had analytical shortcomings. We also found that the Corps did not perform a comprehensive analysis of the ready reserve program and in fact could not provide any documentation of what analysis, if any, it had done. In addition, the Corps' recommendation that the reserve program be continued because it was beneficial was contradicted by evidence in the report showing that the price the government paid for dredging was higher after a Corps dredge was placed in reserve than before. We also questioned whether it was prudent to add another dredge to the reserve fleet without a comprehensive analysis, given that the dredge needed significant repairs to remain in service, even in reserve. We also determined that the Corps had used outdated data and an expired policy that could raise the government's cost estimate for hopper dredging work. This cost estimate is pivotal in determining the reasonableness of private contractor bids. If all bids exceed the government estimate by more than 25 percent, the Corps may elect to perform the work itself. Moreover, in making its estimate, the Corps had not obtained comprehensive industry data since 1988, although it had obtained updated data for some cost items. In addition, the Corps used a policy on estimating transit costs that had expired in 1994. Use of this policy could significantly raise the estimate of transit costs for dredging contracts. For example, in one case, using the expired policy resulted in a transit cost estimate of about $480,000, as opposed to about $100,000 if the policy had not been used. As a result of our review, a conference committee report directed the Corps to report to the Appropriations Committees a detailed plan of how it intended to rectify the issues raised in our report. 
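The bid-reasonableness rule described above, and the way an inflated transit cost estimate affects it, can be sketched in a short illustrative script. The 25 percent threshold and the roughly $480,000 versus $100,000 transit cost figures come from the report; the function, the $1.0 million non-transit base, and the bid amount are hypothetical and only model the comparison, not the Corps' actual estimating procedure.

```python
# Illustrative sketch of the bid-reasonableness test described above.
# The 25 percent threshold and the $480,000 vs. $100,000 transit-cost
# figures are from the report; the function and all other numbers are
# hypothetical simplifications.

def all_bids_unreasonable(gov_estimate, bids, threshold=0.25):
    """Return True if every private bid exceeds the government estimate
    by more than the threshold (25 percent), in which case the Corps
    may elect to perform the work itself."""
    return all(bid > gov_estimate * (1 + threshold) for bid in bids)

# Suppose the non-transit portion of an estimate is $1.0 million and a
# single private bid comes in at $1.6 million (hypothetical figures).
bids = [1_600_000]

# With the expired policy, transit costs add about $480,000 ...
estimate_expired_policy = 1_000_000 + 480_000
# ... versus about $100,000 without it.
estimate_current = 1_000_000 + 100_000

print(all_bids_unreasonable(estimate_expired_policy, bids))  # False
print(all_bids_unreasonable(estimate_current, bids))         # True
```

The sketch shows why the estimate is pivotal: with the expired-policy transit figure, the same bid falls within 25 percent of the government estimate, so the self-performance option would not be triggered, whereas with the lower figure it would be.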
On June 3, 2005, the Corps issued a revised report to the Congress on its plans for the hopper dredge fleet. The Corps' reprogramming guidance states that only funds surplus to current year requirements should be a source for reprogramming and that temporary borrowing or loaning is inconsistent with sound project management practices and increases the Corps' administrative burden. However, we recently reported that, over a two-year period (fiscal years 2003 through 2004), the Corps moved over $2.1 billion through over 7,000 reprogramming actions. This movement of funds occurred because during these two years the Corps managed its civil works project funds using a “just-in-time” reprogramming strategy. The purpose of this strategy was to allow for the movement of funds from projects that did not have urgent funding needs to projects that needed funds immediately. While the just-in-time approach may have moved funds rapidly, its implementation sometimes resulted in uncoordinated and unnecessary movements of funds from project to project. In our review of projects from fiscal years 2003 and 2004, we found that funds were moved into projects, only to be subsequently revoked because they were excess to the projects' funding needs. For example, in fiscal year 2004, 7 percent of the funds (totaling almost $154.6 million) from every non-earmarked construction project were revoked in order to provide funding to projects designated as “national requirements” by the Corps. The national requirements projects were a group of projects for which Corps headquarters management had promised to restore funding that had been revoked in previous years. However, after the Corps moved funds into the national requirements projects, it revoked over a quarter of the funds, $38.8 million, from these projects because they did not actually need the funds. For example, one national requirements construction project, New York and New Jersey Harbor, received $24.9 million. 
All of these funds, plus an additional $10.3 million, were excess to the needs of the project at the time and were subsequently reprogrammed to other projects. Corps officials in the New York District told us that, prior to receiving the national requirements funds, they had informed Corps headquarters that they could not use these funds. We also found that the use of the just-in-time strategy resulted in funds being removed from projects without considering their near-term funding requirements, such as projects with impending studies. For example, on August 1, 2003, the Corps revoked $85,000 from the Saw Mill River and Tributaries investigation project in New York because the funds were excess to the project's needs in the current year. Six weeks later, however, on September 15, 2003, $60,000 was reprogrammed into the project because it was needed to initiate a feasibility study. Corps documents explaining the revocation of funds from the Saw Mill River and Tributaries project indicate that the Corps was aware of the project's impending needs and knew that the project would need funds again in September 2003 to execute a feasibility study. Further, under the just-in-time reprogramming strategy, funds were moved into and out of the same project on the same day as well as numerous times within a fiscal year. Overall, 3 percent of investigations and construction projects in fiscal year 2003, and 2 percent in fiscal year 2004, had funds moved both into and out of them on the same day. For example, in fiscal year 2003, the Corps used 18 separate actions to reprogram approximately $25 million into, and about $10.5 million out of, the Central and Southern Florida construction project, including three separate occasions when funds were both moved into and out of the project on the same day. 
The just-in-time reprogramming strategy also moved money into and out of projects without regard to the relative priorities of the projects. During the period of our study, the Corps lacked a set of formal, Corps-wide priorities for use when deciding to reprogram funds from one project to another. Instead, according to the Chief of the Civil Works Programs' Integration Division, during fiscal years 2003 and 2004, reprogramming decisions were left up to the intuition of program and project managers at the district level. While this decentralized system might have allowed for prioritized decision-making at the district level, when reprogramming actions occurred across districts or divisions, the Corps lacked any formal system for evaluating whether funds were moving into or out of high-priority projects. The lack of a Corps-wide priority system limits the Corps' ability to effectively manage its appropriations, especially in an era of scarce funding resources when choices have to be made between the competing needs of donor and recipient projects. Finally, the Corps' practice of allocating all funds to projects as soon as the funds are allotted to the Corps, coupled with the reprogramming flexibility provided to the districts, may result in an elevated number of reprogramming actions. Typically, once the Corps receives appropriated funds from the Congress, it disburses all of these funds directly into project accounts at the district level. Allocating funding in this manner could result in some projects receiving more money than they are able to spend. In some cases that we reviewed, the Corps disbursed an entire fiscal year's worth of funding to a project even though it knew that the project manager could not spend all of the funding. The flexibility provided to district managers once they receive their funding may also increase the number of reprogramming transactions. 
According to some Corps program managers, the relative ease of conducting reprogramming actions at the district level, without the need to obtain division or headquarters approval, creates incentives for project managers to transfer funds among projects within the district even if doing so creates a greater number of reprogramming actions. For example, when project managers have an immediate need for funds, they may be more likely to reprogram funds between projects within their own district, even if the donor project has a need for funds in a few weeks or months, because Corps guidance allows them to do so. The Corps' reprogramming practices place a large demand on the administrative resources of the agency. In fiscal year 2003, after receiving its appropriated funds from the Congress, the Corps conducted at least one reprogramming action on every business day of the fiscal year except for 4 days; after receiving its funds in fiscal year 2004, the Corps conducted at least one reprogramming action on every business day of the fiscal year except for 14 days. Each reprogramming action requires the Corps to expend time and personnel resources to locate donor projects, file the necessary paperwork, and in some cases obtain the approval of appropriate Corps staff and, possibly, the Congress. In particular, locating sources of donor funding is often a time-consuming process, as the project manager seeking funding must wait for other project managers to acknowledge excess funds and offer them for use on other projects. In response to the findings in our report, the Congress directed the Corps to revise its procedures for reprogramming funds starting in fiscal year 2006 to reduce the number of reprogramming actions that occur and institute more rational financial discipline for the Corps' Civil Works appropriations accounts. 
In all five of the reports discussed here, the Army or the Department of Defense essentially agreed with our findings and conclusions and agreed to take actions to address our recommendations. In some cases, the Corps has completed these actions, and in others they are underway or planned. Of note, in 2005, the Corps amended its policy on external review of its Civil Works decision-making documents, including cost and benefit analyses, to allow for outside review in certain cases. Specifically, according to the Corps' revised policy, external peer review of such documents will take place where the “risk and magnitude of the proposed project are such that a critical examination by a qualified person or team outside of the Corps and not involved in the day-to-day production of a technical product is necessary.” In addition, the Corps has reported that it has undertaken a number of other improvements, including (1) updating and clarifying its project study planning guidance, (2) establishing communities of practice to foster technical competence and share knowledge among individuals who have a common functional skill, and (3) reorganizing to foster integrated teamwork and streamline the project review and approval process. In closing, Mr. Chairman, we have found that the Corps' track record for providing reliable information that decision makers can use to assess the merits of specific Civil Works projects, and for managing its appropriations for approved projects, is spotty at best. The recurring themes throughout the five studies highlighted in our testimony clearly indicate that the Corps' planning and project management processes cannot ensure that national priorities are appropriately established across the hundreds of civil works projects competing for scarce federal resources. 
While we are encouraged that the Corps and/or the Congress have addressed or are in the process of addressing many of the issues we have identified relating to these individual projects, we remain concerned about the extent to which these problems are systemic in nature and therefore prevalent throughout the Corps’ Civil Works portfolio. Effectively addressing these issues may therefore require a more global and comprehensive revamping of the Corps’ planning and project management processes rather than a piecemeal approach. This concludes my prepared statement, Mr. Chairman. I would be happy to respond to any question that you or Members of the Subcommittee may have. For further information on this testimony, please contact Anu Mittal at (202) 512-3841 or mittala@gao.gov. Individuals making contributions to this testimony included Ed Zadjura, Assistant Director. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Through the Civil Works Program, the Corps of Engineers (Corps) constructs, operates, and maintains thousands of civil works projects across the United States. The Corps uses a two-phase study process to help inform congressional decision makers about civil works projects and determine if they warrant federal investment. As part of the process for deciding to proceed with a project, the Corps analyzes and documents whether the costs of constructing the project are outweighed by its benefits. To conduct activities within its civil works portfolio, the Corps received over $5 billion annually for fiscal years 2005 and 2006. During the last 4 years, GAO has issued five reports relating to the Corps' Civil Works Program. Four of these reports focused on the planning studies for specific Corps projects or actions, which included a review of the cost and benefit analyses used to support the project decisions. The fifth report focused on the Corps' management of its civil works appropriation accounts. For this statement, GAO was asked to summarize the key themes from these five studies. GAO made recommendations in the five reports cited in this testimony. The Corps generally agreed with and has taken or is taking corrective action to respond to these recommendations. GAO is not making new recommendations in this testimony. GAO's recent reviews of four Corps civil works projects and actions found that the planning studies conducted by the Corps to support these activities were fraught with errors, mistakes, and miscalculations, and used invalid assumptions and outdated data. Generally, GAO found that the Corps' studies understated costs and overstated benefits, and therefore did not provide a reasonable basis for decision-making. 
For example: (1) for the Delaware Deepening Project, GAO found credible support for only about $13.3 million a year in project benefits compared with the $40.1 million a year claimed in the Corps' analysis; (2) for the Oregon Inlet Jetty Project, GAO's analysis determined that if the Corps had incorporated more current data into its analysis, benefits would have been reduced by about 90 percent; and (3) similarly, for the Sacramento Flood Control Project, GAO determined that the Corps overstated the number of properties protected by about 20 percent and used an inappropriate methodology to calculate the value of these protected properties. In addition, the Corps' three-tiered internal review process did not detect the problems GAO uncovered during its reviews of these analyses, raising concerns about the adequacy of the Corps' internal reviews. The agency agreed with GAO's findings in each of the four reviews. For three projects the Corps has completed a reanalysis to correct errors or is in the process of doing so; it decided not to proceed with the fourth project. GAO's review of how the Corps manages its appropriations for the civil works program found that instead of an effective and fiscally prudent financial planning, management, and priority-setting system, the Corps relies on reprogramming funds as needed. While this just-in-time reprogramming approach can provide funds rapidly to projects that have unexpected needs, it has also resulted in many unnecessary and uncoordinated movements of funds, sometimes for reasons that were inconsistent with the Corps' own guidance. Because reprogramming has become the normal way of doing business at the Corps, it has increased the Corps' administrative burden for processing and tracking such a large number of fund movements. For example, in fiscal years 2003 through 2004 the Corps moved over $2.1 billion through over 7,000 reprogramming actions. 
In response to GAO's findings, the Congress directed the Corps to revise its procedures for managing its civil works appropriations, starting in fiscal year 2006, to reduce the number of reprogramming actions and institute more rational financial discipline for the program.
Even before Executive Order 12898 was issued in 1994, EPA took steps to address environmental justice. For example, in 1992, it established the Office of Environmental Equity, which is now known as the Office of Environmental Justice, to focus on environmental pollution affecting racial minorities and low-income communities, but this office has no specific role in rulemaking. In 1993, EPA created the National Environmental Justice Advisory Committee to provide independent advice and recommendations to the Administrator on environmental justice matters. The 1994 executive order stated that EPA and other federal agencies, to the extent practicable and permitted by law, shall make achieving environmental justice part of their missions by identifying and addressing, as appropriate, the disproportionately high and adverse human health or environmental effects of their programs, policies, and activities on minority populations and low-income populations in the United States. The executive order does not create a right to sue the government or seek any judicial remedy for an agency’s failure to comply with the order. After the issuance of the executive order, EPA took additional steps to identify and address environmental justice. Among other things, in 1994, the Administrator issued guidance for the rulemaking process suggesting that environmental justice be considered early in the rulemaking process. In 1995, EPA issued an Environmental Justice Strategy that included, among other things, (1) ensuring that environmental justice is incorporated into the agency’s regulatory process, (2) continuing to develop human exposure data through model development, and (3) enhancing public participation in agency decision making. 
In 2001, the Administrator issued a memorandum defining environmental justice more broadly to mean “the fair treatment of people of all races, cultures, and incomes, with respect to the development, implementation, and enforcement of environmental laws and policies, and their meaningful involvement in the decision making processes of the government.” In 2004, EPA developed new guidance for rulemaking that, like its earlier 1994 guidance, suggested that environmental justice be considered early in the rulemaking process. Under the Clean Air Act, EPA, along with state and local government units and other entities, regulates air emissions of various substances that harm human health. According to EPA data, from 1995 through 2004, emissions of certain air pollutants declined from 15 percent to as much as 31 percent, as shown in table 1. In addition, EPA sets primary national ambient air quality standards for six principal pollutants that harm human health and the environment. These standards are to be set at a level that protects human health with an adequate margin of safety, which, according to EPA, includes protecting sensitive populations, such as the elderly and people with respiratory or circulatory problems. These six pollutants include the five types of emissions listed in table 1, along with ozone, which is not emitted directly but is formed when nitrogen oxides and volatile organic compounds react in the presence of sunlight. According to EPA, in 2003, about 161 million people (about 56 percent of the population) lived in areas where the concentration of ozone met the standard; about 120 million people (41 percent) lived in areas where the concentration of particulate matter met EPA's standard; and about 168 million people (58 percent) lived in areas where the concentrations of the other four pollutants met the standards. 
EPA has a multistage process for developing clean air and other rules that it considers high priority (the top two of three priority levels) because of the expected involvement of the Administrator, among other factors. Initially, a workgroup chair is chosen from the lead program office, such as the Office of Air and Radiation (Air Office) in the case of clean air rulemaking. The workgroup chair assigns the rule one of the three priority levels, and EPA’s top management makes a final determination of the rule’s priority. The priority level assigned depends on such factors as the level of the Administrator’s involvement and whether more than one office in the agency is involved. The gasoline, diesel, and ozone implementation rules were classified as high-priority rules on the basis of these factors. In addition, these rules were considered significant because they had an effect of $100 million or more a year on the economy, or they raised novel legal or policy issues and, therefore, were required under Executive Order 12866 to be sent to OMB. Among other things, an OMB review is conducted to ensure that the rule is consistent with federal laws and the President’s priorities, including executive orders. EPA guidance identifies environmental justice as one of many factors to be considered early in the rulemaking process. In 1994, the EPA Administrator established guidance for rulemaking and identified 11 characteristics for “quality actions” in rulemaking. Among these characteristics were (1) consistency with legal requirements and national policies, which would include Executive Order 12898, and (2) adherence to the Administrator’s seven priorities, which included environmental justice. According to the guidance, managers must consider all 11 areas early on and be explicit about any trade-offs made among them. For high-priority rules, the workgroup chair is responsible for, among other things, ensuring that work gets done and the process is documented. 
Other workgroup members are assigned from the lead program office and, in the case of the two highest priority rules, from other offices. The workgroup may conduct such activities as (1) collaborating to prepare a plan for developing the rule, (2) seeking early input from senior management, (3) consulting with stakeholders, (4) collecting data and analyzing issues, (5) considering various options, and (6) recommending, usually, one option to managers. In addition, an economist (who typically participates in the workgroup) prepares an economic review of the proposed rule's costs to society. According to EPA, the “ultimate purpose” of an economic review is to inform decision makers of the social welfare consequences of the rule. Finally, after the approval of all relevant offices within EPA, the proposed rule is published in the Federal Register, the public is invited to comment on it, and EPA considers the comments. Comments may address any aspect of the proposed rule, including whether environmental justice issues are raised and appropriately addressed in the proposed rule. Sometimes, prior to the publication of the proposed rule, EPA publishes an Advance Notice of Proposed Rulemaking in the Federal Register. The notice provides an opportunity for interested stakeholders to provide input to EPA early in the process, and the agency takes such comments into account to an appropriate extent, according to EPA. In finalizing a rule, EPA is required to provide a response to all significant public comments, including those on environmental justice, and to prepare a final economic review. After these tasks are completed, the rule, if it is significant, is sent to OMB for approval. Once OMB approves the final rule and the Administrator signs it, it is published in the Federal Register. After a specified time period, the rule goes into effect. Within EPA, the Air Office is primarily responsible for implementing the Clean Air Act, as amended. 
Within that office, the Office of Air Quality Planning and Standards is primarily responsible for developing the majority of new rules for stationary sources resulting from the act. Also within the Air Office, the Office of Transportation and Air Quality has primary responsibility for developing rules and other programs to control mobile source emissions. The Office of Environmental Justice, located within EPA’s Office of Enforcement and Compliance Assurance, provides a central point for the agency to address environmental and human health concerns in minority communities and/or low-income communities—a segment of the population that has been disproportionately exposed to environmental harms and risks, according to the office’s Web site. The office works with EPA’s program and regional offices to ensure that the agency considers environmental justice. Although EPA guidance calls for environmental justice to be considered early in the rulemaking process, we found that EPA generally devoted little attention to environmental justice during the drafting of the three rules as proposed. First, environmental justice was not mentioned in an initial form used to flag potential issues for senior management. Second, it is unclear how much the workgroups discussed environmental justice because EPA officials had differing recollections on the matter. Even when the workgroups did discuss environmental justice, their ability to identify potential problems may have been limited by a lack of training and guidance, among other factors. Third, the economic reviews of two of the three proposed rules did not discuss environmental justice. Finally, when the proposed rules were published in the Federal Register and made available for public comment, all three mentioned environmental justice, but the discussion was contradictory in one case. 
Although EPA guidance suggested that environmental justice was one of the factors that should be considered early in rulemaking, EPA did not include information on environmental justice in a key form prepared for management at the beginning of the process. After being designated, the workgroup chair is to complete a “tiering form” to help establish the level of senior management involvement needed in drafting the rule. For example, the highest priority rules would involve the Administrator and more than one office in the agency. The forms for the gasoline, diesel, and ozone implementation rules stated that these rules were of the highest priority. In addition, the form asks a series of questions, the answers to which are to be used to alert senior managers to potential issues related to compliance with statutes, executive orders, and other matters. The form specifically asks about, among other things, the rules' potential to pose disproportionate environmental health risks to children and to have Endangered Species Act implications. However, the form does not include a question regarding the rules' potential to create environmental justice concerns. Moreover, on the forms that were completed for the three rules we reviewed, we found no mention of environmental justice. EPA officials had differing recollections about the extent to which the three workgroups considered environmental justice. The chairs of the workgroups for the two mobile source rules told us that they did not recall any specific time when they considered environmental justice during the rules' drafting, but other EPA officials said environmental justice was considered. The chair of the ozone workgroup told us that his group did consider environmental justice, but that he could not provide any specifics about this. 
Because 3 to 7 years have passed since these workgroups were formed and the workgroup members may not have remembered discussions of environmental justice during the rules’ drafting, we asked them to provide us with any documentation that may have indicated that environmental justice was considered. Members of the two mobile source workgroups told us that they did not have any such documents. The chair of the ozone workgroup provided us with a copy of a document, prepared by the workgroup, which identified issues needing analysis. The document stated that information would be developed for an economic review related to the proposed rule, and that such information would be used in part to support compliance with executive orders, including one related to low-income and minority populations. Even when the workgroups stated that they had considered environmental justice, we identified three factors that may have limited their ability to identify potential environmental justice concerns. First, all three workgroup chairs told us that they received no guidance in how to analyze environmental justice concerns in rulemaking. Second, workgroup members had received little, if any, training on environmental justice. Specifically, all three workgroup chairs told us they received no training in environmental justice. Two chairs did not know whether other members of the workgroups had received any training, and a third chair said at least one member had. Some EPA officials involved in developing these three rules told us that it would have been useful to have a better understanding of the definition of environmental justice and how to consider environmental justice issues in rulemaking. Finally, the Air Office’s environmental justice coordinators, whose full-time responsibility is promoting environmental justice, were not involved in drafting any of the three rules. 
Neither of the two coordinators we spoke with (the overall coordinator for the Air Office and the coordinator for the unit within the Air Office that prepared the rules) could recall being involved in drafting any of the three rules. Further, the Air Office's environmental justice coordinators said they rarely served as part of a workgroup for air rulemaking or received questions from a workgroup during the development of any rule under the Clean Air Act. EPA is required under the Clean Air Act, other statutes, and executive orders to prepare an economic review for proposed rules, and the type of economic review to be prepared depends on the rule's impact on the economy. Specifically, rules that are expected to have an effect of $100 million or more a year—like the two mobile source rules—require a more detailed “economic analysis.” Other rules—like the ozone implementation rule—still require a less detailed “economic impact assessment.” According to EPA, the “ultimate purpose” of these reviews is to inform decision makers of the social consequences of the rules. According to EPA guidance, both types of review are to discuss the rule's cost and the distribution of those costs across society. According to EPA officials, both types of review consider environmental justice. The more detailed reviews, or economic analyses, also are to discuss the rule's benefits and equity effects, which include environmental justice. For all three rules, an economic review of their economic costs and certain other features was prepared for decision makers before the proposed rules were published. However, the economic analyses of the two mobile source rules did not include an analysis of environmental justice. The supervisor of the economists who prepared the analyses said that environmental justice was not discussed in the analyses due to an oversight. 
However, he also said (and a senior policy advisor in the Air Office concurred) that EPA has not agreed upon the complete list of data that would be needed to perform an environmental justice analysis. Further, he said that EPA does not have a model with the ability to distinguish localized adverse impacts for a specific community or population. Although the economic impact assessment of the ozone implementation rule did discuss environmental justice, it inconsistently portrayed some information relevant to the rule’s potential environmental justice impacts. Specifically, the assessment stated that EPA determined the rule would not create environmental justice issues, based on its analysis of the 1997 rule that established the 8-hour ozone national ambient air quality standard. However, the earlier rule referred to its economic review, which stated it was not possible to rigorously consider the potential environmental justice effects of the rule because the states were responsible for its implementation. The inability of EPA to rigorously consider environmental justice in the 1997 rule does not seem to support EPA’s statement that there were no environmental justice issues raised by the ozone implementation rule. Also, the economic impact assessment did not address the potential environmental justice effects of a certain provision, which EPA stated 2 months later, in the proposed rule, might raise environmental justice issues. The provision would attempt to reduce vehicle use generally throughout a large metropolitan area by encouraging mixed-use growth—a combination of industrial, retail, and residential development—in portions of that metropolitan area, so transportation would be concentrated there. According to EPA, concentrating vehicle emissions and stationary emissions might create environmental justice concerns for low-income residents. 
According to EPA’s director of regulatory management, the agency did not have any guidance on whether environmental justice should be included in the preamble of a rule at the time the gasoline and diesel rules were developed. By the time the ozone implementation rule was proposed, EPA had developed guidance, which is still in place today. While this guidance indicates that environmental justice and seven other executive orders should be considered when a new rule is developed, it does not state that officials must include a discussion of environmental justice in the proposed rule. Specifically, the guidance provides that five orders should be discussed in all rules, and that three other orders—including the order relating to environmental justice—may be discussed if necessary and appropriate. (Table 2 contains a list of these executive orders.) EPA officials told us that a discussion of environmental justice was made optional under the guidance because it is infrequently identified by EPA as an issue. The publication of a proposed rule gives EPA an opportunity to explain how it considered environmental justice in the rule’s development. Although all three rules mentioned environmental justice when they were published in the Federal Register, they differed in the extent to which they discussed this issue and, in one case, the discussion appeared contradictory. In the proposed gasoline rule, EPA stated that environmental justice is an important economic dimension to consider, but it did not describe whether it was considered or whether the proposed rule raised any environmental justice issues. In the proposed diesel rule, in a section on environmental justice, EPA stated that the rule would improve air quality across the country and could be expected to mitigate environmental justice concerns about concentrations of diesel emissions. 
More particularly, EPA stated that health benefits could be expected for populations near bus terminals and commercial distribution centers, where diesel truck traffic would be concentrated, because pollutants in diesel emissions would be reduced. The treatment of environmental justice in the proposed ozone implementation rule was unclear because two sections of the rule appeared to contradict each other. In one section, EPA stated that it did not believe the rule would raise any environmental justice issues, but in another section, it specifically invited comments on an option to concentrate commercial, industrial, and residential growth, which it said “may raise environmental justice concerns.” In all three cases, EPA received and generally responded to public comments on environmental justice, although in one case it did not explain the basis for its response. In addition, in all three cases, it completed a final economic review, but these reviews generally did not provide decision makers with an environmental justice analysis. EPA published all three final rules, and EPA officials told us that they believed that these rules did not create an environmental justice issue. In Clean Air Act rulemaking, EPA is required to allow the submission of public comments, and the final rule must be accompanied by a response to each significant comment. These comments are generally submitted during the official public comment period after a rule is proposed, but they may be submitted while EPA is drafting a proposed rule. The act also requires EPA to place written comments in a public docket. In addition, according to EPA’s public involvement policy, agency officials should explain, in their response to comments, how they considered the comments, including any change in the rule or the reason the agency did not make any changes. 
Commenters from the petroleum industry, environmental groups, and elsewhere stated that the proposed gasoline rule raised environmental justice concerns. For example, one commenter representing environmental justice groups stated that the proposed rule was “completely devoid of environmental justice analysis,” and that the national benefits of the rule were derived from transferring broadly distributed emissions into areas around refineries. Also, a representative of a petroleum company stated that EPA needed to address environmental justice issues. EPA responded by taking two actions. It (1) analyzed the rule’s potential impact on communities around refineries and (2) sought stakeholders’ views on environmental justice and other issues relating to refinery emissions. First, EPA estimated how two types of refinery and vehicle emissions would change, as a result of the rule, in 86 U.S. counties that contained a refinery. The two types of emissions—nitrogen oxides and volatile organic compounds—contribute to the formation of ground-level ozone, which is regulated under the Clean Air Act because it is harmful to human health. EPA estimated that the increase in refinery emissions could be greater than the decrease in vehicle emissions, resulting in a net increase in emissions of one or both substances, in 26 counties (about 30 percent of the total), as shown in table 3. Specifically, it estimated that emissions of both substances could increase in 10 counties, with a population of about 13 million people, and that emissions of only one substance would increase in another 16 counties. On the other hand, EPA estimated that emissions of both substances could decrease in 60 counties. For example, EPA estimated that in Plaquemines Parish, Louisiana, net emissions of nitrogen oxides could increase 298 tons as a result of the rule, reflecting an increase in refinery emissions of 356 tons and a decrease in vehicle emissions of 58 tons. 
Conversely, it estimated that in Calcasieu Parish, emissions of volatile organic compounds could decrease by 61 tons, reflecting an increase in refinery emissions of 84 tons and a decrease in vehicle emissions of 145 tons. The results of EPA’s analysis appear to support those commenters who asserted that the rule might create environmental justice issues in some localities. They also appear to conflict with EPA’s statements, in its summary of and response to comments document, that “it would be unacceptable to trade the health of refining communities in exchange for generalized air pollution benefits. However we do not believe the Tier 2/gasoline sulfur control rule will cause such an exchange.” EPA also stated that, for the “vast majority” of areas near refineries, the benefits of reduced emissions from vehicles would “far outweigh” any increase in refinery emissions. When asked whether this analysis appeared to confirm concerns about the rule’s potential environmental justice impacts, EPA officials told us that the analysis was limited and overstated the net increase in refinery emissions in two ways. First, according to EPA officials, the analysis did not consider the actions that refiners would likely take to offset increases in emissions because of the new rule; EPA assumed that they would seek to reduce emissions in other ways to avoid additional regulation at the state level. EPA said it believed these actions would limit the expected increases in refining emissions. Second, EPA analyzed the effect of the rule only for 2007. EPA officials said they believed that the benefits of the rule would increase after that year, as new (and cleaner) vehicles increasingly replaced older (and less clean) vehicles. We note two other ways in which the analysis was limited in estimating the potential effects on communities near refineries. 
First, EPA did not ask refiners about the rule’s impact on their output of these two emissions, nor did EPA perform an analysis to determine how the rule would affect individual refiners’ emissions of these two substances. Instead, EPA assumed that emissions would increase by the same proportion at each refinery—nitrogen oxides, by 4.5 percent, and volatile organic compounds, by 3.32 percent—although individual refineries’ increases could be smaller or greater than these percentages. Second, EPA did not estimate the rule’s impact on other pollutants, such as particulate matter and sulfur dioxide, which might also increase as a result of the increase in refining activity needed to comply with the rule. EPA did not make the results of its analysis available to the public, either in the economic review of the final rule or elsewhere in the docket, because EPA officials told us they considered the results of the analysis too uncertain to release to the public. However, EPA officials told us that the analysis—along with their assumption that refineries were likely to emit lower emissions than the analysis indicated—supported their belief that the rule would be unlikely to cause environmental justice impacts. In addition, these officials said they believed that, if the rule did create environmental justice issues, they could be best addressed by the state or local governments. This is because any refiners needing to increase their emissions to comply with the gasoline rule would have to submit specific plans to such governments during the permitting process. Second, because EPA believed that environmental justice issues would be best addressed during the permitting process, it hired a contractor to solicit stakeholders’ potential concerns about this issue. In September 1999, the contractor interviewed individuals from EPA, environmental organizations, the oil refining industry, and state agencies responsible for regulating refinery emissions to ascertain their views. 
In December 1999, the contractor again sought stakeholders’ views, focusing largely on local environmental groups, because few of them were interviewed in September. In December, local environmental groups stated that they did not trust the state environmental agencies, and that they perceived that EPA had “talked exclusively with industry representatives prior to developing the proposed rule, but not to the local environmental organizations.” In addition, these groups said that they did not want “any added emissions to their air, even if there will be a net benefit to the nation’s environment.” In response to the stakeholders’ concerns, the contractor recommended that EPA develop permitting teams, provide information about the rule, and enhance community involvement. The contractor said that these recommendations would improve the permitting process for all stakeholders by addressing issues specific to each permit, potentially including environmental justice. EPA said that it would implement the contractor’s recommendations for improving the permitting process to deal with environmental justice issues. EPA stated that it believed that environmental justice issues could be dealt with during the permitting process at the state or local level, and officials told us that EPA has limited direct authority over permitting because most permitting occurs at the state level. Several groups commented that the states, not EPA, “act as the permitting authorities” over refineries. EPA said it agreed that states generally have primary authority over permitting. Further, Executive Order 12898 does not apply to state or local permitting authorities, and absent specific state or local law, state and local governments have no obligation to consider environmental justice when issuing permits. In response to an Advanced Notice of Proposed Rulemaking, several commenters expressed concern that the diesel rule would lead to increased refinery emissions of regulated pollutants. 
They specifically stated that EPA should address the potential for increased emissions in its economic analysis of the rule. EPA did not respond to these comments and did not factor the potential increase in regulated pollutants into its final economic analysis. In commenting on the proposed rule, several petroleum companies stated that changes they would need to make to comply with the rule might increase emissions and, therefore, lead citizens to raise environmental justice issues. EPA responded that it did not believe that complaints would delay the refineries’ permitting applications. However, EPA did not analyze the rule for environmental justice impacts, such as increases in air emissions in communities surrounding refineries. EPA officials told us that they did not perform such an analysis because they believed that they had sufficiently analyzed these issues in the context of the gasoline rule. In the proposed rule on implementing the ozone standard, EPA asked for public comments on potential environmental justice issues stemming from a specific provision that would have encouraged concentrated growth in urban areas to reduce the number of commuter vehicles contributing to ozone emissions. Seven public commenters stated that the provision could have potential environmental justice impacts. However, these comments on environmental justice did not relate to the provisions of the ozone implementation rule that have, thus far, been finalized, and therefore it was not necessary for EPA to respond to these comments. According to an EPA official, EPA is still considering the provision, and the public comments on it, for a second phase of the rule implementing a new ground-level ozone standard that EPA intends to finalize this year. After taking into consideration public comments, the agency prepares a final economic review. 
EPA guidance indicates that this final economic review, like the proposed economic review, should identify the distribution of the rule’s social costs across society. After considering public comments, EPA did prepare a final economic review for all three rules, but, for two of the three rules, environmental justice was not discussed. Even after the public expressed concerns about environmental justice, the final economic analysis of the gasoline rule, like the analysis of the proposed rule, did not discuss environmental justice. According to the supervisory economist, this omission was an oversight. Similarly, the final economic analysis of the diesel rule, like the analysis of the proposed rule, did not discuss environmental justice; the supervisory economist again attributed this omission to an oversight. As a result, EPA did not incorporate the public’s suggestions that EPA include the cost of increased refinery emissions in its economic analysis. For the ozone implementation rule, EPA did not prepare a new economic impact assessment for its final version. Instead, it issued an addendum to the proposed assessment and stated that it considered the addendum and the proposed assessment to constitute a final economic impact assessment. In addition, because EPA decided to finalize the ozone implementation rule in two phases, the addendum addressed only the part of the rule that was finalized, not the entire proposed rule. Thus, the assessment of the final rule did not change the conclusion of the assessment of the proposed rule, namely that the ozone implementation rule did not create any environmental justice issues. The publication of a final rule gives EPA another opportunity to explain how it considered environmental justice in the rule’s development. For all three rules, EPA discussed environmental justice. 
The preamble to one rule stated explicitly that it would not create an environmental justice issue. The other two rules did not explicitly state whether they would create an environmental justice issue, although the preambles to both rules discussed the mitigation of potential environmental justice effects. EPA officials told us that they believed that these rules did not create an environmental justice issue. In the preamble to the final ozone implementation rule, as in the proposed rule, EPA stated that the rule did not raise any environmental justice issues. The agency supported its statement by explaining that the rule was implementing a standard, developed in 1997, that had already taken environmental justice into account. In the preamble to the final gasoline rule in 2000, EPA stated that areas around the refineries would receive an environmental benefit from the rule, and that emissions at some refineries might increase even after installing equipment to comply with emissions controls in the Clean Air Act. It concluded that the increases in refinery emissions would be very small in proportion to the decreases in vehicle emissions in the areas around refineries. Moreover, EPA discussed its previous actions to consider environmental justice concerns, as previously discussed, and stated that it was committed to resolve environmental justice issues if they arose, through additional outreach efforts to local communities and similar means. Although the final rule did not state explicitly whether it would or would not ultimately create an environmental justice issue, EPA officials told us in late 2004 that, in their opinion, the rule did not create such an issue. Lastly, in the preamble to the final diesel rule in 2001, EPA stated that the rule could mitigate some of the environmental justice concerns pertaining to the heavy-duty diesel engines that often power city buses. 
The final rule does not discuss any potential environmental justice issues pertaining to impacts from increased refinery emissions on nearby communities, even though EPA officials told us that they recognized increased refinery emissions could have such impacts. Nevertheless, EPA officials told us in late 2004 that they believed the rule did not create environmental justice issues. We found some evidence that EPA officials considered environmental justice when drafting or finalizing the three clean air rules we examined. During the drafting of the three rules, even when the workgroups discussed environmental justice, their capability to identify potential concerns may have been limited by a lack of guidance, training, and involvement of EPA’s environmental justice coordinators. It is important that EPA thoroughly consider environmental justice because the states and other entities, which generally have the primary permitting authority, are not subject to Executive Order 12898. EPA’s capability to identify environmental justice concerns through economic reviews also appears to be limited. More than 10 years have elapsed since the executive order directed federal agencies, to the extent practicable and permitted by law, to identify and address the disproportionately high and adverse human health or environmental effects of their programs, policies, and activities. However, EPA apparently does not have sufficient data and modeling techniques to be able to distinguish localized adverse impacts for a specific community. For example, EPA has not agreed upon the complete list of data that would be needed to perform an environmental justice analysis. This suggests that, although EPA has developed general guidance for considering environmental justice, it has not established specific modeling techniques for assessing the potential environmental justice implications of any clean air rules. 
In addition, because EPA did not include a discussion of environmental justice in all of the economic reviews, its decision makers may not have been fully informed about the environmental justice impacts of all the rules. Finally, even though members of the public commented about two rules’ potential to increase refinery emissions—potential environmental justice issues—(1) in one case, EPA did not provide a response and (2) in the other case, it did not explain the basis for its response, such as the rationale for its beliefs and the data on which it based its beliefs. While these may not have been significant comments requiring a response, EPA’s public involvement policy calls for EPA to provide responses when feasible, and this policy does not appear to distinguish comments on Advanced Notices of Proposed Rulemaking from comments on proposed rules. In order to ensure that environmental justice issues are adequately identified and considered when clean air rules are being drafted and finalized, we recommend that the EPA Administrator take the following four actions:

- ensure that the workgroups devote attention to environmental justice while drafting and finalizing clean air rules;
- enhance the workgroups’ ability to identify potential environmental justice issues through such steps as (1) providing workgroup members with guidance and training to help them identify potential environmental justice problems and (2) involving environmental justice coordinators in the workgroups when appropriate;
- improve assessments of potential environmental justice impacts in economic reviews by identifying the data and developing the modeling techniques that are needed to assess such impacts; and
- direct cognizant officials to respond fully, when feasible, to public comments on environmental justice, for example, by better explaining the rationale for EPA’s beliefs and by providing its supporting data. 
EPA’s Assistant Administrator for Air and Radiation provided comments on a draft of this report in a letter dated June 10, 2005 (see app. IV). In addition, he provided technical comments that we incorporated where appropriate. First, EPA expressed the view that its rules have resulted in better air quality nationally. EPA said it was “disappointed” that we did not accurately reflect its progress in achieving environmental justice with respect to air pollution. It noted that the three rules are part of a larger program that is making significant progress in providing cleaner air nationwide. Second, EPA stated that in examining the agency’s process for considering environmental justice, we asked the wrong question, and that we should have focused on the outcome of the rulemaking process—the rules themselves. Finally, it stated that our evidence of how it considered environmental justice during the development of the three final rules did not support our conclusions and recommendations, and it provided detailed information about the efforts it took relating to environmental justice for the three final rules. We question the relevance of the information provided on air quality nationally and disagree with EPA’s other two points. First, in addition to the data we had already presented on the decrease in emissions of certain air pollutants, EPA provided data on overall improvements in air quality, specifically the decrease in the number of areas throughout the nation that did not meet certain ambient air quality standards. However, because these data provide no detail on the conditions facing specific groups—for example, residents of areas near refineries, who might be negatively affected by the two mobile source rules—these data are not necessarily germane to environmental justice. 
Although Executive Order 12898 calls on agencies to identify and address the disproportionately high and adverse effects of their programs, policies, and activities on specific groups, EPA provided no information about such groups. Also, we believe that EPA’s statement about the effect of clean air rules on national air quality largely misses the point. Second, EPA suggested that it would have been more appropriate for us to look at the outcomes of its efforts than at the process that produced the outcomes. We agree with EPA that outcomes are important, but it is not yet clear whether the rules we examined will address environmental justice issues effectively because the rules are being implemented over the next several years. It is also important to examine the process that led to the rules—as we did. The various process steps are intended to help ensure that EPA’s activities during the many phases of drafting and finalizing all rules are efficiently and effectively focused on achieving the desired outcomes. Third, although EPA stated that our evidence did not support our conclusions and recommendations, it did not challenge the accuracy of the information we provided on how it considered environmental justice during the many phases of developing the three final rules discussed in the body of our report and the three proposed rules discussed in appendix II. While it provided detailed information on certain activities and the rationale for undertaking them, our report already discussed nearly all of these activities. For example, EPA noted at length its efforts, after drafting the gasoline rule, to hold discussions with environmental justice and other groups on issues relating to permits that refiners would need if they increased their emissions to comply with the rule. We already acknowledged these efforts in our report. 
However, EPA’s efforts at this stage do not mitigate the fact that it devoted little attention to environmental justice up to that point, or the fact that discussions with affected groups, while beneficial, do not offset the effects of possible increases in refinery emissions on these groups. EPA is essentially relying on state and local governments to deal with environmental justice concerns as they implement the gasoline and diesel rules at the refinery level, even though the executive order does not apply to state or local governments, and, absent specific state or local law, they have no obligation to consider environmental justice when issuing permits. In addition, the three final rules were selected in part because they mentioned environmental justice and should have showcased EPA’s efforts to consider environmental justice. Thus, we continue to believe that the evidence we provided supports our conclusions and recommendations. Finally, aside from its general statement that the evidence we presented does not support our conclusions and recommendations, EPA generally did not respond to our four recommendations. We continue to believe that all of them are warranted. With respect to our recommendation that workgroups devote attention to environmental justice while developing clean air rules, EPA stated that it “devoted appropriate attention to environmental justice issues” in the three final rules. EPA’s guidance suggests that environmental justice be considered both at the beginning of the process (when the rules are drafted) and at the end of the process (when they are finalized). However, nearly all of the attention EPA described came at the end of the process—after receiving public comments. EPA responded in part to our recommendation on the need to provide guidance and training to workgroup members and the need to involve environmental justice coordinators. 
EPA did not provide any information that would refute the finding on the lack of guidance and training, for example, by bringing to our attention any guidance or training that it provides to workgroup members. However, EPA noted that an environmental justice coordinator “was heavily involved” in one of the three final rules and became an “ad hoc member” of the workgroup for the gasoline rule “around the time the rule was proposed.” From EPA’s comment, it is clear that the coordinator became involved only at the end of the process of drafting this rule (i.e., “around the time the rule was proposed”). Further, EPA did not mention whether a coordinator was involved at all in the other two final rules, or in the three proposed rules. EPA did not comment specifically on our recommendation on the need to improve assessments of potential environmental justice impacts in economic reviews or provide any information that would refute the finding that led to it. EPA responded in part to our recommendation on the need to respond fully, when feasible, to public comments on environmental justice. Specifically, it noted that it did not respond to comments on the Advanced Notice of Proposed Rulemaking on the diesel rule, and that it has no legal or policy obligation to respond to comments on an Advanced Notice of Proposed Rulemaking. Although we understand that EPA’s public involvement policy calls for the agency to include a response to all comments when feasible, we revised our report to reflect EPA’s comment that it had no obligation in such instances. As arranged with your office, we plan no further distribution of this report until 15 days after the date of this letter, unless you publicly announce its contents earlier. At that time, we will send copies of this report to interested congressional committees and the EPA Administrator. We will make copies available to others upon request. This report will also be available at no cost on GAO’s Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Gasoline (Tier 2) rule: This rule is designed to significantly reduce the emissions from new passenger cars and light trucks, including pickup trucks, vans, minivans, and sport-utility vehicles, to provide for cleaner air and greater public health protection. The rule treats vehicles and fuels as a system, combining requirements for cleaner vehicles with requirements for lower levels of sulfur in gasoline.

Diesel rule: This rule reduces particulate matter and nitrogen oxides emissions from heavy-duty engines by 90 percent and 95 percent below current standard levels, respectively, to decrease health impacts caused by diesel emissions. Under this rule, a heavy-duty vehicle and its fuel are regulated as a single system, combining requirements for new heavy-duty engines to meet more stringent emission standards and reductions in the level of sulfur allowable in highway diesel fuel.

Ozone implementation rule: This rule is intended to provide certainty to states and tribes regarding classifications for the 8-hour national ambient air quality standards (NAAQS) and their continued obligations with respect to existing requirements. The rule addresses the following topics: classifications for the 8-hour NAAQS; revocation of the 1-hour NAAQS; how antibacksliding principles will ensure continued progress toward attainment of the 8-hour ozone NAAQS; attainment dates; and the timing of emissions reductions needed for attainment.

Because of substantial congressional interest, we are including information about how the Environmental Protection Agency (EPA) considered environmental justice during the drafting of three additional proposed clean air rules, up through their publication in the Federal Register. 
The three proposed rules we reviewed were as follows:

- The December 2002 New Source Review proposed rule, which proposed a change in the category of activities that would be considered routine maintenance, repair, and replacement under the New Source Review Program.
- The January 2004 mercury proposed rule, which proposed two methods for regulating mercury emissions from certain power plants.
- The January 2004 proposed Clean Air Interstate Rule (interstate rule), which, among other things, proposed a requirement that 29 states and the District of Columbia revise their state plans to include control measures limiting emissions of sulfur dioxide and nitrogen oxides.

When we completed our initial fieldwork, these rules had not been finalized. Since then, the mercury and interstate rules have been finalized and a portion of the New Source Review rule has been finalized. Additional detail on these rules is provided in table 4. EPA officials told us that they did not consider environmental justice while drafting two of these three proposed rules. Moreover, in our analysis of these rules’ economic reviews, we found no discussion of environmental justice for two of the three rules. Finally, when published in the Federal Register, none of the proposed rules discussed environmental justice. The three workgroup chairs provided initial reports to senior management using tiering forms to help establish the level of senior management involvement needed in developing each rule. In these initial reports, all three proposed rules were classified as top priority. The forms were to be used to alert senior managers to potential issues related to compliance with statutes, executive orders, and other matters. Environmental justice was not a specific element on the form at the time, and the reports for the three rules did not discuss environmental justice. 
The chair of the New Source Review workgroup said his group did not consider and address environmental justice early in the development process because the rule was to be applied nationally and was prospective in nature. The chair of the interstate rule workgroup said his group conducted no environmental justice analysis. Finally, the chair for the mercury workgroup said his group considered environmental justice in drafting the proposed rule, but he provided no details about how it was considered. Workgroup members’ ability to identify potential environmental justice concerns may have been limited by a lack of guidance, training, and involvement by environmental justice coordinators. Specifically, all three chairs said that their workgroups did not receive guidance for how to consider environmental justice when analyzing the rules. Furthermore, while the mercury workgroup chair said that he had received training on environmental justice, the other two chairs said they had received no such training. All three chairs said they did not know whether other members in their workgroups had received environmental justice training. Also, all three chairs said that environmental justice coordinators did not assist their workgroup. EPA prepared an economic analysis for all three rules. Among these economic analyses, only the review for the New Source Review rule stated that environmental justice was unlikely to be a problem because the potential for disproportionate effects generally occurs as a result of decisions on siting new facilities, and EPA noted that this rule dealt exclusively with existing facilities. The analysis for the mercury rule did not discuss environmental justice. The analysis stated that—due to technical, time, and other resource limitations—EPA was unable to model the changes in mercury emissions that might result from the rule. 
However, EPA stated that to the extent mercury emissions do have adverse health effects, the proposed rule would reduce emissions and subsequent exposures of people living near power plants. The analysis for the interstate rule did not discuss environmental justice. It was not discussed, according to the supervisor for economists in the Office of Air and Radiation, because the rule was expected to provide nationwide benefits and because EPA lacked the data and modeling capability to predict how regulated entities will react to the requirements of the rule. We found no discussion of environmental justice in any of the three rules, as they were published in the Federal Register. Neither Executive Order 12898 nor EPA guidance requires a discussion of environmental justice in proposed rules. According to EPA officials, such a discussion was not necessary for these three rules because they did not believe the rules would have any environmental justice impacts. To determine how EPA considered environmental justice when developing significant rules under the Clean Air Act, as amended, we reviewed an EPA database of clean air rules finalized during fiscal years 2000 through 2004. We assured ourselves that the database was reliable for our purposes. Rules are considered significant and sent to the Office of Management and Budget for review if their expected annual costs or benefits exceed $100 million; they raise novel legal or policy issues; or they may interfere with actions undertaken by another federal agency or state, local, or tribal governments. In addition, rules that involve the Administrator or an interoffice review are considered high priority within EPA. We identified 19 clean air rules EPA finalized in our time period that were considered significant and a high priority. 
We then reviewed the 19 rules in the Federal Register to identify those rules that mentioned the terms “environmental justice” or “Executive Order 12898” and found 3 rules that mentioned one or both terms. The 16 rules that did not mention environmental justice included rules relating both to mobile sources, such as a rule to control the emissions of air pollution from nonroad diesel engines and fuels, and rules relating to stationary sources, such as a final rule to establish a national emission standard for hazardous air pollutants at iron and steel foundries. We focused on the three rules that mentioned environmental justice because we believed they were more likely to demonstrate how EPA considered this issue in clean air rulemaking. To determine how EPA considered environmental justice as it drafted and finalized clean air rules, we reviewed EPA documents and interviewed EPA officials, including workgroup leaders. To characterize how or whether EPA’s economic reviews for the rules considered environmental justice, we analyzed both the preliminary and final economic reviews for each rule and interviewed the supervisor of the economists who developed the reviews. To determine whether the public raised environmental justice concerns in commenting on proposed rules and how EPA addressed those comments, we reviewed EPA documents, such as the agency’s summaries of comments and responses, and the final rules as published in the Federal Register. We conducted our work between July 2004 and May 2005 in accordance with generally accepted government auditing standards. The following are our comments on the Environmental Protection Agency’s letter dated June 10, 2005. 1. We disagree with EPA’s assertion that the Air Office paid appropriate attention to environmental justice issues. 
We found that EPA devoted little attention to environmental justice in four phases of drafting the rules and considered environmental justice to varying degrees in the three phases of finalizing them. EPA provided virtually no new information on its activities during these phases. 2. EPA was referring to our report entitled Clean Air Act: EPA Has Completed Most of the Actions Required by the 1990 Amendments, but Many Were Completed Late, GAO-05-613 (Washington, D.C.: May 27, 2005). 3. As we stated, several public commenters said that the ozone implementation rule, as proposed in June 2003, could have potential environmental justice impacts. As we also stated, in April 2004, EPA finalized a portion of the ozone implementation rule, which it then called Phase I; but it did not include the provision that drew the public comments on environmental justice. EPA officials are still considering this provision for a second phase of the rule implementing a new ground-level ozone standard, called Phase II. It is true, as EPA stated, that we did not identify any environmental justice issues in the Phase I rule. However, our objective was not to identify such issues with the rules, but to review how EPA considered environmental justice in developing the rules. 4. On the basis of EPA’s letter, we added clarification about the “seemingly contradictory statements” in our discussion of the ozone implementation rule. 5. As we stated, public commenters did raise such issues about all three rules as they were proposed. As we also stated, EPA did not finalize the portion of the ozone implementation rule that it, and others, said could raise environmental justice issues. 6. While EPA stated that our report is misleading and needs further explanation of context, it is not clear from EPA’s comments how the agency would want us to frame this issue differently. 
First, EPA comments that EPA staff believed that, as a factual matter, as the rule was implemented, it was unlikely to pose environmental justice issues. Similarly, we state in the report that EPA officials believed that the final rules did not create environmental justice issues. Second, EPA stated that we should note the steps that the agency took to address potential environmental justice concerns. We did so, noting EPA’s discussion of these steps in the final rule. Moreover, in its letter, EPA stated that it agreed with us that the gasoline rule (finalized in February 2000) would create “potential environmental justice issues.” It was public commenters, not we, who raised concerns about potential environmental justice issues. 7. We clarified in the Highlights page and other portions of the report to note that EPA officials told us, after the rules were finalized, that none of the rules created an environmental justice issue. 8. We clarified the source of EPA’s statements. The preamble of the final rule is discussed in our report. 9. According to EPA, we stated that the Air Office’s environmental justice coordinators were not involved in the gasoline rulemaking. In fact, we stated only that the coordinators were not involved in developing the rule, as opposed to public outreach efforts, where they were involved. EPA’s description of how and when a coordinator was involved buttressed our point. According to EPA’s letter, the environmental justice coordinator was involved only in resolving “permitting process issues” and became involved only “around the time the rule was proposed.” Similarly, according to EPA’s letter, the Office of Environmental Justice representative was involved only in discussions of “permitting issues” and only “after the proposed rule was published.” Thus, it appears that in neither case were they substantively involved in drafting this rule. We added language in the report clarifying the discussion of the process. 10. 
As EPA noted, it devoted resources to seeking public involvement while finalizing the gasoline rule. Accordingly, we changed our characterization of EPA’s efforts in finalizing the three rules. 11. EPA’s public involvement policy provides that it will, to the fullest extent possible, respond to public comments. We did not see a distinction in the policy between comments on Advanced Notices of Proposed Rulemaking and comments on proposed rulemakings. However, EPA interprets its policy as requiring a response to comments on the latter but not the former, and we have revised our report accordingly. In addition to the individual named above, the key contributors to this report were John Delicath, Michael A. Kaufman, David Marwick, Thomas Melito, and Daniel J. Semick. Tim Guinane, Anne Rhodes-Kline, and Amy Webbink also made important contributions.
Executive Order 12898 made achieving "environmental justice" part of the mission of the Environmental Protection Agency (EPA) and other federal agencies. According to EPA, environmental justice involves fair treatment of people of all races, cultures, and incomes. EPA developed guidance for considering environmental justice during the development of rules under the Clean Air Act and other activities. GAO was asked to examine how EPA considered environmental justice during two phases of developing clean air rules: (1) drafting the rule, including activities of the workgroup that considered regulatory options, the economic review of the rule's costs, and making the proposed rule available for public comment, and (2) finalizing the rule, including addressing public comments and revising the economic review. GAO reviewed three clean air rules: the gasoline, diesel, and ozone implementation rules. When drafting the three clean air rules, EPA generally devoted little attention to environmental justice. While EPA guidance on rulemaking states that workgroups should consider environmental justice early in this process, GAO found that a lack of guidance and training for workgroup members on identifying environmental justice issues may have limited their ability to identify such issues. In addition, while EPA officials stated that economic reviews of proposed rules consider potential environmental justice impacts, the economic reviews for the gasoline and diesel rules did not provide decision makers with environmental justice analyses, and EPA has not identified all the types of data necessary to analyze such impacts. Finally, EPA mentioned environmental justice in all three rules when they were published in proposed form, but the discussion in the ozone implementation rule was contradictory. In finalizing the three clean air rules, EPA considered environmental justice to varying degrees. Public commenters stated that all three rules, as proposed, raised environmental justice issues. 
In responding to such comments on the gasoline rule, EPA published its belief that the rule would not create such issues, but did not publish the data and assumptions supporting its belief. Specifically, EPA did not publish (1) its estimate that potentially harmful air emissions would increase in 26 of the 86 counties with refineries affected by the rule or (2) its assumption that this estimate overstated the eventual increases in refinery emissions. For the diesel rule, in response to refiners' concerns that their permits could be delayed if environmental justice issues were raised by citizens, EPA stated that the permits would not be delayed by such issues. Moreover, after reviewing the comments, EPA did not change its final economic reviews to discuss the gasoline and diesel rules' potential environmental justice impacts. Finally, the portions of the ozone implementation rule that prompted the comments about environmental justice were not included in the final rule. Overall, EPA officials said that these rules, as published in final form, did not create an environmental justice issue.
The Medicare hospice benefit, authorized in 1982 under part A of the Medicare program, covers medical and palliative care services for terminally ill beneficiaries. A Medicare-certified hospice provides physician services, nursing care, physical and occupational therapy, home health aide services, medical supplies and equipment, and short-term inpatient hospital care for pain control and symptom management. In addition, the hospice benefit provides coverage for several services not generally available under the regular fee-for-service Medicare benefit. These include outpatient prescription drugs for treating pain and other symptoms of the terminal illness, homemaker services, short-term inpatient respite care, and bereavement counseling for the patient’s family. Patients may receive services from freestanding hospice providers or from a hospice program based in a home health agency, hospital, or skilled nursing facility. For each day a beneficiary is enrolled, the hospice provider is paid an all-inclusive, prospectively determined rate, depending on the level of hospice care provided (routine home care, continuous home care, inpatient respite, or general inpatient care). Initial payment rates were based on cost data reported by 26 hospice programs that participated in Medicare’s hospice demonstration project from 1980 to 1982. Since 1993, these rates have been updated by an annual statutory adjustment factor tied to inflation in the hospital market basket (a measure of the cost of goods and services purchased by hospitals nationwide). Eligibility for hospice services requires that the beneficiary’s physician and the hospice medical director (or other physician affiliated with the hospice) certify that the individual’s prognosis is for a life expectancy of 6 months or less, if the terminal illness runs its normal course. 
Beneficiaries who elect hospice must waive all other Medicare coverage of care related to their terminal illness, although they retain coverage for services unrelated to their terminal illness. A beneficiary can cancel his or her election of hospice benefits at any time and return to regular Medicare, and beneficiaries are free to reselect hospice coverage at a later date. While there are currently no limits on the number of days an individual can receive hospice care, a beneficiary’s prognosis must be reaffirmed at 90 days, at 180 days, and every 60 days thereafter. The hospice eligibility requirement that a beneficiary be certified as having a prognosis of 6 months or less has been an ongoing concern expressed by advocates and providers. The requirement has been challenged as difficult to implement and a deterrent to hospice referrals, especially for beneficiaries with noncancer diagnoses. Research suggests that it can be difficult for physicians to accurately predict whether a patient is likely to die within 6 months. It is particularly difficult to estimate life expectancy for persons with noncancer diagnoses because the course of their disease is likely to be erratic. For example, patients with heart disease are more likely to die suddenly than persons with cancer, who commonly have a period of steady decline before death. Similarly, very elderly people in frail health or with certain chronic illnesses may experience long periods of declining health punctuated by several medical crises—any one of which can be fatal. In such cases, physicians may find it difficult to justify a hospice referral for beneficiaries who appear to be relatively stable, and, as a result, the physicians may delay initiation of hospice services until a medical crisis occurs shortly before death. From 1992 to 1998, the number of Medicare beneficiaries enrolling in hospice more than doubled, with growth in all population subgroups and in all states. 
Growth was particularly rapid among beneficiaries with diagnoses other than cancer. At the same time, many beneficiaries had shorter stays. On average, the days of hospice service used per beneficiary declined by about one-fifth during the 7-year period, and beneficiaries with diagnoses other than cancer experienced the sharpest reductions. Our analysis of Medicare claims data indicates substantial growth in hospice use. The number of beneficiaries electing hospice care increased 2 ½ times from 1992 to 1998, from about 143,000 to nearly 360,000 persons annually. (See fig. 1.) Across most demographic groups, the use of hospice services has grown at a relatively consistent rate. Thus, hospice users today are similar to users in 1992; the distribution of enrollees by race has not changed (89 percent are white), and the proportion of enrollees who are women has climbed only slightly (from 50 to 54 percent). However, the use of hospice services grew more rapidly among beneficiaries aged 80 and older than it did among younger beneficiaries. This age group now makes up 47 percent of Medicare hospice enrollees, up from 35 percent in 1992. Overall, 19 percent of Medicare beneficiaries who died in 1998 received hospice services, compared with 8 percent in 1992. However, this measure understates the proportion of Medicare beneficiaries who choose hospice care among those for whom the benefit was intended. According to a former president of the National Hospice Organization, “when the number of deaths nationwide is adjusted to reflect only those that are likely to be appropriate for hospice care, the percentage of dying patients cared for in hospice care is probably about 40 percent.” Some groups of beneficiaries are more likely to choose hospice services than others. For example, 20 percent of white Medicare beneficiaries who died in 1998 elected hospice services, compared with 15 percent of black beneficiaries who died that year. 
Similarly, the use of hospice services is more common among beneficiaries who are enrolled in Medicare health maintenance organizations (HMOs) at the end of life than among those in fee-for-service plans. Of the beneficiaries who died in 1998, 27 percent of those enrolled in an HMO elected hospice, compared with 18 percent of fee-for-service beneficiaries. (See app. II for detailed information about hospice use rates among decedents.) In addition, the proportion of Medicare decedents who used the hospice benefit varies widely by state. For example, in 1998, the number of hospice users as a share of Medicare decedents was more than four times higher in Arizona than in Maine. Table 1 shows states with the highest and lowest rates of hospice use in 1998. Although people who die from cancer are more likely to choose hospice services than are those who die from other conditions, the use of hospice services by beneficiaries with noncancer diagnoses has increased rapidly. From 1992 to 1998, hospice enrollment by beneficiaries with cancer increased 91 percent, while enrollment by beneficiaries with all other conditions increased 338 percent. The most dramatic growth in use was among individuals with other terminal conditions, such as heart disease, lung disease, stroke, or Alzheimer’s disease. About 43 percent of beneficiaries who elected hospice in 1998 had noncancer diagnoses, compared with about 24 percent in 1992. Table 2 shows the distribution of new hospice enrollees by primary diagnosis. For many of the leading causes of death, the proportion of elderly decedents who use the hospice benefit has increased. In 1997, about half of the people aged 65 and older who died from cancer had used hospice services, compared with about one-fourth in 1992. This pattern generally held for breast cancer, lung cancer, and prostate cancer. 
However, the use of hospice services is even more common among persons with other types of cancer; roughly 75 percent of people aged 65 and older who died from brain or liver cancer in 1997 used hospice services before death. The proportion of elderly decedents who used hospice services also increased among beneficiaries who died from other causes. Table 3 shows the change in hospice use rates from 1992 to 1997 for common hospice diagnoses. Although more Medicare beneficiaries are receiving hospice services, on average, they are receiving fewer days of care than did beneficiaries in the past. From 1992 to 1998, average length of stay declined 20 percent (from 74 to 59 days), while median length of stay declined 27 percent (from 26 to 19 days). (See fig. 2.) The overall decline in average length of service appears to have been driven by both (1) a reduction in the proportion of beneficiaries with very long hospice stays and (2) an increase in the share of users with very short stays. (See table 4.) From 1992 to 1998, the share of hospice enrollees with more than 6 months of service use declined from 9.3 to 7.3 percent. Over the same period, the proportion of beneficiaries who used hospice for a very brief period before death rose sharply. In 1998, 28 percent of all beneficiaries using hospice care did so for 1 week or less. The decline in the average number of hospice days used has been especially dramatic among beneficiaries with a primary diagnosis other than cancer. While these beneficiaries historically had many more days of care than cancer patients, the average number of days used declined 38 percent between 1992 and 1998. In comparison, average days used by beneficiaries diagnosed with cancer declined by 14 percent. As a result, differences in length of stay across diagnosis categories have narrowed considerably. In 1998, cancer patients used an average of 54 days of hospice care, while noncancer patients used an average of 68 days. 
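The length-of-stay declines cited above are simple percent changes computed from the reported day counts. A minimal sketch of the arithmetic (illustrative only; the function name is ours, and the day counts are the rounded figures reported in the text):

```python
def percent_decline(start_days, end_days):
    """Percent decline from start_days to end_days, rounded to a whole percent."""
    return round(100 * (start_days - end_days) / start_days)

# Average length of stay: 74 days (1992) to 59 days (1998)
avg_decline = percent_decline(74, 59)     # 20 percent

# Median length of stay: 26 days (1992) to 19 days (1998)
median_decline = percent_decline(26, 19)  # 27 percent

print(avg_decline, median_decline)
```

The same calculation underlies the 38 percent (noncancer) and 14 percent (cancer) declines in average days used, given the corresponding underlying counts.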
Figure 3 compares the decline in the average number of hospice days used for beneficiaries with cancer and noncancer diagnoses. At the state level, average length of service declined in 42 of 50 states and the District of Columbia from 1992 to 1998, and variation in average length of service across states lessened considerably. (State data appear in app. II.) In 1992, 27 states had average service periods within 10 days of the 74-day national average. By 1998, 36 states had average service periods within 10 days of the 59-day average. Several factors influence beneficiary choice about whether and when to use hospice care. These include physician preferences and referral practices, individual patient choice and circumstances, and general awareness of the benefit among the public and professional communities. In addition, recent federal oversight of compliance with patient eligibility requirements may have affected certain beneficiaries’ use of the hospice benefit. Although Medicare beneficiaries and their families make the decision about whether and when to initiate hospice services, physician willingness to discuss options for end-of-life care is important to the decision. However, the research literature indicates that not all physicians are comfortable discussing end-of-life care, and some may hesitate to suggest hospice care for other reasons, such as concerns about relinquishing control of their patients’ care. Even when the issue has been broached, some beneficiaries choose instead to continue curative or life-extending treatments. Patient advocacy groups, several medical societies, and others have called for greater public and professional awareness of options for care of the dying, which has led to a range of educational efforts designed to increase awareness of hospice and its benefits. 
The research literature indicates that because patients and their families rely heavily on physician recommendations for treatment, including recommendations for end-of-life care, physicians are an influential factor in patient entry into hospice. Physicians initiate most referrals to hospice, and they may continue to care for their patients after enrollment as part of the hospice team. However, research has shown that many physicians are poorly trained in the care of the dying and are often uncomfortable discussing options for end-of-life care or the cessation of curative treatment. A recent review of 50 top-selling textbooks from several medical specialties found that most provided inadequate information on end-of-life issues, with oncology textbooks among those particularly likely to provide no information about key aspects of end-of-life care. Physician referral to hospice may be limited by other factors, as well. For example, experts in the area of palliative care, as well as the research literature, suggest that some physicians may not be aware that they can continue to provide services after a beneficiary has entered hospice and may delay referral out of concern about losing control of the patient’s care. The use of hospice services by Medicare beneficiaries requires not just awareness of the benefit and a physician’s certification of prognosis but also acceptance that death is the outcome of their illness and the choice by beneficiaries to give up a portion of their standard Medicare benefits in order to receive hospice care. Once a beneficiary is enrolled, no other services related to the terminal condition are covered under Medicare. HCFA officials and others also noted that improvements in cancer care and the addition of new treatment options may be prompting some beneficiaries to pursue new curative options until very shortly before death, thus contributing to the trend of shorter hospice stays. 
Other beneficiaries may favor continuing aggressive, life-extending treatments up until the time of death and not enter hospice at all. According to HCFA officials, it may be that some terminal patients do not want hospice care, and that should be their right. Research suggests that beneficiaries who do not consider hospice care may be unwilling to confront the terminal nature of their illness, may not know that the alternative exists, or may misunderstand the services available through hospice care. The Institute of Medicine (IOM) noted that patients are influenced by the general unwillingness to accept limits of all types, including those of aging and death. A Gallup poll in 1996 found that although a majority of people expressed interest in hospice care, most also said that they would still seek curative care. In some cases, a beneficiary’s circumstances may complicate hospice enrollment. Hospice is designed to allow the beneficiary to remain at home during his or her last few weeks of life, where family and friends are expected to deliver most of the routine day-to-day care. Hospice staff offer more specialized care and respite care to give family members a break when they need one. Thus, some hospice programs limit participation to beneficiaries who have a caregiver at home. Others permit beneficiaries without a caregiver at home to enter the hospice program with the understanding that transfer to a nursing home will be required when their needs for assistance reach a certain stage. Public and professional awareness also influences the use of the Medicare hospice benefit. The need for greater public and professional understanding of options for end-of-life care, including hospice, has been highlighted in several recent congressional hearings and in other public forums. In addition, several medical societies, patient advocacy groups, and the hospice industry have undertaken a variety of efforts to educate their members and the public about end-of-life care options. 
For example, the American Medical Association and the Robert Wood Johnson Foundation are developing a core curriculum for educating physicians in end-of-life care. The Medicare Rights Center, a consumer advocacy and education organization, is conducting a national campaign to increase awareness of the Medicare hospice benefit among health professionals. Also, the National Hospice and Palliative Care Organization has published a variety of materials on public education and outreach strategies for its members. Industry and patient advocacy groups contend that recent federal scrutiny of provider compliance with program eligibility requirements has inappropriately limited access to hospice for certain beneficiaries. While federal scrutiny may have contributed somewhat to the existing trend toward shorter hospice enrollment periods, continued growth in the number of beneficiaries receiving hospice services makes it difficult to identify the extent to which federal scrutiny may have deterred access. Furthermore, the use of hospice services has increased most rapidly among beneficiaries with diagnoses other than cancer—those for whom arriving at a 6-month prognosis may be more difficult. In 1995 and 1996, the Department of Health and Human Services’ (HHS) Office of the Inspector General (OIG) investigated the eligibility status of Medicare beneficiaries receiving hospice services as part of a larger investigation of fraud and abuse in Medicare. Specifically, OIG reviewed the admission decisions made for hospice patients with very long stays at 12 hospices in four states; it found that many of these patients did not meet eligibility criteria upon admission to hospice. OIG followed this effort with other reviews of beneficiary eligibility, encompassing a larger sample of hospices, and found that the vast majority of Medicare beneficiaries receiving hospice services were eligible for such services. 
Patient advocacy groups and the hospice industry assert that this federal scrutiny of compliance with the 6-month eligibility rule has had a chilling effect on entry into hospice for noncancer beneficiaries, for whom it may be more difficult to establish a 6-month prognosis with confidence. They contend that hospice providers are more cautious about admitting beneficiaries with noncancer diagnoses as a result, leading to delays in hospice entry for persons wishing to use the benefit. Although the percentage increase in beneficiaries electing hospice slowed somewhat from 1995 through 1998 compared with the prior period, it is difficult to know what portion of this slower growth is attributable to the effect of federal scrutiny and what portion is attributable to other factors, such as the larger base of beneficiaries already using hospice. The OIG scrutiny of beneficiary eligibility may have contributed to later hospice entry for some beneficiaries, to the extent that hospice providers responded to the oversight with greater caution about beneficiary eligibility. However, the trend toward fewer days of hospice use began before the period of federal scrutiny. As shown in figure 4, the average length of service for both cancer and noncancer hospice users peaked by 1994, before scrutiny of the hospice benefit increased. Furthermore, physician groups we spoke with did not cite caution among hospice providers about beneficiary eligibility as a primary barrier to the initiation of hospice services for their patients. According to the American Society of Clinical Oncology, barriers to timely hospice care for cancer patients include the attitudes of physicians and patients toward death and reluctance to talk about death until the very end of life. While the OIG reviews were under way, the National Hospice Organization developed guidelines to assist physicians and hospices in determining a 6-month prognosis for patients with selected noncancer diagnoses. 
These included amyotrophic lateral sclerosis (ALS), dementia, human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS), heart disease, pulmonary disease, liver disease, stroke and coma, and kidney disease. In order to enhance accuracy and uniformity in the claims review process, HCFA distributed these guidelines to the intermediaries that process hospice claims for Medicare to use in assessing compliance with benefit requirements. The intermediaries have since adopted these guidelines as formal local medical review policies. Concerns have been raised that using these guidelines as a standardized basis for determining Medicare hospice eligibility limits access to hospice, particularly for patients with noncancer diagnoses. Industry representatives assert that the guidelines require further development to improve prognostic confidence and accuracy before they would be appropriate as formal medical review policies. However, intermediaries point out that while medical review policies specify clinical criteria for establishing a patient’s 6-month prognosis, they allow for variation in individual cases. For example, one intermediary’s medical review policy for heart disease notes that “some patients may not meet the criteria, yet still be appropriate for hospice care, because of other comorbidities or rapid decline.” According to Medicare program guidance to all hospices, the fact that a hospice patient lives beyond 6 months does not, by itself, constitute grounds for a determination that the patient was never eligible for hospice care or that Medicare does not cover services provided to the patient. Typically, if a question is raised as to whether a patient is terminally ill, the intermediary asks the hospice to furnish information necessary to affirm the patient’s prognosis. 
The rates of medical reviews of claims began increasing in 1995, at HCFA's direction. Four of the five intermediaries reported that, by 1999, review rates ranged from 0.8 to 4.2 percent of all hospice claims processed. Nearly all the hospice providers we spoke with said they consult their intermediaries' medical review policies as part of the admission screening process. Asked about the effect of the review policies on admitting patients, some hospices reported that using these criteria has decreased the likelihood of admitting patients with noncancer conditions, while others said that the review criteria have increased the likelihood of admissions or have had no effect at all. Sustained growth in the number of hospice providers participating in Medicare and in their distribution throughout the country suggests that hospice services are now more widely available to program beneficiaries. While all sectors of the hospice industry have grown over the past decade, recent growth has been particularly strong in the for-profit sector and among large hospice programs. At the same time, hospice industry officials report growing cost pressures from shorter patient stays and changes in the practice of palliative care. Because data on provider costs are not available, however, it is not clear how these cost factors affect providers and beneficiaries. Although the overall rate of growth has slowed somewhat in the past few years, new hospice providers continue to enter the Medicare program every year. As shown in figure 5, the number of Medicare-certified hospice providers nationwide grew by 82 percent from 1,208 in 1992 to 2,196 in 1999. Each year during the period, additional hospice programs became certified for Medicare, although the number of new entrants declined from 274 in 1994 to 46 in 1999, and the number of hospices leaving Medicare exceeded the number of new entrants in 1999.
(Many of those leaving were based in home health agencies (HHA) that may have closed because of changes in HHA payments enacted in the BBA.) The increased number reflects not only new hospices but also growing participation in Medicare. (See app. II for more detail on changes in hospice supply and distribution.) In 1989, we estimated that only about 35 percent of the approximately 1,700 hospice providers nationwide participated in Medicare. By 1998, the National Hospice and Palliative Care Organization estimated that 80 percent of hospices were certified to serve Medicare patients. Over this period, all types of hospice providers grew, in rural and urban areas and in almost every state. From 1992 to 1999, the rate of growth was greatest among for-profit providers and those in rural areas. Also, large providers accounted for an increasing share of the services delivered. (See table 5.) The number of for-profit providers increased nearly fourfold, and the number of large hospice programs (those serving 500 or more Medicare patients per year) more than tripled over the period. In addition, the number of rural providers increased by 116 percent while the number of urban-based providers increased by 64 percent. Even with high growth in these sectors of the industry, the majority of hospices are small programs (with fewer than 100 Medicare patients per year), organized as not-for-profit, and located in urban areas. Even with certificate-of-need (CON) requirements that apply to hospice providers in 14 states, the number and size of Medicare hospice providers increased in almost every state from 1992 to 1998. Among states with large Medicare enrollment and no CON requirements, the most dramatic growth was in Texas, where the supply of hospice providers relative to the size of the Medicare population nearly doubled. Among the CON states with large Medicare populations, providers increased the number of patients served, while growth in the number of providers was constrained.
For example, in Florida and New York the number of hospices per million beneficiaries remained virtually unchanged; however, the number of beneficiaries each hospice served grew 66 and 105 percent, respectively. (See table 6.) Even as the hospice industry has grown, changes in the use of the hospice benefit and the delivery of hospice care have raised concerns about cost among providers. Most significantly, declines in the average enrollment period have resulted in fewer days over which providers can spread the fixed costs associated with a patient's stay in hospice. In addition, providers report that changes in the practice of palliative medicine have made the use of higher-cost services more common. However, because reliable data on provider costs are not available, it is not clear how these factors may affect hospices' financial status or their ability to serve Medicare beneficiaries. Industry representatives point out several areas of change that they contend are adversely affecting the financial condition of providers. Specifically, under Medicare's per diem payment system for hospice care, hospices have traditionally offset the higher-cost days that occur at admission and during the period immediately preceding death with lower-cost days of less intensive care. For example, costs for admitting and assessing a new patient, establishing a care plan, and delivering medical equipment are incurred during the first few days of enrollment and do not vary with the patient's period of service. As enrollment periods have declined, hospices have had fewer days over which they can spread the higher costs associated with the start and end of a patient's stay. As more patients enter hospice later in the course of their terminal illness, they enter with higher levels of impairment and in need of more intensive services. In addition, the shift in the mix of patients by diagnosis may have increased the average service needs for the overall hospice population.
According to the most recent National Home and Hospice Care Survey, hospice patients with noncancer diagnoses are somewhat more likely than those with cancer to be functionally impaired and thus may require more services on a regular basis from hospice agencies. Physicians and patients are calling on hospice programs to provide a broader array of palliative services than in the past. Costly treatments such as chemotherapy and radiation—traditionally used for curative purposes—are increasingly used in the hospice setting to manage pain and other symptoms. Furthermore, some new palliative care treatment options, such as the transdermal administration of narcotic pain medication, may offer better symptom control for some patients but often at greater expense. To the extent that hospice providers believe that Medicare payments do not adequately cover their costs, they may have an incentive to limit their acceptance of patients who need more intensive services or limit the types and amount of services they make available. Providers may also respond by choosing not to admit patients who are expected to be more expensive. However, hospice officials we interviewed reported being able to enroll most patients who were referred. With the exception of patients lacking sufficient informal caregiver support, the potential cost of care and payment rates were not generally cited as factors limiting the admission of eligible patients. Data to assess how declining patient stays and changes in palliative care practices affect overall provider costs are not currently available. While certain more expensive services may be provided more frequently, the share of total costs that these services currently represent is unknown. Furthermore, we do not know the extent to which providing more expensive medications or treatments to hospice patients may reduce the need for other services such as nursing visits.
In response to BBA requirements, HCFA has begun collecting hospice cost data to use in evaluating the adequacy of current levels of Medicare reimbursement. Officials anticipate that audited hospice cost data will be available beginning in late 2001. Trends in the use of the Medicare hospice benefit during the 1990s indicate that beneficiaries with all types of terminal illnesses are making use of hospice services in greater numbers every year. In particular, the types of patients selecting hospice have expanded broadly—from mostly beneficiaries with cancer to a nearly even split among those with cancer and those with other terminal conditions. In spite of these trends in use, and the widespread availability of hospice providers, patient advocates and the industry are concerned about the trend toward using fewer days of hospice care. Because many factors influence the use of hospice care, and potential demand is difficult to determine, the extent to which the Medicare hospice benefit may be underutilized is not clear. We provided a draft of this report to HCFA for review. In its comments, HCFA discussed the importance of the hospice benefit to the Medicare program and efforts to ensure that beneficiaries, physicians, and hospice providers understand the benefit's coverage and eligibility criteria. Furthermore, HCFA stated that it does not believe the underutilization concerns of hospice advocates and the industry should be discounted. It noted that enrollment in hospice may not be an option for beneficiaries who lack family support at home or that it may be delayed for patients who wish to continue curative care treatments. HCFA's comments appear in appendix III. The agency made technical comments that we incorporated where appropriate. As we agreed with your office, unless you publicly announce the report's contents earlier, we plan no further distribution of it until 30 days after the date of this letter.
We will then send copies to the Honorable Donna Shalala, Secretary of HHS; the Honorable Min DeParle, Administrator of HCFA; and others who are interested. We will also make copies available to others on request. If you or your staff have any questions, please call me at 202-512-7119 or Rosamond Katz, Assistant Director, at 202-512-7148. Other major contributors were Eric Anderson, Jenny Grover, Wayne Turowski, and Ann White. Our study is an analysis of national hospice enrollment, use patterns, and industry developments from 1992 through 1998. We examined Medicare beneficiary claims data for hospice services to determine hospice use rates for different groups of beneficiaries. We also gathered descriptive information about the hospices that provided the services. We used the Medicare Hospice Standard Analytic File of the Health Care Financing Administration (HCFA) to identify beneficiaries who enrolled in hospice during the study period and to determine their pattern of hospice use. To conduct an analysis of hospice enrollment by year, we assigned beneficiaries to the year of their first hospice claim. We excluded beneficiaries from our analysis if total payment for a beneficiary was less than $75 or was $1 million or more, if a beneficiary at the time of a first hospice claim was younger than 20 or older than 110, or if a beneficiary’s residence was not in one of the 50 states or the District of Columbia. Our analysis of beneficiary use includes information on age at the time of entry into hospice (younger than 65, 65 to 74, 75 to 84, 85 and older), gender, race and ethnicity (white, black, Hispanic, and other), state of residence, enrollment in managed care or fee-for-service Medicare (based on status in the month of death, from the HCFA denominator file), and primary diagnosis (three-digit International Classification of Disease code). 
Analysis of the beneficiary claims data showed that 98 percent of beneficiaries had only one hospice provider, and 97 percent had only one diagnosis code. Therefore, we conducted all further analysis on the basis of the provider and diagnosis listed in the first hospice claim for each beneficiary. We calculated the period of enrollment by summing the number of days covered by each claim, even if they covered discontinuous periods of service, and excluded duplicate claims. Because records of hospice use are not complete for beneficiaries who entered hospice during the later years of our study period, we adjusted the claimed days of hospice service for 1996 to 1998 to better account for beneficiaries with very long stays. Our adjustment factor was calculated from 1992-95 data on the proportion of total beneficiary claim days accounted for within the first 2 calendar years of hospice use. We also described hospice use rates among different groups of Medicare decedents. To calculate the rate of hospice use, we identified the number of Medicare decedents each calendar year who had used hospice before death. We used the HCFA Denominator File to identify all Medicare decedents belonging to each demographic group in our analysis. Because the HCFA Denominator File does not contain information about beneficiary diagnosis, we used Centers for Disease Control and Prevention (CDC) mortality data to determine the number of deaths among people aged 65 and older. We used annual Medicare Provider of Service Files to identify hospice characteristics. These files contain data on provider certification and status, such as facility and service characteristics, provider type, and location. We included hospices that received total Medicare payments of $75 or more during our study period. The provider identification number from the first hospice claim for each beneficiary was matched with the Provider of Service file data available for that provider.
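The per-beneficiary calculation described above can be sketched roughly as follows. This is an illustrative sketch only, not GAO's actual code; the record field names ("beneficiary_id", "claim_id", "days", "payment", "age") and the single flat adjustment factor are assumptions made for the example.

```python
# Illustrative sketch of the claims-based length-of-service calculation
# described in the methodology; NOT GAO's actual code. Field names and
# the flat adjustment factor are assumptions for this example.
from collections import defaultdict

def enrollment_days(claims, adjustment_factor=1.0):
    """Sum covered days per beneficiary, dropping duplicate claims and
    applying the report's exclusion rules, then scale the totals (e.g.,
    to adjust for censored records among 1996-98 entrants)."""
    days = defaultdict(int)
    payments = defaultdict(float)
    first_claim = {}
    seen = set()
    for c in claims:
        key = (c["beneficiary_id"], c["claim_id"])
        if key in seen:                      # exclude duplicate claims
            continue
        seen.add(key)
        days[c["beneficiary_id"]] += c["days"]
        payments[c["beneficiary_id"]] += c["payment"]
        # keep the first claim for beneficiary-level attributes,
        # mirroring the report's use of the first hospice claim
        first_claim.setdefault(c["beneficiary_id"], c)
    result = {}
    for b, total in days.items():
        if not (75 <= payments[b] < 1_000_000):       # exclude <$75 or >=$1M
            continue
        if not (20 <= first_claim[b]["age"] <= 110):  # exclude <20 or >110
            continue
        result[b] = total * adjustment_factor
    return result
```

An adjustment factor derived from the 1992-95 data (the share of total claim days falling within the first 2 calendar years of hospice use) would then be applied to scale up totals for later entrants whose records are incomplete.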
We characterized providers by type of control (for-profit, not-for-profit, or government), affiliation (freestanding, hospital-based, home health agency-based, or skilled nursing facility-based), state, urban or rural location, and number of Medicare beneficiaries receiving services from each hospice each year (small, defined as fewer than 100 beneficiaries; medium, defined as 100 to 499 beneficiaries; and large, defined as 500 or more beneficiaries).
Pursuant to a congressional request, GAO provided information on the Medicare hospice benefit, focusing on: (1) the patterns and trends in hospice use by Medicare beneficiaries; (2) factors that affect the use of the hospice benefit; and (3) the availability of hospice providers to serve the needs of Medicare beneficiaries. GAO noted that: (1) the number of Medicare beneficiaries choosing hospice services has increased substantially; (2) in 1998, nearly 360,000 Medicare beneficiaries enrolled in a hospice program, more than twice the number who elected hospice in 1992; (3) of Medicare beneficiaries who died in 1998, about one in five used the hospice benefit, but use varies considerably across the states; (4) although cancer patients account for more than half of Medicare hospice patients, growth in use has been particularly strong among individuals with other common diagnoses such as heart disease, lung disease, stroke, and Alzheimer's disease; (5) although more beneficiaries are choosing hospice, many are doing so closer to the time of death; (6) the average period of hospice use declined from 74 days in 1992 to 59 days in 1998; (7) half of Medicare hospice users now receive care for 19 or fewer days, and care for 1 week or less is common; (8) many factors influence the use of the Medicare hospice benefit; (9) decisions about whether and when to use hospice depend on physician preferences and practices, patient choice and circumstances, and public and professional awareness of the benefit; (10) along with these factors, increases in federal scrutiny of compliance with program eligibility requirements may have contributed to a decline in the average number of days of hospice care that beneficiaries use; (11) the growth in the number of Medicare hospice providers in both urban and rural areas and in almost every state suggests that hospice services are more widely available to program beneficiaries than in the past; (12) between 1992 and 1999, the number of 
hospices participating in Medicare increased 82 percent, with large providers and those in the for-profit sector accounting for a greater proportion of the services delivered; (13) at the same time, hospice industry officials report cost pressures from declining patient enrollment periods and increased use of more expensive forms of palliative care; (14) because reliable data on provider costs are not available, however, the effect of these reported cost pressures on the overall financial condition of hospice providers is uncertain; and (15) as required by the Balanced Budget Act of 1997, the Health Care Financing Administration began collecting information in 1999 from hospice providers about their costs to allow a reevaluation of the Medicare hospice payment rate.
In recent years, DOD has been undergoing a transformation that has been described as the most comprehensive restructuring of U.S. military forces overseas since the end of the Korean War. The realignment is to improve the U.S. military’s flexibility to address conventional and terrorist threats worldwide. As part of this restructuring, DOD created new bases in Central Asia and Eastern Europe, downsized the U.S. presence in Germany, and realigned forces in South Korea and Japan. In 2004, the United States and Japan began a series of sustained security consultations aimed at strengthening the U.S.-Japan security alliance to better address the rapidly changing global security environment. The resulting U.S.-Japan Defense Policy Review Initiative established a framework for the future of the U.S. force structure in Japan and is to facilitate a continuing presence for U.S. forces in the Pacific theater by relocating units to other areas, including Guam. As a result of this and other DOD realignments planned on Guam, the total military and related infrastructure buildup is estimated to increase Guam’s current population of 171,000 by an estimated 25,000 active duty military personnel and dependents. The population could swell further because these estimates do not include DOD civilians, contractors, or transient personnel from a Navy aircraft carrier that is planned to conduct periodic visits to Guam in the future. The total cost of all services’ realignments on Guam is estimated to be more than $13 billion, although additional costs are anticipated for other DOD activities and the local Guam community. Realignment costs for the Marine Corps move from Okinawa are to be shared by the United States and Japan. DOD uses military construction appropriations to plan, design, construct, alter, and improve military facilities worldwide. 
The military construction budget submission for fiscal year 2009 includes approximately $24.4 billion for military construction and family housing, of which nearly $1.1 billion (4.7 percent) is designated for specific overseas locations. Most of these funds are to enhance and support enduring installations, rather than for new or emerging requirements outside existing basing structures. In 2003, the Senate Appropriations Committee expressed concern about the use of military construction budget authority for projects at bases that may become obsolete because of force realignments. Consequently, in Senate Report 108-82, the Senate Appropriations Committee directed DOD to prepare detailed, comprehensive master plans for the changing infrastructure requirements at U.S. military facilities in each of its overseas regional commands. According to the Senate report, at a minimum, the plans were to identify precise facility requirements, the status of properties being returned to host nations, funding requirements, and the respective cost-sharing responsibilities of the United States and the host nations. The Senate report also directed DOD to provide a report to congressional defense committees on the plans’ status and implementation with each yearly military construction budget request. The Senate report directed us to provide the congressional defense committees an annual assessment of the plans. Subsequently, the conference report accompanying the fiscal year 2004 military construction appropriation bill directed that DOD update its overseas master plans annually through fiscal year 2009. The Under Secretary of Defense for Acquisition, Technology and Logistics responded to these congressional reporting requirements and assigned the overseas regional combatant commands responsibility for preparing comprehensive master plans for their respective areas of responsibility. U.S. Pacific Command is responsible for DOD activities in East Asia and South Asia; U.S. 
European Command is responsible for DOD activities in Eastern and Western Europe; and U.S. Central Command is responsible for DOD activities in the Middle East and Central Asia. In February 2007, the President directed the Secretary of Defense to establish a new geographic combatant command to consolidate the responsibility for DOD activities in Africa that have been shared by U.S. Pacific Command, U.S. European Command, and U.S. Central Command (see fig. 1). U.S. Africa Command was officially established on October 1, 2007, with a goal to reach full operational capability as a separate, independent geographic combatant command by September 30, 2008. DOD officials said that U.S. Africa Command will issue a plan for its area of responsibility next year. In 2004, the U.S. Secretaries of State and Defense and the Japanese Minister of Foreign Affairs and Minister of State for Defense began a series of sustained security consultations aimed at strengthening the U.S.-Japan security alliance and addressing the changing global security environment. The resulting U.S.-Japan Defense Policy Review Initiative established a framework for the future of the U.S. force structure in Japan designed to reduce the U.S. military’s burden on Japanese communities and create a continuing presence for U.S. forces in the Pacific theater. The initiative’s goal of moving about 8,000 Marines and 9,000 dependents from Okinawa to Guam by 2014 is one of several current proposals to build up military forces and infrastructure on Guam. In addition to the initiative, the Navy plans to enhance its infrastructure, logistics capabilities, and waterfront facilities to support transient nuclear aircraft carriers, combat logistics force ships, submarines, surface combatants, and high-speed transport ships at the Naval Base Guam. 
The Air Force plans to develop a global intelligence, surveillance, and reconnaissance strike hub at Andersen Air Force Base in Guam by hosting various types of aircraft, such as fighters, bombers, tankers, and Global Hawk systems, on a permanent and rotational basis. The Army plans to place a ballistic missile defense task force on Guam. U.S. Pacific Command was responsible for the initial planning for the movement of forces to Guam. In August 2006, OSD directed the Navy to establish the Joint Guam Program Office to facilitate, manage, and execute requirements associated with the rebasing of U.S. assets from Okinawa, Japan, to Guam. Specifically, the office was tasked to lead the coordinated planning efforts of all of DOD’s components and other stakeholders to consolidate, optimize, and integrate the existing military infrastructure on Guam. In addition, the office is to integrate the operational support requirements; develop, program, and synchronize the services’ respective realignment budgets; oversee military construction; and coordinate government and business activities. The office is also expected to work closely with Congress, U.S. agencies, the government of Guam, and the government of Japan to manage this effort and develop a master plan. As initiatives for expanding the U.S. military presence on Guam began to emerge, the Senate Appropriations Committee noted the ambitiousness of the military construction program and the need for a well-developed master plan to efficiently use the available land and infrastructure. The Senate report accompanying the fiscal year 2007 military construction appropriation bill directed DOD to submit a master plan for the military buildup in Guam by December 29, 2006. The Senate report also directed us to review DOD’s master planning effort for Guam as part of our annual review of DOD’s overseas master plans. 
The conference report accompanying the fiscal year 2008 military construction appropriation bill extended the due date for the Guam master plan to September 15, 2008. We previously reported that while DOD’s overseas master plans generally exceeded the reporting requirements established by Congress, opportunities existed for the plans to provide more complete, clear, and consistent information and to present a more definitive picture of future requirements. In our 2007 report on DOD’s overseas master plans, we suggested that Congress consider requiring the Secretary of Defense to ensure that the overseas master plans include information on residual value compensation and training limitations for U.S. Pacific Command, which are discussed later in this report. We also suggested that Congress consider requiring the Secretary of Defense to report periodically to the defense committees on the status of the department’s planning efforts for Guam to help ensure the best application of federal funds and leveraging of options for supporting the military buildup until DOD finalizes a comprehensive master plan. In our May 2008 testimony on the Guam military buildup master planning effort, we reported that while DOD had established a framework for the military buildup on Guam, many key decisions remain and both DOD and the government of Guam faced significant challenges. We also reported that Guam’s efforts to address infrastructure challenges caused by the buildup were in their initial stages and that existing uncertainties contributed to the difficulties in developing precise plans. The fiscal year 2009 master plans generally reflect recent changes in the U.S. overseas defense basing strategies and requirements and current challenges that DOD faces in implementation. The plans also reflect DOD’s responses to the recommendations we made in our previous reports except that the U.S. 
Pacific Command plan does not provide the status of the Air Force’s training challenges in South Korea, despite our prior recommendation that it should describe the challenges and their potential effects on infrastructure and funding requirements. DOD officials said that since last year South Korea and the U.S. Air Force have taken steps to address these training challenges. In addition, DOD has submitted the plans to Congress several months after the annual budget submissions even though the Senate and conference reports accompanying the fiscal year 2004 military construction appropriation bill directed DOD to provide updates of the master plans with its military construction budget submissions. Without timely access to the plans, the congressional defense committees may not have the information needed at the appropriate time to prepare the annual defense and military construction legislation and to carry out their oversight responsibilities. The fiscal year 2009 master plans incorporated recent changes associated with the continuing evolution of U.S. overseas basing strategies and requirements. Generally, major force structure realignments that were discussed in the fiscal year 2009 master plans had already been mentioned last year. However, for fiscal year 2009, several changes identified in the overseas master plans included updated information involving realignment initiatives in South Korea and Japan, DOD’s efforts to establish missile defense sites in the Czech Republic and Poland, and the ongoing development of U.S. Africa Command. The U.S. Pacific Command plan discussed the progress of realignment initiatives, which will relocate military personnel and facilities in Japan and South Korea. 
Specifically, the command reported that the U.S.-Japan Defense Policy Review Initiative has served as an effective framework to manage alliance transformation and realignments in Japan and that planning and execution efforts are ongoing to achieve one of the largest changes in recent history to U.S. force posture in the Pacific. Also, as part of the initiative, the command described the importance of relocating 8,000 Marines from Okinawa to Guam and of consolidating the remaining U.S. Marine Corps presence in Okinawa to reduce the impact on local communities. It also included information on U.S. Forces Japan's efforts to return to the government of Japan U.S. facilities and more than 14,000 acres of land in Japan and Okinawa. Also, U.S. Pacific Command updated the status of the U.S.-South Korea Land Partnership Plan and the Yongsan Relocation Plan, including its efforts to reduce major U.S. installations in South Korea from 41 to 10 (a 76 percent reduction). The command also provided information regarding almost 3,000 acres of land acquisitions, including the expansion of Army Garrison Humphreys (formerly known as Camp Humphreys) and other sites. The U.S. European Command plan updated the network of forward operating sites and cooperative security locations in Eastern Europe. For example, the plan provided details on the mission, planned capabilities, equipment and aircraft, population, and facility requirements for Novo Selo Training Area in Bulgaria and Mihail Kogalniceanu Air Base in Romania. It also described recent efforts to proceed with formal negotiations with the governments of Poland and the Czech Republic on establishing missile defense sites and facility requirements to support this effort. For example, it identified over $284 million in facility requirements to support the ballistic missile defense program in the Czech Republic. U.S. European Command also explained the establishment of U.S.
Africa Command and that its future mission is to conduct military-to-military programs, military-sponsored activities, and other operations. The U.S. Central Command plan reflected a long-term planning vision for the development of required infrastructure in the region to achieve its missions. The command also reported a need for an increase in both U.S. military construction and host nation support funding. For example, the command identified a goal of $1.7 billion in host nation funding, which it considered reasonable since the infrastructure may also be used by the host nation. Also, the command's plan provides detailed descriptions of each forward operating site by providing information on its mission (such as providing logistical support), the units it could host, and its role in the region (such as supporting the war against terrorism or strengthening capabilities for rapid and flexible response in the Central Asian states), as well as identifying the requirements for equipment and facilities at the site. This year's master plans discuss a number of challenges, such as uncertainties with host nation relations and environmental concerns, which DOD faces in the implementation of the plans. They also provide more detailed descriptions of these challenges than prior years' plans. All of the regional commands describe to varying degrees the status of recent negotiations and agreements with host nations in their fiscal year 2009 master plans. In our review of the overseas master plans in 2005, we found that none of the commands fully explained (1) the status of host nation agreements or (2) the challenges to finalizing them, and we recommended that the commands briefly explain the status of negotiations with host nations to provide more complete and clearer plans. These agreements depend largely on the political environment and economic conditions in host nations and can affect the extent of host nation support (access to facilities or funding) to U.S. forces.
Accordingly, the resulting agreements may increase or decrease U.S.-funded costs for future infrastructure changes. For example, this year: The U.S. Pacific Command plan updates information on the results of the U.S.-Japan Defense Policy Review Initiative. The plan describes the planned arrival of the USS George Washington, a nuclear aircraft carrier, at Naval Base Yokosuka to replace the USS Kitty Hawk, a conventional aircraft carrier. The plan also describes how the funding for the Japanese Facilities Improvement Program, historically the source of major construction on U.S. facilities in Japan, has been decreasing. For example, the command noted that the funding for this program has decreased from an estimated $1 billion in 1993 to $242 million. U.S. Forces Japan anticipates that the government of Japan will continue to reduce these funds because of Japan’s commitment to provide several other forms of host nation support (i.e., utilities and Japanese labor force) and funding for the U.S.-Japan Defense Policy Review Initiative under which the Marine Corps forces are moving from Okinawa to Guam. Several DOD officials believe that these financial commitments and other constraints may result in U.S. facilities in Japan receiving less host nation support, which in turn would require more financial support from the U.S. government than in the past. In addition, the U.S. Pacific Command plan provided details on current realignment efforts regarding the delayed move from Yongsan Army Garrison in Seoul to Army Garrison Humphreys south of Seoul. The plan stated that the move, originally expected to be completed by December 2008, may not be completed until 2012. According to the plan, early challenges with land procurement and bilateral funding negotiations have now been overcome and the relocation is moving forward. The plan also recognized that any future constraints on host nation funding or U.S.
military construction funds could further delay the Yongsan Relocation Plan. The U.S. European Command plan provided a status update on ongoing realignments in Europe. It also described the rationale for the realignments and listed the facilities returned to the host nations. Specifically, the plan provided information on efforts to return installations in Germany, the United Kingdom, Belgium, Turkey, and several classified locations in Europe. It further reported that while supporting operations in Iraq and Afghanistan, U.S. Army Europe returned nearly 20,000 soldiers and their families, including parts of the 1st Infantry Division, to the United States. U.S. Army Europe has also prepared three military communities in Wuerzburg, Hanau, and Darmstadt for return to the government of Germany. Also, the plan discussed the relocation of U.S. Army Europe headquarters from Heidelberg to Wiesbaden, Germany, to become the 7th Army deployable headquarters by fiscal year 2012. The plan also discussed the Army’s efforts to keep U.S. Army Garrison Baumholder as an enduring base because without it the five other Army main operating bases (i.e., Grafenwoehr/Vilseck/Hohenfels complex, Stuttgart, Ansbach, Kaiserslautern, and Wiesbaden) in Germany would be filled beyond capacity. It also explained how U.S. European Command’s transformation depends on host nation negotiations, political-military considerations, base realignment and closure in the United States, and fiscal limitations. The U.S. Central Command plan discussed efforts to solicit contributions from host nations and to obtain the coordination and support that are needed from DOD, the Department of State, and host nations. It discussed the challenges of ongoing operations in Iraq and Afghanistan and the command’s intention to sustain long-term access to locations across its area of responsibility.
The plan described how ongoing operations in Iraq and Afghanistan have increased the basing footprint by using contingency construction funding, although the command expects to work with DOD and Congress to transition from using contingency funding to support its sites. For the future, the command will focus on transitioning from current contingency operations to developing plans for a more fixed posture, in terms of forces and infrastructure. Most of the commands addressed the extent of their environmental challenges in this year’s master plans. In contrast, during our review of the overseas master plans in 2005, none of the commands identified environmental remediation and restoration issues. This year, U.S. Pacific Command provided information on the removal of underground storage tanks with host nation funding on U.S. installations in various locations in South Korea. Also, U.S. Forces Korea identified one base that was closed for which environmental information had been exchanged; however, the command was still in the process of returning the base to the government of South Korea. This year, U.S. European Command included information on the progress of the environmental cleanup of contaminated sites at Rhein Main Air Base, Germany. The command reported that some sites had been cleaned and others needed further investigation, with all investigations expected to be completed by the end of 2012 at the earliest. According to a command official, U.S. Central Command did not report any environmental issues because there were none in its area of responsibility. Over the years, OSD has modified its guidance for preparing the overseas master plans in an effort to address our prior recommendations related to the following topics: Facility requirements and costs.
This year, all of the regional commands identified their precise facility requirements and costs for fiscal years 2009 through 2014, and reported estimated facility sustainment and recapitalization costs for fiscal year 2009. Base categories. This year, all of the commands categorized their installations into applicable base categories of main operating base, forward operating site, and cooperative security location, which provided users a clearer picture of the infrastructure plans and requirements at these sites. The commands also supplemented the information on base categories with detailed data on the installations’ capabilities, overall mission, population, and types of equipment and facilities located at each site. End state date. This year, all of the commands identified a common strategic end state date, which identifies the last fiscal year of the construction time frame and thus provides users a more complete and clearer basis for tracking progress in meeting the command infrastructure objectives for their areas of responsibility. Host nation funding levels. This year, all of the commands reported host nation funding levels at the project level for fiscal year 2009 and at the aggregate level for fiscal years 2010 through 2014, which provided users a better basis to determine the extent to which U.S. funding is needed for facility requirements. Effects of other defense activities. This year, all of the commands described the effects of other defense activities on implementation of their master plans. For example, both U.S. European Command and U.S. Central Command described how the development of U.S. Africa Command would affect their commands and the increased need to coordinate efforts in the future. Until this year, the overseas master plans have not discussed residual value even though we have recommended that they should. 
In response to this recommendation, OSD and command officials stated that residual value could not be readily predicted and therefore should not be assumed in the master plans. These officials also reported that residual value is based on the reuse of property being turned over to the host nation, which is limited for most categories of military facilities and is often reduced by actual or anticipated environmental remediation costs. However, we have always maintained that since these issues vary by host nation and may not be clear to all users of the plans, OSD should require commands, at a minimum, to explain the issues with obtaining residual value in each host nation and report the implications for U.S. funding requirements. This year, to address our prior recommendation, U.S. European Command described the difficult and lengthy process of returning facilities and negotiating their value. The command noted that attempting to forecast residual value would not be prudent fiscal planning because of the uncertainties in receiving residual value, such as the negotiated price to be paid. After we received the U.S. European Command plan, command officials provided data showing that the U.S. government has received approximately $656 million in residual value and payment-in-kind compensation since 1989. Payment-in-kind projects include installation of water, sewer, electrical, and communication lines, and quality of life projects, such as dormitories and neighborhood renovations. While the overseas master plans have continued to evolve and have provided more comprehensive data every year since fiscal year 2006, the U.S. Pacific Command master plan does not describe the challenges the command faces in addressing the U.S. Air Force’s training limitations in South Korea even though we have recommended that it should describe the challenges and their potential effects on infrastructure and funding requirements.
While DOD officials indicated that the Air Force’s training conditions have improved on the Korean peninsula, this information was not provided in the U.S. Pacific Command’s plan. For several years, the government of South Korea has attempted to relocate the Koon-Ni training range, which had served as the primary air-to-ground range for the Seventh Air Force. The air and ground range management of the Koon-Ni training range was transferred to the government of South Korea, which closed the range in August 2005. While there is an agreement with the government of South Korea to enable U.S. forces to train at other ranges, according to senior Air Force and U.S. Forces Korea officials, the other ranges do not provide electronic scoring capabilities necessary to meet the Air Force’s air-to-surface training requirements and there is difficulty in obtaining access to these ranges. In technical comments on a draft of this report, DOD officials said that the South Korean government has increased the U.S. Air Force’s access to air-to-ground training ranges and improved one training site. DOD also noted that newly agreed upon airspace management practices are expected to facilitate more training opportunities for U.S. Air Force pilots in South Korea. However, U.S. Pacific Command did not discuss the progress made in addressing these training challenges in its fiscal year 2009 overseas master plan. Though it omits the training challenges and progress in South Korea, the U.S. Pacific Command plan provides details on the training limitations in Japan. The plan discussed training limitations on carrier landing practice and the need for aircraft from Naval Air Facility Atsugi to train at Iwo Jima, Japan, which is considered a hardship due to the extra distance the aircraft need to fly to Iwo Jima. Currently, the United States and government of Japan are reviewing options that would provide the Naval Air Facility Atsugi access to closer training ranges.
The plan also discusses how noise and land use sensitivities and maneuver area limitations in Okinawa require U.S. forces to deploy to other Pacific locations to supplement their training. It also describes efforts by U.S. Forces Japan and the government of Japan to engage in bilateral discussions to address these training shortfalls and explore solutions. DOD has recently submitted the overseas master plans to Congress several months after the annual budget submissions even though the Senate and conference reports accompanying the fiscal year 2004 military construction appropriation bill directed DOD to provide updates of the master plans with each yearly military construction budget submission. Recently, the Senate report accompanying the fiscal year 2009 military construction appropriations bill expressed concern about DOD’s frequent failure to comply with deadlines for submitting congressionally mandated reports. According to the Senate report, many of these mandated reports are planning documents, intended to demonstrate that DOD is adequately coordinating its many ongoing initiatives, such as the Global Defense Posture moves and the Grow the Force initiative. The Senate report further noted that these mandated reports are necessary to ensure proper congressional oversight and to inform congressional decisions related to DOD’s budget requests. Congressional staff members have stressed to us the importance of DOD providing the defense committees the overseas master plans at the same time as the annual budget submission. The President generally submits the administration’s budget submissions in February of each year. However, DOD provided the defense committees the fiscal year 2007 plans on April 27 and the fiscal year 2008 plans on March 28. This year, DOD submitted the plans to Congress in mid-May, 3 months after the fiscal year 2009 military construction budget submission was provided to Congress.
According to DOD officials, OSD’s most recent efforts to incorporate last-minute changes in basing plans and projects contributed to providing Congress the plans months after the military construction budget submission. In addition, overseas command officials commented that the lengthy review and approval process among the commands and OSD has contributed to the plans’ lateness. In comments on a draft of this report, DOD said that it intends to replace the overseas master plans with annual updates of its global defense posture as the department’s overseas planning report to Congress. Because of continued concern over the possibility of changes to the global defense posture, the Senate report accompanying the fiscal year 2009 military construction appropriation bill extended the requirement for DOD to provide annually updated reports on the status of its global basing initiative to the Committees on Appropriations of both Houses of Congress. These global basing reports are to be submitted with the administration’s budget submissions each year through fiscal year 2014 and should include, at a minimum, an overview of the current overseas basing strategy and an explanation of any changes to the strategy; the status of host nation negotiations; the cost to date of implementing the military construction elements of the strategy; an updated estimate of the cost to complete the construction program; and an updated timeline for implementing the strategy. The Senate report further noted that the timely filing of these reports is essential to the ability of the committee to exercise its oversight responsibilities, and it is therefore important that DOD adhere to the schedule and provide these reports at the same time as the annual budget submission. 
Since the department will continue to report on its overseas planning to Congress, DOD has an opportunity to reexamine its timeline for producing these reports and to submit them to Congress with the administration’s annual budget submission, giving Congress adequate time for review. Without access to these reports on a timely basis, congressional committees may not have the information needed at the appropriate time to prepare the annual defense and military construction legislation and to carry out their oversight responsibilities for DOD’s global realignment of U.S. forces and installations overseas. DOD has established various planning and implementation documents that serve as a framework to guide the military realignment and buildup on Guam. However, the department has not issued a comprehensive master plan for the buildup; the plan was initially due in December 2006, a deadline Congress later extended to September 2008. While the Joint Guam Program Office is coordinating the development of a working-level plan for DOD that is to be submitted to Congress by the 2008 deadline, this is a onetime requirement, and DOD officials said that this plan will be a snapshot of the status of the planning process at the time of its completion and will not be considered a comprehensive master plan for several reasons. First, the results of the environmental impact statement and resulting record of decision on the proposed military buildup, which are expected to be completed by January 2010, will influence many key decisions about the military infrastructure development on Guam. Also, Joint Guam Program Office officials estimate that the office could complete a comprehensive master plan for Guam within 90 days once these documents are completed. Second, plans for the detailed force composition of units relocating to Guam, associated facility requirements, and implications for other services’ realignments on Guam continue to be refined.
Third, additional time is needed to fully address the challenges related to funding uncertainties, operational requirements, and Guam’s economic and infrastructure requirements. DOD has established various planning and implementation documents that serve as a framework to guide the military realignment and buildup on Guam. Originally, the Marine Corps realignment was discussed in the U.S.-Japan Defense Policy Review Initiative, which established the framework for the future of the U.S. force structure in Japan. The Japan Ministry of Defense reported that based on bilateral meetings in 2005 and 2006, the government of Japan had decided to support the United States in its development of necessary facilities and infrastructure, including headquarters buildings, barracks, and family housing, to hasten the process of moving Marine Corps forces from Okinawa to Guam. In July 2006, U.S. Pacific Command developed the Guam Integrated Military Development Plan to provide an overview of the projected military population and infrastructure requirements. The plan is based on a notional force structure that was used to generate land and facility requirements for basing, operations, logistics, training, and quality of life involving the Marine Corps, Army, Air Force, Navy, and Special Operations Forces in Guam. However, this plan is not considered a master plan for the military buildup and provides limited information on the expected effects of the military buildup on the local community and off-base infrastructure. The Joint Guam Program Office has completed its first phase of the Guam planning process and developed basic facility requirements with general cost estimates, mapping concepts, and land use plans with preferred alternatives.
Through an analysis of available land on the island and DOD preliminary operational requirements, the joint office has identified alternative sites for the Marine Corps main encampment, family housing, and aviation operations and training and for the Navy transient aircraft carrier pier. However, the office has not identified its preferred sites for the ballistic missile defense task force and firing and nonfiring training ranges. According to Joint Guam Program Office officials, the second phase of planning is in progress and will include more details, including more specific information on the placement of buildings, roads, training facilities, and utilities systems. The Joint Guam Program Office is coordinating the multi-service development of a working-level plan for DOD that is expected to be submitted to congressional staff in September 2008. However, this is a onetime requirement, and DOD officials said that this working-level plan will not be considered a final comprehensive master plan. According to Joint Guam Program Office officials, the working-level plan will be a snapshot of the status of the planning process at the time of its completion. It is being developed to provide DOD components with an opportunity to review and provide input. Moreover, the plan will address the realignment of Marine Corps forces in the context of other DOD-proposed actions on Guam, including the Navy’s plan to enhance its infrastructure, logistics capabilities, and waterfront facilities and the Army’s plan to place a ballistic missile defense task force on Guam. Before the Joint Guam Program Office can finalize its Guam master plan and key decisions, it will need to complete the environmental impact statement and the resulting record of decision required by the National Environmental Policy Act of 1969.
DOD officials said that the results of these documents will influence many key decisions on the exact location, size, and makeup of the military infrastructure development on Guam. However, according to these officials, the environmental impact statement and record of decision are not expected to be completed until December 2009 and January 2010, respectively. Joint Guam Program Office officials stated that development of a comprehensive master plan for the military buildup on Guam depended on the completion date of the record of decision and estimated that the office could complete a master plan within 90 days once the record of decision is finalized. On March 7, 2007, the Navy issued a public notice of intent to prepare an environmental impact statement pursuant to the requirements of the National Environmental Policy Act of 1969, as implemented by the Council on Environmental Quality Regulations, and Executive Order 12114. The notice of intent in the Federal Register states that the environmental impact statement will:

Examine the potential environmental effects associated with relocating Marine Corps command, air, ground, and logistics units (which comprise approximately 8,000 Marines and their estimated 9,000 dependents) from Okinawa to Guam. The environmental impact statement will examine potential effects from activities associated with Marine Corps units’ relocation, including operations, training, and infrastructure changes.

Examine the Navy’s plan to enhance the infrastructure, logistic capabilities, and pier/waterfront facilities to support transient nuclear aircraft carrier berthing at Naval Base Guam. The environmental impact statement will examine potential effects of the waterfront improvements associated with the proposed transient berthing.

Evaluate placing a ballistic missile defense task force (approximately 630 soldiers and their estimated 950 dependents) in Guam.
The environmental impact statement will examine potential effects from activities associated with the task force, including operations, training, and needed infrastructure changes. Under the National Environmental Policy Act of 1969 and the regulations established by the Council on Environmental Quality, an environmental impact statement must include a purpose and need statement, a description of all reasonable project alternatives and their environmental effects (including a “no action” alternative), a description of the environment of the area to be affected or created by the alternatives being considered, and an analysis of the environmental impacts of the proposed action and each alternative. Further, accurate scientific analysis, expert agency comments, and public scrutiny are essential to implementing the National Environmental Policy Act of 1969. For example, federal agencies such as DOD are required to ensure the professional integrity, including scientific integrity, of the discussions and analyses contained in the environmental impact statement. Additionally, after preparing a draft environmental impact statement, federal agencies such as DOD are required to obtain the comments of any federal agency that has jurisdiction by law or certain special expertise and request the comments of appropriate state and local agencies, Native American tribes, and any agency that has requested that it receive such statements. Following the final environmental impact statement, DOD will prepare a record of decision that will state what the decision is for the proposed military buildup on Guam; identify alternatives considered and specify those that are environmentally preferable; state whether all practicable mitigation measures were adopted, and if not, explain why; and commit to a monitoring and enforcement program to ensure implementation of mitigation measures. 
Until an agency issues a final environmental impact statement and record of decision, it generally may not take any action concerning the proposal that would either have adverse environmental effects or limit the choice of reasonable alternatives. DOD officials stated that performing these alternative site analyses and cumulative effects analyses may delay the completion of a comprehensive master plan and affect the construction schedule of the required military facilities and infrastructure. DOD will submit its fiscal year 2010 budget request to Congress for the first phase of military construction projects prior to the completion of the environmental impact statement. Thus, DOD may be asking Congress to fund the military construction projects without the benefit of a completed environmental impact statement or a final decision on the full extent of its facility and funding requirements. DOD officials said that this practice is consistent with the department’s normal planning, programming, and budgeting procedures routinely used for large-scale construction projects. In such cases, construction projects are not awarded and funds are not expended until after the record of decision is completed. Joint Guam Program Office officials told us that immediately after the environmental impact statement and record of decision are completed, the department will commence construction of facilities in efforts to meet the 2014 goal of moving Marines and their dependents from Okinawa to Guam. However, some DOD and government of Guam officials believe that this is an ambitious schedule considering the possibility that the environmental impact statement could be delayed, the complexities of moving thousands of Marines and dependents from Okinawa to Guam, and the need to obtain funding from the United States and Japan to support military construction projects. 
Although the U.S.-Japan Defense Policy Review Initiative identifies Marine Corps units to move to Guam, plans for the detailed force composition of units relocating to Guam, associated facility requirements, and implications for other services’ realignments on Guam continue to be refined. The U.S.-Japan realignment roadmap states that approximately 8,000 Marines and their dependents will relocate to Guam. These units include the Third Marine Expeditionary Force’s command element and its major subordinate command headquarters: the Third Marine Division Headquarters, Third Marine Logistics Group Headquarters, 1st Marine Air Wing Headquarters, and 12th Marine Regiment Headquarters. The Marine Corps forces remaining on Okinawa will consist of Marine Air-Ground Task Force elements. Marine Corps officials said that the Corps was reviewing its Pacific force posture and associated requirements for training operations on Guam in light of DOD’s plan to increase the number of Marines under the Grow the Force initiative. At this time, no decisions have been made on whether to deploy additional forces to Guam under this initiative. If such a decision is made, the government of Japan would have no commitment to support such additional forces on Guam. The type of missions to be supported from Guam is a key factor in the planning for infrastructure capabilities. The operational, housing, utilities, and installation support facilities needed on Guam depend on the type, size, frequency, and number of units; units may be permanent, rotational, or transient. Desired capabilities and force structure define the training and facility requirements, such as the number and size of airfield facilities, ranges, family housing units, barracks, and schools and the capacity of the installation support facilities needed to support operations and the military population. 
Accordingly, Joint Guam Program Office officials said that the master plan they were initiating will reflect efforts to build “flexible” infrastructure, such as site preparation and utilities, that can support the range of forces that may operate on Guam. DOD faces several significant challenges associated with the military buildup on Guam, including addressing funding and operational challenges and community and infrastructure impacts, which could affect the development and implementation of its planning efforts. First, DOD has not identified all funding requirements and may encounter difficulties in obtaining funding given competing priorities within the department. Second, DOD officials need to address the operational and training limitations on Guam, such as for sealift and airlift capabilities, and training requirements for thousands of Marines. Third, the increase in military personnel and their dependents on Guam and the large number of construction workers needed to build the required military facilities will create challenges for Guam’s community and civilian infrastructure. DOD officials have yet to fully identify the funding requirements to support the military buildup on Guam. The military services’ realignments on Guam are estimated to cost over $13 billion; of that, the Marine Corps buildup is estimated to cost $10.3 billion. Additionally, the $13 billion estimate excludes the costs of all other defense organizations that will be needed to support the additional military personnel and dependents on Guam. For example, DOD agencies, including the Defense Logistics Agency and the Defense Commissary Agency, will likely incur additional costs to execute their missions to help support the services’ influx of personnel, missions, and equipment to Guam. Recently, Marine Forces Pacific officials estimated that the Marine Corps realignment on Guam alone will exceed $15 billion, which is significantly higher than the original $10.3 billion estimate.
These additional operational costs include the cost of high-speed vessels (procurement and maintenance) to move Marines to and from Guam; training-related costs in the Commonwealth of the Northern Mariana Islands; relocation costs for personnel, equipment, and material to Guam; costs of facility furnishings, such as furniture and office equipment; and real estate costs if additional land is required in Guam or the Commonwealth of the Northern Mariana Islands. These officials have also identified base operational and maintenance costs that will be funded with U.S. appropriations after the move to Guam but are currently reimbursed by the government of Japan through its host nation funding programs such as the Japanese Facilities Improvement Program and special measures agreements that provide support for labor and utility services for Marine Corps bases in Okinawa. In addition, cost estimates for the relocation of forces to Guam do not include all costs associated with the development of several training ranges for the Marine Corps in Guam and the Northern Mariana Islands, estimated to cost $2 billion. Also, the Marine Corps estimates that the strategic lift operating from Guam will cost an additional $88 million annually as compared with operations from Okinawa. Some uncertainties also exist in the cost-sharing arrangement with the government of Japan. The government of Japan is expected to contribute a total of $6.09 billion, of which up to $2.8 billion would be in funds without reimbursement for the construction of operational and support infrastructure, such as barracks and office buildings. The government of Japan is also expected to provide the remainder, another $3.3 billion, in loans and equity investments for installation support infrastructure, such as on-base power and water systems, and military family housing.
Most of this $3.3 billion is planned to be recouped over time by the government of Japan in the form of service charges paid by the Marine Corps and in rents paid by U.S. servicemembers with their overseas housing allowances provided by DOD using funds appropriated by Congress. Also, according to DOD officials, several conditions must be met before the government of Japan contributes some or all of the $6.09 billion to the cost of the Marine Corps move. First, the government of Japan has stipulated that its funds will not be made available until it has reviewed and agreed to specific infrastructure plans for Guam. Second, failure or delay of any initiative outlined in the U.S.-Japan Defense Policy Review Initiative may affect another, because various planning variables need to fall into place in order for the initiatives to move forward. For example, according to DOD, the commencement of facility construction on Guam in fiscal year 2010 depends on the government of Japan showing progress in the construction of the Marine Corps Air Station Futenma replacement facility. Finally, the government of Japan may encounter challenges in funding its share of the Marine Corps move considering Japan’s other national priorities and its commitments associated with funding several other major realignments of U.S. forces in Japan under the U.S.-Japan Defense Policy Review Initiative. DOD also has not fully addressed operational challenges, such as providing appropriate mobility support and training capabilities to meet Marine Corps requirements. According to Marine Forces Pacific officials, the Marine Corps in Guam will depend on strategic military sealift and airlift to reach destinations in Asia that will be farther away than was the case when the units were based in Okinawa. 
For example, in a contingency operation that requires sealift, ships may have to deploy from Sasebo, Japan, or other locations to collect the Marines and their equipment on Guam and then proceed to the area where the contingency is taking place, risking a delayed arrival at potential trouble spots because Guam is farther from these locations than Okinawa. According to Marine Corps officials, amphibious shipping capability and airlift capacity are needed in Guam, which may include expanding existing staging facilities and systems support for both sealift and airlift. Existing training ranges and facilities on Guam are not sufficient to meet the training requirements of the projected Marine Corps force. A DOD analysis of training opportunities in Guam concluded that no ranges on Guam are suitable for the needs of the projected Marine Corps force because they are inadequate in size or unavailable. U.S. Pacific Command is also conducting a training study covering Guam and the Northern Mariana Islands to determine what training options are available in the region. Marine Forces Pacific officials stated that live-fire artillery training, amphibious landings, and tracked vehicle operations will be challenging because of the limited size of available training areas and environmental concerns on the Northern Mariana Islands. The increase in military presence is expected to have a significant impact on Guam’s community and public infrastructure; however, these potential effects have yet to be fully addressed. This undertaking is estimated to increase the current Guam population of approximately 171,000 by an estimated 25,000 active duty military personnel and dependents (or 14.6 percent) to 196,000.
The Guam population could swell further because DOD’s personnel estimates do not include defense civilians and contractors who are also likely to move to Guam to support DOD operations. DOD officials estimate that they will require 500 defense civilians and contractors to support Marine Corps base operations; however, they expect many of these jobs to be filled by military spouses or the local population. This estimate does not include personnel for other service realignments on Guam. DOD and government of Guam officials recognize that the increase in construction due to the military buildup will exceed local capacity and the availability of local workers. For example, DOD officials cite a July 2008 study that estimated Guam’s annual construction capacity at approximately $1 billion to $1.5 billion, and potentially $2.5 billion with improvements to the port and road networks, compared with the more than $3 billion per year in construction that DOD estimates is needed to meet the planned fiscal year 2014 completion date. In addition, Guam currently faces a shortage of skilled construction workers. Preliminary analysis indicates that 15,000 to 20,000 construction workers will be required to support the projected development on Guam. One estimate is that Guam may be able to meet only 10 to 15 percent of this labor requirement locally. Nearby countries may have workers willing to come to Guam to take jobs constructing needed facilities, but these workers will have to enter the United States on temporary nonagricultural worker visas. Joint Guam Program Office officials cite recently passed legislation that will increase the cap on temporary workers in Guam from 2009 through 2014 as addressing many of their concerns about these visas.
At the same time, the government of Guam reports that the influx of foreign workers would put a strain on local emergency care services, medical facilities, public utilities, transportation networks, and the availability of temporary housing. In addition, as we recently testified, DOD and government of Guam officials recognize that the island’s infrastructure is inadequate to meet the increased demand due to the military buildup. For example:
• Guam’s commercial port has capacity constraints with pier berthing space, crane operations, and container storage locations.
• Guam’s two major highways are in poor condition, and when ordnance (ammunition and explosives) is unloaded from ships for Andersen Air Force Base now and for the Marine Corps in the future, it must be transported on one of these major roads, which run through highly populated areas.
• Guam’s electrical system—the sole power provider on the island—is not reliable and has transmission problems resulting in brownouts and voltage and frequency fluctuations. The system may not be adequate to deliver the additional energy required by the military buildup.
• Guam’s water and wastewater treatment systems are near capacity and have a history of failure due to aged and deteriorated distribution lines. The military buildup may increase demand by at least 25 percent.
• Guam’s solid waste facilities face capacity and environmental challenges, as they have reached the end of their useful life. Currently, the solid waste landfills in Guam have a number of unresolved issues related to discharge of pollutants and are near capacity.
Government of Guam officials stated that Guam will require significant funding to address these anticipated public infrastructure challenges; however, they have not identified the resources necessary to support the buildup.
In a recent congressional hearing, the Governor of Guam testified that the government of Guam will need $6.1 billion to address infrastructure upgrades, such as port expansion, road enhancements, power and water upgrades, education, and public health improvements. These costs are separate from and in addition to DOD’s cost estimates for the military realignments on Guam. The evolution of U.S. overseas defense basing strategies and infrastructure requirements continues, as reflected in the fiscal year 2009 overseas master plans, and many efforts to consolidate, realign, and shift the U.S. military presence globally are still under way and are years from completion. For the last 4 years, the overseas master plans have been an important means of keeping Congress informed of the challenges DOD faces and the costs associated with such efforts. However, DOD has submitted the plans to the congressional defense committees months after the annual budget submissions even though the congressional reporting requirement directs that updates of the plans be provided with each yearly budget submission. Recently, a congressional committee report expressed concern about the department’s frequent failure to comply with deadlines for submitting mandated reports and reiterated the importance of receiving the reports in a timely manner. The committee considered the timely filing of the department’s mandated reports essential to its need for current information when making decisions related to DOD’s budget requests and to its ability to effectively exercise its oversight responsibilities. Without timely receipt of the mandated reports, Congress is likely to lack the up-to-date information needed for making funding decisions and carrying out its oversight responsibilities.
Since DOD intends to replace the overseas master plans with annual updates of its global defense posture as DOD’s overseas planning report to Congress, the department has an opportunity to reexamine its timeline for producing these reports so that it can issue them with the administration’s annual budget submission and provide Congress with adequate time for review. With respect to the military buildup on Guam, it is likely to be 2010 or later before DOD is able to complete a comprehensive master plan for the buildup. A comprehensive master plan is important for Congress, as it helps to ensure that Congress has a complete picture of facility requirements and associated costs in order to make appropriate funding decisions, and it assists DOD, federal departments and agencies, the government of Guam, and other organizations in addressing the challenges associated with the military buildup. At the same time, it is reasonable to expect that until DOD has the results of the environmental impact statement and record of decision required by the National Environmental Policy Act of 1969, it will not be able to finalize a comprehensive master plan for the reasons that we stated in our report. Meanwhile, the Joint Guam Program Office is coordinating the multi-service development of a working-level plan for DOD that is to be submitted to Congress in September 2008. However, no requirement exists to report periodically on the status of DOD’s planning efforts after that date. In our 2007 report, we suggested that Congress consider requiring the Secretary of Defense to report periodically to the defense committees on the status of the department’s planning efforts for Guam to help ensure the best application of federal funds and leveraging of options for supporting the military buildup until DOD finalizes a comprehensive master plan.
Because of the uncertainty in DOD’s plans for the military buildup, we continue to believe that this approach has merit and that the defense committees would find annual updates of the Joint Guam Program Office’s working-level plan for Guam useful to inform congressional decisions and ensure proper congressional oversight from September 2008 until the date on which the office completes its comprehensive master plan, currently expected no sooner than 2010. To inform congressional decisions and ensure proper congressional oversight, we recommend that the Secretary of Defense take the following two actions:
• Direct the Under Secretary of Defense for Acquisition, Technology and Logistics to initiate a process of developing global defense posture updates earlier each year so that DOD can provide the congressional defense committees the overseas planning report with the administration’s annual budget submission.
• Direct the Executive Director of the Joint Guam Program Office to provide the congressional defense committees with annual updates of the Guam working-level plan until a comprehensive master plan is finalized and submitted to Congress.
In written comments on a draft of this report, DOD partially agreed with our recommendation to initiate a process of developing future overseas master plans earlier each year so that DOD can provide them to the congressional defense committees with the administration’s annual budget submission, and agreed with our recommendation to provide the congressional defense committees with annual updates of the Guam working-level plan until a comprehensive master plan is finalized and submitted to Congress. While DOD partially agreed with the first recommendation, it also stated that it plans to replace the expired requirements for the overseas master plans with annual updates of its global defense posture as DOD’s overseas planning report to Congress.
DOD further commented that the report development process will support submission with the administration’s annual budget request. Since the Senate report accompanying the fiscal year 2009 military construction appropriation bill requires that these updates include data similar to those presented in prior master plans and explains that the timely filing of mandated reports is essential to the ability of the committee to exercise its oversight responsibilities, we believe that this effort to replace the overseas master plans with the global defense posture updates will meet the intent of our original recommendation. Therefore, we revised our recommendation to reflect that DOD plans to replace the master plans with annual updates of its global defense posture as the department’s overseas planning report to Congress. DOD’s comments are reprinted in appendix II. DOD also provided technical comments, which we have incorporated into the report as appropriate. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Commander, U.S. Pacific Command; the Commander, U.S. European Command; the Commander, U.S. Central Command; and the Director, Office of Management and Budget. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO staff members who made key contributions to this report are listed in appendix III.
To determine the extent to which the fiscal year 2009 overseas master plans have addressed changes since the last plans, the Department of Defense’s (DOD) challenges, and our prior recommendations, and to examine their timeliness, we analyzed the overseas master plans and compared them to the reporting requirements in the congressional mandate and the Office of the Secretary of Defense’s (OSD) guidance. We compared and contrasted the fiscal years 2008 and 2009 overseas master plans in order to identify improvements and updated challenges in the plans. We also assessed the quantity and quality of the data describing the base categories, host nation funding levels, facility requirements and costs, environmental remediation issues, and other issues affecting the implementation of the plans. To discuss the reporting requirements, host nation agreements and funding levels, U.S. funding levels and sources, environmental remediation and restoration issues, property returns, residual value, and training requirements, we met with officials from OSD; U.S. Pacific Command; U.S. Army Pacific; U.S. Pacific Fleet; U.S. Marine Forces Pacific; U.S. Pacific Air Forces; U.S. Forces Korea; U.S. Eighth Army; Seventh Air Force; U.S. Naval Forces Korea; U.S. Army Corps of Engineers, Far East District; U.S. Forces Japan; U.S. Army Japan; U.S. Air Forces Japan; U.S. Naval Forces Japan; U.S. Marine Forces Japan; Naval Facilities Engineering Command Far East, Japan; U.S. European Command; U.S. Army Europe; U.S. Naval Forces Europe; U.S. Air Force Europe; U.S. Central Command; and U.S. Special Operations Command. We also analyzed available reports, documents, policies, directives, international agreements, and guidance to keep abreast of ongoing changes in overseas defense basing strategies and requirements. 
To directly observe the condition of facilities and the status of selected construction projects, we visited and toured facilities at Garrison Wiesbaden and Garrison Grafenwoehr, Germany; Camp Schwab, Camp Zama, Yokosuka Naval Base, and Yokota Air Base, Japan; and Yongsan Army Garrison and Garrison Humphreys, South Korea. To determine the status of DOD’s planning efforts for the Guam military buildup, we met with officials from OSD, the Air Force, the Navy, U.S. Pacific Command, and the Joint Guam Program Office. In general, we discussed the current planning framework for the military buildup, the schedule and development of a comprehensive master plan, and the status of the environmental impact study required by the National Environmental Policy Act of 1969. In addition, we met with officials from U.S. Pacific Fleet; U.S. Marine Corps Forces, Pacific; U.S. Marine Forces Japan; Third Marine Expeditionary Forces; U.S. Forces Japan; U.S. Army Pacific; and Pacific Air Forces to discuss the challenges and various factors that can affect U.S. infrastructure requirements and costs associated with the military buildup, to determine if funding requirements to accommodate the buildup have been identified, and to identify operational and training challenges associated with the buildup. We also visited Naval Base Guam; Andersen Air Force Base, Guam; and other military sites in Guam to directly observe the installations and future military construction sites. We analyzed available reports, documents, and international agreements to keep abreast of ongoing activities in Guam pertaining to challenges that may affect DOD’s development and implementation of a comprehensive master plan for the military buildup. To identify the funding and local infrastructure challenges, we met with the Governor and his staff, the Guam Delegate to the U.S.
House of Representatives, and representatives from the Guam legislature, the Mayors’ Council of Guam, the Guam Chamber of Commerce, Guam’s Civilian Military Task Force, and community groups on Guam. We met with U.S. Special Operations Command officials; however, its planning efforts were not specifically required for the overseas master plans in response to the congressional mandates. In addition, we did not include U.S. Southern Command and U.S. Northern Command in our analysis because these commands have significantly fewer facilities overseas than the other regional commands in the Pacific, Europe, and Central Asia. We conducted this performance audit from September 2007 through August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Mark Little, Assistant Director; Nelsie Alcoser; Mae Jones; Kate Lenane; Julia Matta; and Jamilah Moon made major contributions to this report.
Force Structure: Preliminary Observations on the Progress and Challenges Associated with Establishing the U.S. Africa Command. GAO-08-947T. Washington, D.C.: July 15, 2008.
Defense Infrastructure: Overseas Master Plans Are Improving, but DOD Needs to Provide Congress Additional Information about the Military Buildup on Guam. GAO-07-1015. Washington, D.C.: September 12, 2007.
Defense Management: Comprehensive Strategy and Annual Reporting Are Needed to Measure Progress and Costs of DOD’s Global Posture Restructuring. GAO-06-852. Washington, D.C.: September 13, 2006.
DOD’s Overseas Infrastructure Master Plans Continue to Evolve. GAO-06-913R. Washington, D.C.: August 22, 2006.
Opportunities Exist to Improve Comprehensive Master Plans for Changing U.S. Defense Infrastructure Overseas. GAO-05-680R. Washington, D.C.: June 27, 2005.
Defense Infrastructure: Factors Affecting U.S. Infrastructure Costs Overseas and the Development of Comprehensive Master Plans. GAO-04-609NI. Washington, D.C.: July 15, 2004.
Overseas Presence: Issues Involved in Reducing the Impact of the U.S. Military Presence on Okinawa. GAO/NSIAD-98-66. Washington, D.C.: March 2, 1998.
Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008.
Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008.
Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007.
Defense Logistics: Navy Needs to Develop and Implement a Plan to Ensure That Voyage Repairs Are Available to Ships Operating near Guam when Needed. GAO-08-427. Washington, D.C.: May 12, 2008.
Defense Infrastructure: Planning Efforts for the Proposed Military Buildup on Guam Are in Their Initial Stages, with Many Challenges Yet to Be Addressed. GAO-08-722T. Washington, D.C.: May 1, 2008.
Commonwealth of the Northern Mariana Islands: Pending Legislation Would Apply U.S. Immigration Law to the CNMI with a Transition Period. GAO-08-466. Washington, D.C.: March 28, 2008.
U.S. Insular Areas: Economic, Fiscal, and Financial Accountability Challenges. GAO-07-119. Washington, D.C.: December 12, 2006.
U.S. Insular Areas: Multiple Factors Affect Federal Health Care Funding. GAO-06-75. Washington, D.C.: October 14, 2005.
Environmental Cleanup: Better Communication Needed for Dealing with Formerly Used Defense Sites in Guam. GAO-02-423. Washington, D.C.: April 11, 2002.
Compact of Free Association: Negotiations Should Address Aid Effectiveness and Accountability and Migrants’ Impact on U.S. Areas. GAO-02-270T. Washington, D.C.: December 6, 2001.
Foreign Relations: Migration From Micronesian Nations Has Had Significant Impact on Guam, Hawaii, and the Commonwealth of the Northern Mariana Islands. GAO-02-40. Washington, D.C.: October 5, 2001.
U.S. Insular Areas: Application of the U.S. Constitution. GAO/OGC-98-5. Washington, D.C.: November 7, 1997.
Insular Areas Update. GAO/GGD-96-184R. Washington, D.C.: September 13, 1996.
U.S. Insular Areas: Information on Fiscal Relations with the Federal Government. GAO/T-GGD-95-71. Washington, D.C.: January 31, 1995.
U.S. Insular Areas: Development Strategy and Better Coordination Among U.S. Agencies Are Needed. GAO/NSIAD-94-62. Washington, D.C.: February 7, 1994.
The Department of Defense (DOD) continues its efforts to reduce the number of troops permanently stationed overseas and consolidate overseas bases. The Senate and conference reports accompanying the fiscal year 2004 military construction appropriation bill directed DOD to develop and GAO to monitor DOD's overseas master plans and to provide annual assessments. The Senate report accompanying the fiscal year 2007 military construction appropriation bill directed GAO to review DOD's master planning effort for Guam as part of these annual reviews. This report examines (1) the changes and challenges described in the fiscal year 2009 master plans, the extent to which the plans address GAO's prior recommendations, and the plans' timeliness and (2) the status of DOD's master planning efforts for the proposed buildup of military forces and infrastructure on Guam. GAO reviewed the plans and other relevant documents, and visited three overseas combatant commands, various installations, and Guam organizations. While the fiscal year 2009 master plans generally reflect recent changes in U.S. overseas basing strategies and the challenges DOD faces, as well as address GAO's prior recommendations, DOD provided Congress the plans in May 2008, well after the February budget submission, the point at which the Senate and conference reports require DOD to issue the plans. This year's plans contain information on current overseas basing strategies and infrastructure requirements and the challenges that DOD faces in implementing the plans. The plans also generally address GAO's recommendations, except that the U.S. Pacific Command plan does not provide an update of the Air Force's training challenges in South Korea, despite GAO's prior recommendation that it describe the challenges and their potential effects on infrastructure and funding requirements. DOD officials said that since last year the South Korean government and the U.S. Air Force have taken several steps to address these training challenges.
According to DOD officials, efforts to incorporate last-minute changes in basing plans and projects and the lengthy review and approval process have contributed to the fiscal year 2009 plans' lateness. While the congressional requirement for the overseas master plans expired with the fiscal year 2009 plans, DOD said that it intends to provide Congress annual updates of its global defense posture through 2014 and that these updates would replace the master plans as DOD's overseas planning report to Congress. Since DOD will continue to provide annually updated global defense posture reports, it has an opportunity to reexamine its timeline for producing future reports earlier to provide Congress with time for review. DOD has developed a basic framework for the military buildup on Guam but has not issued the congressionally required master plan that was initially due in December 2006, and which Congress later extended to September 2008. The Joint Guam Program Office, which is planning and managing the proposed military buildup, is coordinating the multi-service development of a working-level plan for DOD that is to be submitted to Congress by the 2008 deadline. However, this is a onetime requirement, and DOD officials said that the plan will be a snapshot of the status of the planning process and will not be considered a comprehensive master plan for several reasons. First, while the required environmental impact statement and the resulting record of decision will influence many key decisions about the buildup of military forces and infrastructure on Guam, these documents are not expected to be completed until January 2010. Also, officials of the Joint Guam Program Office said that they expect to complete a comprehensive master plan within 90 days after these required documents are finalized. 
Second, plans for the detailed force composition of units relocating to Guam, associated facility requirements, and implications for other services' realignments on Guam continue to be refined. Third, additional time is needed to fully address the challenges related to funding uncertainties, operational requirements, and Guam's economic and infrastructure requirements. However, without a comprehensive master plan, Congress may have limited data on requirements on which to make informed appropriation decisions and to carry out its oversight responsibilities.
Because of the federal statistical system’s decentralized structure, the collection and issuance of statistical information depends on the effective performance of many separate statistical agencies and programs. The former Chairman of the Senate Committee on Governmental Affairs and the former Chairman of its Subcommittee on Regulation and Government Information asked us to (1) evaluate the performance of four prominent federal statistical agencies using guidelines developed by the National Academy of Sciences (NAS) and (2) provide information on the role of the Office of Management and Budget (OMB) to coordinate and oversee the statistical activities of the agencies that constitute the federal statistical system. The four agencies were the Bureau of the Census and the Bureau of Economic Analysis (BEA) within the Department of Commerce, the Bureau of Labor Statistics (BLS) within the Department of Labor, and the National Center for Health Statistics (NCHS) within the Department of Health and Human Services. The federal statistical system is not a system in the ordinary sense but rather a designation for the numerous government agencies that collect, process, analyze, and use quantitative data. Few federal agencies have data collection as their sole or primary mission, but OMB in its annual report identifies agencies as conducting statistical activities when they devote $500,000 or more of their annual budgets to such activities. If this criterion is used for definition, the agencies in the federal statistical system could change from year to year, although the list is quite stable over time. For fiscal year 1995, 72 agencies met or exceeded the $500,000 budget level. Although the majority of these agencies produce statistical information on a particular subject as a byproduct of their administrative, regulatory, or operating responsibilities, several agencies have the production of statistical information as their principal mission. 
Some federal statistics are used by persons with varying information needs; such statistics are frequently called general-purpose statistics. Other statistics are special purpose in character and deal with one subject matter (e.g., education or transportation); they focus on a particular function of government and are primarily designed to aid program administrators and policymakers. The bulk of these other statistics relate to specific federal programs and are essentially a byproduct of the agencies’ administration or monitoring of these activities. The four agencies whose conformance with the selected NAS guidelines we evaluated are major, well-recognized multipurpose agencies of the federal statistical system. Census tabulates and publishes a wide variety of data about the people and the economy of the nation. These data include the Decennial Census of Population and Housing, the economic and agricultural censuses, and data on U.S. merchandise trade. BLS collects, processes, analyzes, and disseminates data on employment, unemployment, characteristics of employment and employees, and prices and consumer expenditures. BEA is a research-oriented statistical agency that prepares, develops, interprets, and publishes the U.S. economic accounts. BEA integrates large volumes of monthly, quarterly, and annual economic data—ranging from construction spending to retail sales—produced by other government agencies and trade sources to produce a complete and consistent picture of the national economy and its international and regional dimensions. NCHS specializes in health statistics, including vital statistics from marriage, birth, and death certificates. It collects, analyzes, disseminates, and carries out research on the U.S. population’s health status, lifestyles, and exposure to unhealthful influences. 
Although Census, BLS, BEA, and NCHS are responsible for a large portion of the statistics produced by the federal government, they are only 4 of the 72 agencies that constitute the federal statistical system. For example, NCHS is not the only agency that collects health statistics. Within the Department of Health and Human Services, 13 agencies collect health statistics. The largest of these are the National Institutes of Health and NCHS’ parent organization, the Centers for Disease Control and Prevention. OMB’s Statistical Policy Branch is responsible for coordinating the activities of the 72 statistical agencies by reviewing agency budget requests, issuing statistical standards, facilitating interagency working groups, and reviewing agency information requests. Appendix I lists by department the names of the 72 agencies in the federal statistical system that are expected to spend at least $500,000 on statistical activities in fiscal year 1995. Since the earliest days of the United States, statistics have been collected and used to describe various facets of the national economy and population. The Constitution, notably, mandates a decennial census to count the population. Government policy and private decisions depend on the availability of accurate and timely information. In addition, federal, state, and local governments rely on statistical information to administer programs under their jurisdictions. Census, BEA, BLS, and NCHS are responsible for many of the statistics used by policymakers and those who administer federal programs. The statistical activities of these four agencies influence policymakers in their formulation of national policies. For example, statistics are fundamental to the federal government’s efforts to allocate its annual budget. Federal income tax brackets and some benefit payments, for instance, are adjusted to mitigate the effects of inflation. Statistics are also an important part of many presidential messages and reports. 
For example, the annual Economic Report of the President contains extensive statistical appendixes, and many of the policies and programs discussed in the report are based on a statistical foundation provided by the four agencies discussed in this report. In addition, Census’ Decennial Census of Population and Housing is the basis on which representation in Congress is apportioned among the states. The uses of federal statistics extend beyond the government. Decennial census data are used widely by businesses and the media to examine social trends. Much of the news on the business and financial pages of the daily press comes from the release of statistics by BEA, BLS, and Census. Business analysts regularly use statistics of economic conditions when planning investments and operations in their own businesses. Labor organizations and management use statistics on earnings, hours, employment, and prices in their collective bargaining negotiations. BLS’ consumer price index (CPI), which measures the change in the prices of a uniform “market basket” of goods and services, is widely used as the measure for “escalator clauses” in contracts. In employment contracts, for example, such a clause might tie increases in wages and pensions to the CPI to keep employee or retiree earnings in line with inflation. The administration and Congress use statistics produced by these four agencies as a basis for measuring the results of government programs. Some data series are built directly into the administration of programs such as BLS’ inflation and Census’ poverty indexes. For example, if the CPI overstated inflation by as little as 0.2 percentage points annually from 1995 through 1999, an estimated $19.1 billion would be added to the deficit over that 5-year period, according to OMB estimates. In addition, current defense industry contracts amounting to $90 billion include a purchases and sales component that is adjusted by BLS’ producer price index. 
And BEA, BLS, and Census produce local area unemployment, income, and poverty statistics that are important components of formula programs that allocate billions of dollars of federal funds to state and local governments. The statistics that NCHS produces and disseminates offer many indicators of the health of the nation’s population. From a public policy perspective, NCHS data are critical in the government’s monitoring of cost and delivery of health care. The use of these data in research also helps to bring about improvements in the prevention or treatment of diseases. Because data are usually published from each NCHS information system separately, the wide range of NCHS’ data is sometimes not apparent. NCHS’ data systems are used to obtain information from individuals, health care providers, and vital records, such as birth, death, and marriage certificates; the data systems are useful in studying public health. According to OMB, the 72 agencies that had budgets of $500,000 or more for statistical activities requested an estimated total of $2.6 billion in direct funding for statistical activities in fiscal year 1995. Many of these agencies also received reimbursements from other federal agencies, state and local governments, and the private sector to perform requested statistical activities. Of the requested funding for the 72 agencies combined, the 4 agencies’ share of direct funding was about $752 million (29.4 percent). Table 1.1 shows the share that each of the four agencies requested for statistical funding. The Committee on National Statistics (CNSTAT) of NAS developed guidelines that it believed were essential for the operation of federal agencies that conduct statistical activities. CNSTAT is composed of professionals in the statistical field who have no direct relationship with the federal government. Since its founding, CNSTAT has concentrated on reviewing federal statistics on a selective basis. 
It also prepares reports on special studies that are intended to improve the effectiveness of the federal statistical system. Considering the diversity of the agencies that make up the federal statistical system, it is difficult to devise standards against which to measure the agencies’ performance. However, CNSTAT developed guidelines that it believes are essential for the efficient operation of federal agencies that conduct statistical activities. NAS issued a CNSTAT report in 1992 entitled Principles and Practices for a Federal Statistical Agency. CNSTAT prepared this report partially in response to requests for advice from congressional and executive officials proposing the creation of new statistical agencies, such as a Bureau of Environmental Statistics and a Bureau of Transportation Statistics. These officials were interested in CNSTAT’s views on what constitutes an effective federal statistical agency. CNSTAT also prepared the report because it was concerned that federal statistical agencies might sometimes not meet what it considered acceptable professional standards. In the NAS report, CNSTAT outlined guidelines that it believes should be followed by federal statistical agencies. According to NAS, the guidelines contain principles and practices that are statements of “best practices,” rather than legal requirements or scientific rules. The guidelines, however, were intended to be consistent with current laws and statistical theory and practice. In the report, CNSTAT discussed the following three principles it found to be essential for the effective operation of a federal statistical agency. According to these principles, a federal statistical agency should
• be in a position to provide information that is relevant to issues of public policy,
• have a relationship of mutual respect and trust with those who use its data, and
• have a relationship of mutual respect and trust with respondents who provide data and with all data subjects from which it obtains information.
In the report, CNSTAT also discussed the following 11 guidelines it found to be essential for the effective operation of a federal statistical agency. These guidelines are intended as specific applications of the three broad principles. According to these guidelines, a federal statistical agency needs
• a clearly defined and well-accepted mission,
• cooperation with data users by soliciting their views on data quality,
• established procedures for the fair treatment of data providers,
• openness about the data provided to users,
• coordination with other statistical agencies,
• a wide dissemination of data,
• a strong measure of independence,
• commitment to quality and professional standards,
• an active research program,
• professional advancement of staff, and
• caution in conducting nonstatistical activities.

We undertook this review at the request of the former Chairman of the Senate Committee on Governmental Affairs and the former Chairman of its former Subcommittee on Regulation and Government Information. To evaluate the four agencies’ performance, we compared their activities to the seven NAS guidelines for the effective operation of a federal statistical agency that we regarded as the most susceptible to objective assessment. We did not include the other four NAS guidelines that are of a more subjective nature. The original request for this review specified evaluating Census, BLS, and NCHS. With the agreement of the requesters, we added BEA to the review because of its key responsibilities for providing economic data. The requesters also asked us to provide information on OMB’s role in coordinating and overseeing the statistical activities of those agencies that constitute the federal statistical system. Our first objective was to determine to what extent the four statistical agencies followed the seven NAS guidelines that we used for comparison.
Specifically, we examined whether the four agencies (1) had clearly defined and well-accepted missions, (2) cooperated with data users, (3) had established procedures for the fair treatment of data providers, (4) were open about the data provided to users, (5) widely disseminated the data, (6) coordinated with other statistical agencies, and (7) had a strong measure of independence. To understand the context for these guidelines, we interviewed CNSTAT officials to document the procedures they used in preparing and issuing the guidelines. We also interviewed executive branch officials and other knowledgeable experts about the NAS guidelines; reviewed relevant literature, such as other NAS publications and reports about the federal statistical system; and compared the NAS guidelines to comparable international guidelines for statistical agencies. To determine agency compliance with the selected NAS guidelines, we interviewed officials from each of the four agencies and OMB and asked them to provide documents to demonstrate their compliance. These documents included information on missions, activities, and resource history; legal basis for agency organization and operations; data dissemination; cooperation with data users; and coordination/contacts with other governmental organizations and professional societies. In general, our criterion for compliance with a guideline was whether agencies had such documentation. We relied upon interviews and other sources of data to ensure that we adequately understood the context of this documentation. The agencies also provided us with background briefing books, descriptions of statistical programs and publications, agency orders and operational procedures, budget documents, and other documentation. We attended meetings of selected agency advisory committees and boards, meetings with independent groups, and agency-sponsored user conferences. 
We also met with key agency officials to discuss their programs and policies in the context of the selected guidelines. For example, to determine if the agencies had clearly defined and well-accepted missions, we discussed with agency officials the process by which the mission statements were developed (i.e., through planning conferences or other means) and compared the mission statements to authorizing legislation and agency activities to carry out their statistical missions. Our second objective was to provide information on OMB’s role in coordinating and overseeing the statistical activities of those agencies that constitute the federal statistical system. To do so, we reviewed the requirements contained in the Paperwork Reduction Act for OMB’s responsibilities to coordinate the federal statistical system. We also reviewed published studies on organization and coordination of the federal statistical system. In agreement with the requesters, we predominantly focused on OMB’s role in coordinating federal statistical agencies’ budgets and did not address the other aspects of OMB’s role, such as assessing the quality of statistical data, statistical standards, and paperwork reduction. We reviewed OMB’s annual reports on statistical activities of the U.S. government and the four agencies’ budget submissions for fiscal years 1983 to 1995. We met with officials from OMB’s Statistical Policy Branch, which is responsible for coordinating the budgets and policies of the federal statistical system, to discuss the Branch’s budget coordination mission and the resources it has to carry out this mission. We did our work between June 1992 and February 1995 in Washington, D.C., in accordance with generally accepted government auditing standards. The Department of Commerce, BLS, NCHS, and OMB provided comments on a draft of this report. Commerce’s written comments incorporated comments from BEA and Census. 
All of Census’ and most of BEA’s comments were suggestions for technical clarifications and corrections, and we have incorporated these suggestions where appropriate. BEA said that our report underscores the efforts statistical agencies have made to operate effectively and to maintain user confidence in the data they produce. BEA also noted that it agrees in principle with the NAS guidelines and the way we applied them to the statistical agencies. BEA expressed its appreciation for our portrayal of how it handled the integrity issues involving previous GDP estimates. BEA also cited two issues that it believed needed to be clarified in the report. First, BEA thought that we portrayed the statistical agencies as passive participants in efforts to enhance data sharing among themselves. This was not our intention, and we have revised the report on page 27 to acknowledge an interagency task force that was formed to develop proposals for enhanced data sharing. The second issue raised by BEA involved our discussion of its efforts to get input from data users. BEA felt we should have mentioned its Mid-Decade Strategic Review and Plan, which is intended to maintain and review the performance of BEA’s economic accounts. According to BEA, this review includes seeking user input on how the accounts can be improved. We have revised the report on page 21 to include a discussion of the mid-decade review and plan. On June 7, 1995, we met with the Chief Statistician and a senior economist in OMB’s Office of Information and Regulatory Affairs. The officials generally agreed with our evaluation of the four agencies’ adherence to the selected NAS guidelines. However, the officials said that our report appeared to indicate that coordination among the statistical agencies is limited to their data-sharing arrangements. The officials noted that the agencies coordinate in many ways, including through working groups on statistical standards, survey design, and data collection. 
We did not intend to convey the impression that agency coordination is limited to data sharing, and we have revised the report on page 27 to clarify the extent of coordination among statistical agencies. The OMB officials also said that the draft did not adequately reflect the full extent of the coordination activities performed by OMB’s Statistical Policy Branch. We have revised the report on pages 43 to 45 to reflect the description of the Branch’s budget coordination function, which includes working with the major statistical agencies and the OMB program examiners assigned to them to coordinate the statistical budgets of these agencies. The officials also said that the draft did not adequately describe the Branch’s role in the coordination of federal statistical policy. We agree that the Branch plays an important role in the coordination of federal statistical policy, but our report focused on its budget coordination function. However, we have revised the report on pages 17, 18, and 43 to clarify that the Branch has other responsibilities in addition to budget coordination. The officials also offered suggestions for technical corrections and clarifications, which we have incorporated where appropriate. BLS and NCHS provided oral comments on the draft report. On June 5, 1995, we met at BLS with the Chief, Division of Management Functions and the Chief, Division of Financial Planning and Management. The officials made suggestions for technical corrections and clarifications, which we have incorporated. On June 6, we spoke with the Chief of NCHS’ Planning, Budget and Legislative staff, who made suggestions for technical corrections and clarifications, which we have also incorporated. The four agencies adhered to five of the seven selected guidelines with only minor exceptions.
The agencies (1) had clearly defined and well-accepted mission statements, (2) cooperated with data users by soliciting their views on data quality, (3) treated data providers fairly, (4) openly described all aspects of their data to users, and (5) widely disseminated the data they produced. However, we found that the agencies did not or could not meet all aspects of the other two guidelines, which involved the agencies’ coordination with other statistical agencies and their measure of independence. First, although the agencies coordinated to some extent with other statistical agencies, their coordination was limited by data provider confidentiality statutes, and initiatives to modify the limitations through legislative change have not yet succeeded. Second, the agencies themselves were generally politically independent, but we have reported on one instance in which a statistical agency—BEA—had not been successful in conveying this independence to data users, judging by allegations of political interference in its work. In a March 1993 report, GAO noted that a collection of articles that appeared in the press from October 1991 through November 1992 alleged that BEA had manipulated its first quarter gross domestic product estimates for political purposes. The report concluded that the allegations were not substantiated and recommended actions to avoid such allegations in the future. Following our recommendation, BEA has formulated a strategy to counter misperceptions on the matter of its independence.

“An agency’s mission should include responsibility for assessing needs for information and determining sources of data, measurement methods, and efficient methods of collection and ensuring the public availability of needed data, including, if necessary, the establishment of a data collection program.”

Each agency provided us with statements that described the mission of the agency, the scope of its program, and its authority and responsibilities.
In addition, officials from each agency described the process by which the mission statements were developed (e.g., through planning conferences). We found some mission statements contained in legislation; others were issued by the agencies administratively, which is permissible under the NAS guidelines for agencies that have only very general legislative authority. Implementing regulations and official publication releases also mentioned the missions of the four agencies. All of these agencies had mission statements that had been in effect for a number of years.

“A statistics agency should consult with a broad spectrum of users of its data in order to make its products more useful. It should:
—seek advice on data concepts, methods, and products in a variety of formal and informal ways, from data users as well as from professional and technical subject-matter experts.
—seek advice from external groups on its statistical program as a whole, on setting statistical priorities, and on the statistical methodologies it uses.
—endeavor to meet the needs for access to data while maintaining appropriate safeguards for the confidentiality of individual responses.
—exercise care to make its data equally accessible to all potential users.”

We found each of the four agencies had policies for requesting and receiving feedback from data users, including other statistical agencies, by a variety of means. The agencies also cooperated with data users by maintaining appropriate confidentiality safeguards of respondents and making data available to all potential users. Census, BLS, BEA, and NCHS have communicated with data users mainly through formal advisory committees of users and statistical data centers of individual state governments (such as the Census State Data Center network). The agencies have consulted these advisory committees and state government units on issues of users’ data needs, including the frequency of surveys, content, geographic level, and type of product.
For example, in conducting its Mid-Decade Strategic Review and Plan, BEA publicly reviewed the status of its economic accounts and actively solicited wide user input—including organizing a well-attended user conference. In addition to consulting formal advisory groups, all four of the agencies have on occasion contracted with independent groups to receive advice on the agencies’ respective methodologies. These contacts also helped make data accessible to all potential users. For example, BLS contracted with the American Statistical Association to conduct an independent review of BLS’ downward revision of the March 1991 benchmark for the monthly payroll survey of employment estimates. Also, at the request of NCHS, NAS and the Institute of Medicine convened a panel of experts to evaluate NCHS’ plans for the National Health Care Survey. NAS also has convened two ongoing panels of experts, which were formed at congressional and agency request, to advise Census on the data requirements of the 2000 Decennial Census and on possible methodological approaches that Census should take to meet these requirements. Employees of all four agencies frequently participated in statistical conferences to exchange ideas with researchers and statisticians from other federal agencies, universities, and private sector organizations. In addition, agency employees take part in meetings with various organizations and professional associations, such as CNSTAT, the American Statistical Association, the American Economic Association, the National Association of Business Economists, and other organizations and associations that are relevant to their statistical activities and research. Census, BLS, and NCHS have regular conferences with cooperating state statistical agencies. On occasion, these three agencies also sponsor user conferences. For example, BLS sponsored user conferences in 1994 concerning the major redesign of the Current Population Survey and NCHS has biennial user conferences. 
In addition, other forms of contact with data users can include agencies’ conducting OMB-approved surveys on specific data measures. Comments from users are also sometimes solicited through a published Federal Register notice. As we noted in chapter 1, government agencies are extensive users of federal statistics, and the statistical agencies maintain contacts with these users and among themselves as well. For example, OMB chairs monthly meetings with executive branch statistical agency heads to help coordinate agencies’ statistical activities.

“To maintain credibility and a relationship of respect and trust with data subjects and other data providers, an agency must observe fair information practices. Such practices include:
— policies and procedures to maintain the confidentiality of individual responses. An agency avoids activities that might lead to a misperception that confidentiality assurances have been breached.
— informing respondents of the conditions of participation in a data collection and the anticipated uses of the information.
— minimizing the contribution of time and effort asked of respondents, consistent with the purposes of the data collection activity.”

We found that all four agencies had laws, regulations, or policies in place to maintain the confidentiality of data providers. The confidentiality provisions of Census, BEA, and NCHS are statutorily based. BLS relies on a commissioner’s order, which is similar in language to a statutory confidentiality provision, to state its treatment of the confidential nature of BLS’ records. Census is subject by law to strict confidentiality provisions controlling data it collects. The Census Bureau cannot “make any publication whereby the data furnished by any particular establishment or individual under this title can be identified.” The law also provides penalties for inappropriate disclosure of information or for uses other than statistical purposes and restricts access to data to Census employees.
Two statutes contain confidentiality provisions that apply specifically to BEA. The provision in one statute broadly pertains to “any statistical information furnished in confidence” to BEA and provides that the information “shall be held to be confidential, and shall be used only for the statistical purposes for which it was supplied.” The provision of the other statute—the International Investment and Trade Services Survey Act—covers BEA’s direct investment and international services surveys. The provision specifies that the individual company data collected under the act can be used only for analytical and statistical purposes, and it limits access to the data to officials and employees of government agencies that are specifically designated by the President to perform functions under the act. A 1990 amendment to the act permits BEA to share data with Census and BLS to obtain those agencies’ more detailed, establishment-level data for the foreign-owned U.S. enterprises that report to BEA. NCHS is bound by the Public Health Service Act, as amended. Under the act, no information NCHS obtains in the course of statistical activities may be used for any purpose other than that for which it was supplied, unless authorized under regulations of the Secretary of Health and Human Services. BLS relies on a commissioner’s order to state its treatment of the confidential nature of BLS’ records. The order provides specific detail on how data are to be safeguarded. BLS sought legislation in 1990 to codify certain confidentiality protection, but Congress did not act on the legislation. We did not evaluate the effectiveness of statutory provisions or regulations in maintaining the confidentiality of the four agencies’ data providers. However, in 1993 NAS issued a report that dealt with confidentiality issues. The report concluded that opportunities existed for federal agencies to improve data protection without diminishing data access.
Specifically, the report noted that unless pledges of confidentiality are backed by legal authority, they provide an inadequate shield against unauthorized administrative uses. In addition, the four agencies provided us with documentation that shows how they inform respondents of the conditions of participation in agency data collection and the anticipated uses of the data. For example, the agencies print on their questionnaires a notice of the confidential treatment to be accorded the information provided by respondents. The four agencies also attempt to minimize the time and effort asked of respondents by following the processes established by OMB under the Paperwork Reduction Act. Under these processes, OMB must review and approve data collection questionnaires to ensure that the paperwork burden on the public is minimized. “An agency should fully describe its data and comment on their relevance to specific major uses. It should describe the methods used, the assumptions made, the limitations of data, the manners by which data linkages are made, and the results of research on the methods and data.” We found that all four agencies had documentation that established procedures for openness with data users in describing all aspects of the agencies’ data. (We did not verify agency compliance with these documented procedures.) Each agency makes a wide range of statistics and related information available to users and provides publications explaining the types of statistics it produces. Each agency also publishes analyses that include the relevance, methodology, assumptions, and results of the data. For example, monthly publications, such as BEA’s Survey of Current Business and BLS’ Monthly Labor Review, contain statistics and articles that describe how those statistics were compiled as well as the limitations of the data. 
Each of the four agencies provided us with documentation showing the procedures it is to follow for agency operations and data dissemination, including publication policies, types of data products, and publication and release schedules.

“— Dissemination of data and information (basic series, analytic reports, press releases, public-use tapes) should be timely and public. Avenues of dissemination should be chosen to reach as broad a public as reasonably possible.
— Release of information should not be subject to actual or perceived political interference.
— An agency should have an established publications policy that describes, for a data collection program, the types of reports and other data releases to be made available, the audience to be served, and the frequency of release.
— A policy for the preservation of data should guide what data to retain and how they are to be archived for secondary analysis.”

In its guidelines, NAS included a series of steps that an agency should follow in releasing and preserving the data for which it is responsible. We found that the four agencies have policies in place for data dissemination and preservation that would meet this guideline. However, one aspect of this guideline indicates that the release of data should be free of political interference. As we discuss in the section on the NAS guideline for statistical agency independence, our previous work indicates that BEA has been subject to unfounded accusations that its data have been politically manipulated. The four agencies disseminate statistics and information on those statistics to the public. We found that all four generally choose methods of dissemination of information to reach a broad public audience. The processes and management of the distribution of statistical products (e.g., printed, microfiche, film, CD-ROM) are similar for each of the four agencies.
All of the agencies have publications that describe the types of reports and other publications on statistical censuses and surveys that are available to the public. The purpose of these publications is also to introduce users to the data systems, to suggest research opportunities, and to indicate how and when data are made available. Each of these agencies has established orders and policies for the publishing, release, and distribution of statistics. Each agency requires all printed and electronic materials and speeches to be cleared by designated offices (e.g., the Office of Publications and Special Studies in BLS) before their release. The frequency of release of economic statistics for all federal statistical agencies is covered by an OMB directive. The processes and management regarding policies on archival preservation and records management are also similar for each of the four agencies. Each agency is subject to the standards established by the National Archives and Records Administration and the General Services Administration for records maintenance and the disposition of records through transfer to federal records centers. This NAS guideline emphasizes the importance of federal statistical agencies’ coordinating with each other as well as with state, local, foreign, and international statistical agencies when appropriate. The guidelines indicate that the most important aspect of coordination among federal agencies is the sharing of data. The statistical agencies have been active in recommending and supporting efforts to enhance data sharing. For example, for the past several years, the Statistics 2000 task force—composed of members from the major statistical agencies—has worked with OMB and Congress in developing proposals for enhanced data sharing. However, we found that data sharing among federal agencies was limited by the provisions designed to protect the confidentiality of individual data providers. 
The guideline also states that federal agencies should, when possible and appropriate, cooperate with state and local statistical agencies in the provision of subnational data. We found that the four agencies cooperated with state and local governments to the extent necessary to obtain the subnational data they needed. “Data sharing and statistical uses of administrative records make a statistical agency more effective as well as efficient.” The issue of data sharing among federal agencies for statistical purposes has been a long-standing and complicated problem. Because the federal statistical system is decentralized, different agencies are sometimes responsible for the various stages of statistics production. For example, Census conducts the Current Population Survey, which is the source of the nation’s monthly unemployment estimates, but BLS calculates and releases these estimates. Decentralization also results in different agencies’ obtaining data from the same source; for instance, both Census and the Department of Agriculture survey farm owners. However, agency confidentiality provisions discussed earlier that permit data to be seen only by the employees of a single agency present a formidable barrier to meeting the data sharing envisioned by the NAS guideline. In some instances, to comply with confidentiality requirements, agencies must duplicate the work being done by other agencies. For example, the National Agricultural Statistics Service of the Department of Agriculture must compile its own list of farms because it does not have access to the list of farms compiled by Census for conducting the agricultural census. Similarly, other agencies are not allowed access to Census’ Standard Statistical Establishment List for statistical sampling purposes. Because of provisions limiting access to Census records, other statistical agencies at times have had only limited access to data the agencies had paid Census to collect. 
While BLS and BEA have recently been allowed more access to these data from the Census Bureau, the problem still exists for other statistical agencies, including NCHS. Over the past decade, OMB has sought legislative changes that would allow greater sharing of data and information on data sources among agencies, but its efforts have met with little success. The Paperwork Reduction Act of 1980 gave the Director of OMB the authority to direct a statistical agency to share information it had collected with another statistical agency. However, this authority was limited since it did not apply to information that was covered by laws prohibiting disclosure outside the collecting agency. In the early 1980s, the statistical agencies, under OMB’s leadership, tried to further enable federal statistical agencies to share data. They attempted to synthesize, in a single bill, a set of confidentiality policies that could be applied consistently to all federal agencies or their components that collected data for statistical purposes. This effort, known as the “statistical enclave” bill, would have allowed statistical agencies to exchange information under specific controls intended to preserve the confidentiality of the data providers. A bill was introduced in Congress but was not enacted. During the Bush administration, OMB drafted legislation that would have permitted disclosure of information to statistical agencies on a case-by-case basis and only for statistical purposes. The legislation was not introduced in Congress. Some recent laws that established new statistical agencies or data requirements do permit data sharing among federal statistical agencies. The confidentiality provisions of the laws that created the National Agricultural Statistics Service and the National Center for Education Statistics allow these agencies to share their data with other agencies as long as confidentiality is maintained. 
The National Agricultural Statistics Service, for example, has used its statutory authority to facilitate data exchange agreements with Census. Similarly, to improve the quality of data on foreign direct investment in the United States, the Foreign Direct Investment and International Financial Data Improvements Act of 1990 required BEA and Census to share data and required BEA to provide data to BLS to develop establishment-level information on foreign direct investment in the United States. The act stipulated that the agencies maintain the confidentiality of data providers.

The National Performance Review (NPR) recommended the elimination of legislative barriers to the exchange of business data among federal statistical agencies, and we agree with this recommendation. The NPR recommendation does not address the sharing of information on individuals. The NAS guideline on data sharing does not distinguish between data on businesses and data on individuals. Some officials of statistical agencies and Members of Congress, however, have argued that a distinction should be made between the sharing of business data and the sharing of personal data about individuals. They note that breaches of confidentiality protection for personal information may be more serious.

“When possible and appropriate, federal statistical agencies should cooperate with state and local statistical agencies in the provision of data for subnational areas.”

Each of the four agencies had cooperative arrangements with state and local governments for obtaining and disseminating statistical data. However, the extent and nature of these relationships differed by agency. BEA received most of the data necessary for its estimates on the domestic economy from other federal agencies and, as a result, had less direct contact with state agencies. BEA’s contacts with state and local governments were entirely focused on data dissemination.
BEA provides state and county personal income estimates to over 200 state offices that disseminate the data to users within each state. BEA also makes its long-term regional projections of employment available to state planning offices before the projections are finalized, to aid the offices in preparing their own projections.

Census has extensive contact with state and local governments to cooperate in both disseminating and obtaining data. Census makes data available to state and local governments through designated State Data Centers at state statistical agencies or universities. Census also relies heavily on state governments for the data needed for population estimates, apart from the decennial census, and obtains financial and employment data from state and local governments for the economic census (including the Census of Governments) as well as for current economic reports. It also obtains comments from state and local governments on preliminary decennial census counts. Census does not, however, provide funding to state and local governments for any of the assistance they provide.

NCHS has extensive contact with states to cooperate in collecting and disseminating health statistics. NCHS relies heavily on states for health-related information from birth, death, and marriage certificates. In 1995, NCHS provided $12.9 million to states to support their health statistical systems. NCHS also works with the states to develop designated state centers for health statistics that collect and disseminate data, but it does not provide direct funding for these centers.

BLS has had extensive contacts with states since 1917, when BLS inaugurated its current employment statistics program. This program encouraged states to develop their own statistical offices in order to standardize data, increase its coverage, and prevent duplication of effort between the federal and state governments.
BLS relies on states to collect data for the Labor Market Information program and the Occupational Safety and Health Statistics program. BLS provides guidance, training, and federal funds for operational expenses. BLS’ fiscal year 1995 budget proposed purchasing $80.8 million in statistical services from state and local governments.

“Circumstances of different agencies may govern the exact form independence takes. Some aspects of independence, not all of which are required, are the following:

— independence mandated in organic legislation or encouraged by organizational structure. In essence, a statistical agency must be distinct from the enforcement and policy-making activities carried out by the department in which the agency is located. To be credible, a statistical agency must clearly be impartial. It must avoid even the appearance that its collection and reporting of data might be manipulated for political purposes or that individually identifiable data might be turned over for administrative, regulatory, or enforcement purposes.

— independence of the agency head and recognition that he or she should be professionally qualified. Appointment by the President with approval by the Senate, for a specific term not coincident with that of the administration, strengthens the independence of an agency head. Direct access to the secretary of the department or head of the independent agency in which the statistical agency is located is important.

— broad authority over scope, content, and frequency of data collected, compiled, or published. Most statistical agencies have broad authority, limited by budgetary restraints, departmental pressures, Office of Management and Budget (OMB) review, and congressional mandates.

— primary authority for selection and promotion of professional staff.

— recognition by policy officials outside the statistical agency of its authority to release statistical information without prior clearance.
— authority for statistical agency heads and qualified staff to speak on the agency’s statistical program before Congress, with congressional staff, and before public bodies.

— adherence to predetermined schedules in public release of important economic or other indicator data to prevent manipulation of release dates for political purposes.

— maintenance of a clear distinction between the release of statistical information and the policy interpretations of such statements by the secretary of the department, the President, or others.”

Since the guideline states that agencies need not meet all the aspects to be independent, we generally examined how each agency safeguards its independence. We found that for each agency laws and/or regulations existed to protect the agency’s independence. However, we found that BEA has had problems in one of the most important aspects of this guideline—avoiding the appearance that its data are subject to manipulation. Although we found no evidence that BEA’s data have been subject to political manipulation, BEA at times has had to address allegations that the data were politically tainted.

Legislative mandates and organizational placement afford a degree of independence to each of the four agencies. Each agency is organizationally distinct from its department’s enforcement and policymaking activities. Officials from each of the four agencies told us that the agencies were not directly involved in their respective department’s policymaking or program implementation. However, the agencies differ in their organizational placement within their parent departments, ranging from BLS at the highest organizational level to NCHS several levels lower. We were unable to establish whether the level of organizational placement affected the independence of the four statistical agencies.
The BLS Commissioner and the Census Director are appointed by the president and confirmed by the Senate, while the directors of BEA and NCHS are appointed within their respective departments and are not subject to Senate confirmation. The BLS Commissioner reports directly to the Secretary of Labor. Census and BEA are in the Economics and Statistics Administration of the Commerce Department, and their directors report to the Under Secretary in charge of that Administration. NCHS is a division of the Centers for Disease Control and Prevention of the Public Health Service, which are all within the Department of Health and Human Services. From its inception in 1977 until 1987, NCHS was placed in the Office of the Assistant Secretary for Health. Some observers argue that a statistical agency is more appropriately placed at the assistant secretary level, primarily because an agency at that level sits higher within the department and can exercise more budgetary control. We were unable to determine the amount of access the four agency heads had to the secretaries of the departments in which their agencies are located.

According to members of the CNSTAT panel that wrote the guidelines, BLS served as a model for CNSTAT in fashioning those aspects of the guideline dealing with the process for appointing agency heads. The BLS Commissioner is appointed by the president, confirmed by the Senate, and serves a renewable 4-year term. The fact that the Commissioner can be reappointed has helped BLS maintain continuity of leadership over the years. The previous Commissioner, who was appointed in 1979, served three terms until December 1991. Since its inception in 1884, BLS has had only 11 commissioners.
The Census Director is appointed by the president and confirmed by the Senate, but the term traditionally has been concurrent with administrations, and the director has served at the “pleasure of the President.” The Director of NCHS is a career position and not a presidential appointment. The BEA Director also is a career position and is appointed by the Under Secretary of Commerce for Economic Affairs. Although the NAS guideline indicates that independence is best ensured when a statistical agency head is appointed by the president and confirmed by the Senate, BEA and NCHS have benefited from the continuity of having career directors, particularly BEA. Throughout its history, BEA has had stable leadership from career civil servants who have been experts in the field of economic statistics. BEA’s first Director was also Director of BEA’s predecessor, the Office of Business Economics, and he served from 1950 to 1964. The second BEA Director served from 1964 to 1985. The third Director served until 1992, and the fourth Director, who left office this year, previously served as Deputy Director.

In contrast to BEA, the recent experiences of Census and BLS illustrate that presidential appointment and confirmation procedures can take a year or longer, leaving an agency without a formal head for extended periods of time. For example, for the last 15 years Census has had an acting director for 42 months (23 percent of the time); in the last 5 years, Census has had an acting director for 23 months (38 percent of the time). The position of Director of the Census Bureau was vacant from January 1993 until October 1994. Similarly, BLS was without a Commissioner from the previous Commissioner’s retirement in December 1991 until the current Commissioner’s confirmation in October 1993. Currently, BEA and NCHS have acting directors.

The recent heads of the four agencies have professional qualifications for their positions.
Each had advanced degrees in statistics, economics, or other relevant fields (e.g., medicine). Each also came from a profession that involves extensive work with statistical data and measurement issues. Congress is a major user of the statistics produced by all four of the agencies. The heads of the agencies testify before congressional committees to report the results of their statistical activities and to explain their budget requests. The agency heads also appear regularly at user conferences to discuss aspects of their statistical programs.

As the NAS guideline indicates, one of the ways in which the federal statistical system can guard against the perception of political interference is by carefully controlling the release of important statistical data. The release of economic statistical data produced by Census, BLS, and BEA is governed by OMB Statistical Policy Directive No. 3. (Because NCHS produces health and not economic data, it is not subject to this policy directive.) Statistical Policy Directive No. 3 provides guidance to federal statistical agencies on the compilation, release, and evaluation of principal federal economic indicators. The directive establishes the authority of the agencies to release statistical information without prior clearance or policy interpretations. Procedures established by this directive were designed to ensure that key economic data that are the basis for government and private sector actions and plans are released promptly and on a regular schedule, that no one benefits from “inside” access to the data before they are available to the public, and that there is public confidence in the integrity of the data. Also, the directive does not limit the authority of the agencies over the scope, content, and frequency of economic data collected, compiled, or published. Statistical Policy Directive No. 3 has established procedures to protect against manipulation of the timing or content of major economic data.
The procedures are also designed to defend against accusations of political interference. NCHS also controls the release of its data, makes the data available through the National Technical Information Service, and publishes its data in other federal publications (e.g., Census’ Statistical Abstracts).

Each December, OMB publishes a schedule of the major economic statistical releases for the next year. For example, OMB has announced release dates for quarterly data, such as the GDP and personal income, before the beginning of each calendar year. The agencies responsible for economic statistics provide the information on release schedules to OMB in accordance with the directive. Because most major federal statistics are released according to a set schedule, the four statistical agencies do not need to seek clearance from policy officials in their respective departments. Similarly, these release schedules help to maintain a distinction between the four agencies’ statistical releases and the policy interpretations of the statistics by department or administration officials.

The four agencies, for the most part, adhere to the other aspects of this guideline. According to officials from each of the agencies, their agencies have some authority over the scope, content, and frequency of data collection, compilation, or publication. However, this authority is limited by budgetary constraints and federal regulations, such as those intended to reduce paperwork burdens on businesses and individuals. Officials from the four agencies also noted that the heads of their agencies had primary authority for selection and promotion of professional staff.

Data such as those issued by the four agencies shed light on economic and social conditions prevailing in the country. The press and public use these data as indicators of the impact of the policies of the administration in office.
Political leaders recognize this impact and have occasionally considered attempting to control the release of statistical data in advantageous ways. It is therefore important that the data released by statistical agencies not be manipulated for political purposes nor tainted by a perception that such manipulation may have occurred. However, we noted in our 1993 report that some BEA and BLS actions may have contributed to the perception of interference.

In this 1993 report, we examined how BEA had come to be falsely accused of manipulating economic data and how it dealt with these allegations. The incident began in October 1991 when an article appearing in Barron’s alleged that BEA, in order to inflate the first quarter 1991 GDP for political purposes, did not incorporate BLS’ downward revision of employment levels into its estimates of state personal income growth. Another Barron’s article appeared in December 1991 asserting that BEA increased other components of the GDP to ensure that there was no economic impact from the employment revision in the GDP. Through the rest of 1991 and 1992, the press continued to raise questions and concerns about the integrity and accuracy of BEA’s economic statistics as well as BLS employment data.

Our 1993 review revealed no evidence of political interference or manipulation of the first quarter GDP estimates. We found that BEA had properly incorporated employment revisions in its GDP estimates. We also noted that both BEA and BLS followed their standard data release policies and that the integrity of the GDP statistics was sound. However, we concluded that BEA had not adequately documented or publicly explained its procedures for incorporating employment data into its GDP estimates. We also concluded that BEA had not responded to the allegations when they first occurred, which fueled suspicions that the estimated GDP had been manipulated.
We recommended that BEA formulate a strategy to provide better explanation and documentation of its procedures to general users and assure Congress and the general public of the integrity and credibility of its estimates. Fulfilling this recommendation in May 1993, the Director of BEA forwarded to the Secretary of Commerce “A Strategy to Improve the Perceived Integrity of BEA’s Estimates.” This strategy calls for BEA to communicate more clearly and widely about technical factors affecting its estimates through a combination of new technical notes, testimony, briefings, and availability of the Director to talk with the media. This strategy is to include greater communication about BEA’s procedures and safeguards to protect the independence and integrity of its statistical estimates.

All four statistical agencies generally followed most aspects of the NAS guidelines discussed in this report. Each agency had a clear and well-defined mission and procedures designed to enhance cooperation with data users. Each agency had procedures to maintain the confidentiality of data providers, inform respondents of data collection rights and uses of the data, and minimize the time and effort asked of respondents. In addition, each agency was open with data users in describing the statistics available, methodology used, and related information. We found that the four agencies had policies that generally provided for the dissemination and preservation of their data.

One of the NAS guidelines calls for coordination among federal statistical agencies. Although the four agencies generally followed this guideline, coordination among federal agencies was sometimes hampered by legal restrictions designed to protect the confidentiality of data providers. OMB and the statistical agencies have unsuccessfully sought legislative changes that would lessen data-sharing restrictions among federal agencies.
Finally, while each agency has policies and procedures to ensure its independent authority to release statistical information, we found that a statistical agency can sometimes communicate data in a way that leaves users with the misperception that the data had been manipulated for political purposes.

NAS’ guidelines focused on the principles and practices that NAS determined were essential for the effective operation of federal statistical agencies. However, these agencies do not carry out their statistical activities in isolation but as part of an interdependent federal statistical system. An interdependent system requires good coordination to operate effectively. Such coordination is especially important considering the funding limitations faced by all federal agencies. Legislation requires that OMB, among its other responsibilities for the statistical system, coordinate the budgets of the statistical agencies to ensure that the budgets conform to governmentwide statistical priorities.

Many of the agencies in the federal statistical system produce statistics only to aid in the administration of the mission-related programs for which they are responsible. However, several, including the four agencies that are the focus of this report, produce statistics as their primary missions. Since no single agency is responsible for collecting and producing all of the statistics the nation needs, agencies often must work together to ensure that those needs are met efficiently. Thus, agencies that collect information in a particular statistical area often must coordinate with the agencies that analyze and disseminate this information. For example, BLS relies on Census to conduct the monthly Current Population Survey from which BLS derives monthly unemployment statistics. Similarly, although the U.S.
Customs Service collects information on the country’s imports and exports, Census is responsible for analyzing and disseminating this information as the nation’s merchandise trade statistics.

The agencies of the federal statistical system also must share the limited funds available for performing statistical activities. The financial interdependence of the federal statistical system is illustrated by the flow of funds among the four agencies and between these and other agencies throughout the government. For example, NCHS pays Census to conduct NCHS’ National Health Interview Survey, which is a source of much of the health data that NCHS issues. Similarly, BLS pays Census for a major part of the cost of the Current Population Survey, which BLS uses to produce unemployment estimates. BEA relies greatly on the data provided by BLS, Census, and other agencies to produce the National Income and Product Accounts.

OMB estimated in the President’s 1995 budget that federal agencies provided $467 million to the federal statistical agencies through reimbursements for statistical work, such as conducting surveys. This amount represents about 15 percent of total federal funding for the 72 statistical agencies. Moreover, these statistical agencies were collectively budgeted $232.5 million, which is 9.1 percent of their total direct funding, to purchase, through reimbursable agreements, statistical services from each other. Figures 3.1 and 3.2 show fiscal year 1995 reimbursable services and purchases of statistical data, respectively, as a percentage of total funding among the four agencies discussed in this report.

In the past few years, limited funding has been available for all statistical activities, and, as discussed earlier, some statistical agencies reimburse other agencies for performing statistical services.
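The reimbursement figures cited above can be cross-checked with simple arithmetic. The sketch below backs out the funding totals implied by the quoted dollar amounts and percentages; the variable names are illustrative, and the derived totals are only approximations implied by the report's rounded figures, not numbers stated in the report.

```python
# Figures from the President's 1995 budget as cited in the text.
reimbursements = 467.0         # $ millions paid to statistical agencies for statistical work
share_of_total = 0.15          # "about 15 percent of total federal funding"

interagency_purchases = 232.5  # $ millions budgeted for services purchased between agencies
share_of_direct = 0.091        # "9.1 percent of their total direct funding"

# Back out the totals implied by each percentage (approximate, since the
# quoted percentages are rounded).
implied_total_funding = reimbursements / share_of_total          # about $3,113 million
implied_direct_funding = interagency_purchases / share_of_direct # about $2,555 million

print(round(implied_total_funding), round(implied_direct_funding))
```

The two implied totals differ because they measure different bases: total federal funding for the 72 statistical agencies versus the agencies' total direct funding.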
Figure 3.3 shows actual budgets for all federal statistical activities, including decennial censuses, for the period from 1981 through 1995, in current dollars for the year when the budgets were approved and in constant 1995 dollars to adjust for inflation over time. Figure 3.4 shows the same information, excluding the 10-year cycle of spending for decennial censuses, which peaks during the year the census is conducted. (The 10-year cycle of the decennial Census of Population and Housing is not the only periodic cycle in the data. Several other Census programs, such as the Economic Census and the Census of Agriculture, are conducted on a 5-year cycle, most recently in 1992.)

Funding for federal statistical activities, excluding the large 10-year spending cycle for decennial censuses, has increased in the past 10 years in constant dollars, from $1,947 million in 1986 to an estimated $2,508 million in 1995. However, the increase was less than the amount of funding that federal statistical agency officials believed would have been needed to adequately maintain the federal statistical system, given the changes in the economy and society.

In 1990, the Bush administration introduced the Economics Statistics Initiative to improve the coverage and quality of economic statistics. In fiscal years 1993 and 1994, Census, BLS, and BEA collectively received 51 percent of their requests for funds for Economics Statistics Initiative work. In its 1993 budget message, the Bush administration noted that, because parts of the Economics Statistics Initiative were not funded by Congress, some statistical activities had to absorb reductions in order to provide funding for limited improvements in economic statistics. The message went on to state that further improvements in economic statistics would require more resources.
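The constant-dollar figures above imply the following real growth over the period. This is a minimal arithmetic sketch: the two funding figures come from the text, while the total and annualized growth rates are derived, not stated in the report.

```python
# Real (inflation-adjusted) growth in funding for federal statistical
# activities, excluding decennial census spending, using the constant
# 1995-dollar figures cited in the text.
funding_1986 = 1947.0  # $ millions, constant 1995 dollars
funding_1995 = 2508.0  # $ millions, constant 1995 dollars (estimated)

# Total real growth over the period, and the equivalent compound
# annual rate over the nine years from 1986 to 1995.
total_growth = funding_1995 / funding_1986 - 1
annual_growth = (funding_1995 / funding_1986) ** (1 / 9) - 1

print(f"{total_growth:.1%} total, {annual_growth:.1%} per year")
```

In real terms, funding grew roughly 29 percent over the period, a modest annual rate, which is consistent with the officials' view that the increase fell short of what the system needed.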
“Our measurements of economic performance are perforated with gaps in areas of vital importance, areas of public policy concern are poorly measured if measured at all, the data gathering system imposes too great a workload on both the agencies that gather the data and the firms that provide it, and the resulting product goes underutilized in a world in which timely and accurate information is often the key to competitive business success.”

As a consequence, the budget proposed increases of $8.6 million for Census, $17.2 million for BLS (including $5.2 million for its 10-year CPI revision), $8.1 million for BEA, and $4.4 million for other statistical agencies. The two administrations requested a total of $94 million for fiscal years 1990 through 1994 for improving the quality and coverage of economic statistics; Congress appropriated about $49 million.

Because agencies often share responsibilities for the production of federal statistics, it is important that they closely coordinate their efforts, to the extent permitted by law, so that the quality of the end statistical product is maintained. It is also important that the efforts of these agencies be coordinated in order to avoid duplication and to ensure that the limited funding available for statistical activities is used as effectively and efficiently as possible.

The Paperwork Reduction Act of 1980 assigned responsibility for coordination of the federal statistical system to OMB. Budget reviews are one way to ensure such coordination among statistical agencies. The Statistical Policy Branch in OMB is responsible for, among other duties, coordinating the budgets of these agencies. The Branch prepares a consolidated report on agencies’ statistical program budgets, but the report does not reach Congress until after Congress has begun acting on individual agency budgets.
In many respects, this delay is due to the difficulty of determining the resources allocated for statistical programs in the 60 or so agencies that are not primarily statistical in character. Consequently, Congress has not had a current consolidated picture of federal statistical activities during its budget deliberations that would provide a basis for setting priorities and allocating funding accordingly. The Paperwork Reduction Act of 1995 reauthorizes OMB’s budget coordination responsibilities for statistical activities.

OMB and its predecessor, the Bureau of the Budget, have for decades been responsible for oversight of the federal statistical system through coordination of federal statistical agency budgets. During the 1960s, OMB’s Statistical Policy and Coordination Office had a staff of about 50 and was responsible for setting statistical policy and budgetary priorities. The broad-based, detailed budget reviews by the Bureau of the Budget, and later by OMB, were in part intended to determine if agency budgets supported these priorities. OMB also prepared an analysis of budgetary needs for the federal statistical system that was included in the Presidents’ budgets when they were submitted to Congress each January.

The Statistical Policy and Coordination Office at OMB was abolished in 1977, and its functions and some staff were transferred to the Department of Commerce. Before the functions were transferred, the office employed 25 staff. While at the Department of Commerce, the staff attended OMB decision sessions but had little input into decisionmaking. In 1980, the Paperwork Reduction Act returned to OMB the statistical policy and coordination functions and the staff to carry them out. Currently, OMB’s Statistical Policy Branch is responsible for these functions and has a professional staff of five. The act does not specify the number of employees needed to carry out these functions.
The former broad-based, crosscutting review of statistical programs was not part of the budget process after the 1980 act was implemented. The need for strong oversight and coordination of the decentralized federal statistical system was recognized in law by the enactment of the Paperwork Reduction Act of 1980. The act created the Office of Information and Regulatory Affairs (OIRA) in OMB and assigned the Director of OMB and the Administrator of OIRA the responsibility for overseeing the federal statistical system and coordinating its activities. OIRA’s Statistical Policy Branch functions include the following:

• developing and reviewing long-range plans for the improved coordination and performance of federal statistical activities and programs;

• reviewing agencies’ budget proposals to ensure that the proposals are consistent with the plans;

• coordinating the functions of the federal government that concern gathering, interpreting, and disseminating statistical information;

• developing and implementing governmentwide policies, principles, standards, and guidelines concerning data sources, data collection procedures and methods, and data dissemination;

• evaluating statistical program performance and agency compliance with governmentwide policies, principles, standards, and guidelines; and

• integrating these functions with other information resources management functions of the government.

The Statistical Policy Branch is headed by a chief statistician who is appointed by the Administrator of OIRA. The Statistical Policy Branch currently has a professional staff of four working with the chief statistician, whose professional responsibilities are divided as follows:

• An economist is responsible for economic statistics, statistical policy directives, standard industrial classification, standard occupational classification, and the definition of poverty and serves as the BEA paperwork clearance desk officer.
• A mathematical statistician is responsible for methodology; natural resource, energy, environment, and agriculture statistics; and statistical legislation and serves as the Bureau of the Census’ economic surveys paperwork clearance desk officer.

• A policy analyst is responsible for international statistical coordination; health and education statistics; the Survey of Income and Program Participation; the Branch’s annual report, Statistical Programs of the U.S. Government; a schedule of release dates for principal economic indicators; and classification of race and ethnicity.

• A statistician is responsible for demographic statistics, the decennial census, metropolitan areas, and the Federal Committee on Statistical Methodology and serves as Census’ demographic surveys paperwork clearance desk officer.

“The greatest industrial nation in the world with the largest, most complex society and economy now lacks effective capacity for central coordination of its statistical activities. This is a crippling loss since ours is the most decentralized, if not fragmented, statistical system in the industrial world.”

“Economic policy will require the best possible measure of the factors critical for growth and an awareness of areas where uncertainty prevails. Serving the needs of policy makers in a time of change will require a coordinated response of the Nation’s statistical agencies. The present management of the statistical agencies makes such a response difficult.”

In a 1991 report, NAS noted that in addition to budget and staffing constraints, the interagency coordination of the federal statistical system in the previous decade had suffered a reduction in its ability to draw on and integrate information from a range of databases, particularly administrative records, and a lag in the reporting of the classification of business categories, such as the service industry.
NAS concluded that the results of this reduction and lag were reductions in the timeliness, quantity, and quality of policy-relevant data and an inaccurate portrayal of the nation’s economy. In a 1992 report, the Congressional Research Service (CRS) came to a similar conclusion. It characterized the coordination of the federal statistical system as “an opera without a conductor.” CRS stated that one of the major barriers to coordinating the statistical system was OMB’s insufficient funding to maintain adequate staff to carry out this coordination responsibility. CRS noted that OMB’s responsibilities for the oversight and coordination of the statistical system and those for the reduction of paperwork competed against other OMB responsibilities for funding and staff. As part of its responsibility for coordinating the federal statistical system, the Statistical Policy Branch is to coordinate the statistical agencies’ budget requests, which it does in detail for the 10 largest statistical agencies. The budget process can be one of the primary tools for ensuring that the nation’s statistical needs are being addressed effectively and efficiently by federal statistical agencies. However, according to published studies of OMB’s coordination role, including those by CRS, the Office of Technology Assessment, and NAS, the Branch does not do the detailed, systemwide budget reviews required by the act. These reviews are to enable OMB to determine if the budgetary resources available for statistical programs are being directed where they are most needed. The Branch’s current role in coordinating federal statistical agency budgets consists of reviewing budget submissions from the major statistical agencies and coordinating with OMB Resource Management Offices responsible for individual agency accounts to promote compliance with the administration’s funding priorities for statistical agencies. 
The Branch also reviews some other budget requests on an ad hoc basis determined by the importance of the statistical product being funded. For example, the Branch reviews budget requests relevant to data feeding into National Income and Product Accounts estimates. The Branch also compiles agency budget requests for an annual report to Congress on funding for statistical activities. However, this report is basically a compilation of the budgets for the statistical agencies approved by Congress and the current budget requests that the administration sent to Congress for statistical activities. The report is not the product of a systematic review of statistical activities. Since the Branch was established in 1981, it has delivered the report several months after the individual statistical agencies have submitted their budgets to OMB and then to Congress. According to OMB officials, the delay is attributable to delays in getting necessary data from agencies whose statistical functions are incorporated in other programs. The officials note that such data are readily available for the approximately 10 agencies that are the major components of the federal statistical system. Thus, congressional committee deliberations have already begun or even, as in fiscal year 1995, have ended before Congress has received the report. Therefore, Congress has not had a current, comprehensive picture of all resources the administration has requested for statistical activities during budget deliberations. As a result, Congress is handicapped in its ability to direct funding where it is most needed, particularly with respect to funding for agencies that are not among the major statistical agencies. As noted earlier, for a staff of five, the Statistical Policy Branch has broad responsibilities. 
Consequently, according to Branch officials, the Statistical Policy Branch is sometimes required to adjust its priorities on the basis of such factors as the imposition of new administration initiatives or a general shortage of staff. Statistical Policy Branch officials told us that resources for federal statistical activities could be allocated more effectively if a strengthened process were instituted for reviewing statistical agency budgets. The officials said that they would like OMB to reinstate its crosscutting review of statistical agency budget requests to help the administration make any necessary reallocation of resources within the federal statistical system. Until 1978, such a review appeared when the president’s budget was submitted to Congress. As the federal government continues to face budget constraints, it is likely that there will be an increasing need to reallocate the limited funding available for statistical activities. In a speech at a recent symposium sponsored by BEA, the Vice Chairman of the Federal Reserve Board called for a reallocation of funding for statistical activities. He noted that as a policymaker, he recognized the importance of accurate statistics on the economy. He went on to state that reallocating funding resources could help close some of the gaps in economic statistics, particularly gaps in statistics on the increasingly important service sector. OMB is currently settling into a major reorganization, OMB 2000, that is partly designed to encourage crosscutting reviews of federal programs. It remains to be seen whether OMB 2000 or other actions will result in the Statistical Policy Branch leading a crosscutting review that would coordinate the analysis of statistical agency budget requests.
The Paperwork Reduction Act of 1995 reauthorizes (1) OMB review of statistical agencies’ budget proposals to ensure that the proposals are consistent with long-range plans and (2) the development of an annual report to Congress summarizing and analyzing statistical activities. However, the act does not necessarily provide additional staff to OMB to perform these responsibilities. The federal statistical system is a collection of agencies with interrelated responsibilities for meeting the nation’s statistical needs. For the federal statistical agencies to work effectively, it is important that they closely coordinate their activities. The Paperwork Reduction Act of 1980 assigned OMB the responsibility for, among other things, coordinating the federal statistical system. The act specifically directed OMB to review statistical agencies’ budget submissions to ensure that the proposals are consistent with systemwide priorities. OMB’s Statistical Policy Branch currently reviews the major statistical agencies’ budget submissions. It also prepares a summary of individual agencies’ statistical budgets as submitted in the president’s budget to Congress. Since the Branch was established in 1981, the report has been issued after Congress has already started to determine the agencies’ budgets. To adequately coordinate the systemwide activities of federal statistical agencies, OIRA would also need to closely review budget submissions of the smaller statistical agencies before they are sent to Congress. Such reviews could identify such inefficiencies as duplication of effort and help to ensure that the limited federal funds for statistical activities are spent as effectively as possible. OMB’s current reorganization is intended to improve its ability to review federal programs. Recent legislation also addresses OMB’s responsibilities. 
Because of the reorganization, recent legislation, and the fact that we did not analyze the many other priorities competing for OMB’s attention and resources, we are not making any recommendations in this report.
Pursuant to a congressional request, GAO evaluated the performance of the Bureaus of the Census, Economic Analysis, and Labor Statistics, and the National Center for Health Statistics based on selected National Academy of Sciences (NAS) guidelines. GAO also provided information on the Office of Management and Budget's (OMB) role in coordinating and overseeing the statistical activities of the 72 agencies that constitute the federal statistical system. GAO found that: (1) the four agencies adhered, with only minor exceptions, to five of the seven selected NAS guidelines; (2) the NAS guidelines emphasize the importance of a statistical agency maintaining the credibility of its data and that it be perceived as free from political interference and policy advocacy; (3) coordination and sharing between federal, state, and local statistical agencies increased their effectiveness and efficiency; (4) in general, each agency had a clearly defined and well-accepted mission statement, cooperated with data users, treated data providers fairly, openly described to users all aspects of its data, and widely disseminated its data; (5) the agencies did not fully adhere to the guideline on protecting their independence from political influence because they did not always sufficiently communicate their procedures to data users; (6) the agencies could not fully coordinate with other statistical agencies because of statutory limitations to protect data providers' confidentiality; (7) OMB oversight and coordination of agencies' statistical activities are limited by a lack of staff resources; and (8) OMB is revising its formal process for reviewing statistical agencies' budgets in order to allocate its resources for coordination more effectively.
In November 1993, Congress enacted a law concerning homosexual conduct in the armed forces and required the Secretary of Defense to prescribe regulations to implement that policy. Following the enactment of the law, DOD issued its implementing guidance, including Department of Defense Instruction 1304.26, Qualification Standards for Enlistment, Appointment, and Induction. Under that instruction, applicants for enlistment, appointment, or induction shall not be asked or required to reveal their sexual orientation, nor shall they be asked to reveal whether they have engaged in homosexual conduct, unless independent evidence is received indicating that an applicant engaged in such conduct or the applicant volunteers a statement that he or she is homosexual or bisexual, or words to that effect. This is generally referred to as the “Don’t Ask, Don’t Tell” policy. In exchange for the services’ silence (“don’t ask”) about a person’s homosexuality prior to induction, gay and lesbian servicemembers, as a condition of continued service, agree to silence (“don’t tell”) about this aspect of their lives. According to our analysis of DMDC data, 3,664 active duty servicemembers were separated under the homosexual conduct policy from fiscal years 2004 through 2009. (See table 1.) This figure represents servicemembers who were on active duty at the time of their separation, including members of the Reserve or National Guard components of the military services who were on active duty for 31 or more consecutive days before their dates of separation. These servicemembers are included in the figure because according to DMDC, a servicemember in the Reserves or National Guard who was separated after at least 31 consecutive days of active duty service is considered to be an active duty separation. 
Of the 3,664 servicemembers separated from fiscal years 2004 through 2009, DOD granted “honorable” separations to 2,084 members (57 percent), “general (under honorable conditions)” separations to 369 servicemembers (10 percent), and “under other than honorable conditions” separations to 95 servicemembers (3 percent). DOD classified the separation of 2 servicemembers (less than 1 percent) as “bad conduct,” which is a type of punitive separation applicable to enlisted personnel only. DOD also granted “uncharacterized” or entry-level separations to 1,037 servicemembers (28 percent), and classified 77 separations (2 percent) as “unknown or not applicable” for servicemembers separated under the policy. The following figures present demographic breakdowns for separated servicemembers. Figure 1 shows the percentage of servicemembers separated under DOD’s homosexual conduct policy from fiscal years 2004 through 2009 by race, and figure 2 shows other demographic information for these servicemembers, including rank, length of service upon separation, gender, and military branch. In 2005, we reported on the number of servicemembers separated under the policy who held skills in critical occupations and important foreign languages and the costs of recruiting and training replacements for servicemembers separated under the homosexual conduct policy for the period covering fiscal year 1994 through fiscal year 2003. However, the information in the 2005 report cannot be compared to the information in this report for two reasons. First, for this report, we asked the services to provide the most current and complete guidance to help us determine criteria for describing critical occupations and important foreign languages. The services provided enlistment bonus lists, critical skills retention bonus lists, service-specific critical occupations lists, and foreign language proficiency bonus lists. 
We have added these criteria in order to provide a more comprehensive picture of how the services described critical occupations and important foreign languages from fiscal years 2004 through 2009. Second, in 2005, the services were unable to provide us with the training costs of Marine Corps personnel, the training costs of the medical professionals for each of the services, and the recruiting and training costs of each service’s officers. For the current report, the Marine Corps provided data on the cost to train its personnel; the services provided data on the cost to train medical professionals; and the Air Force, Navy, and Marine Corps provided data on the cost of recruiting officers. The Army was not able to provide data on the cost of officer recruiting in time for the data to be included in our analyses. The Army, Air Force, and Marine Corps provided data on the cost of training officers. However, we did not include the cost of training Navy officers because the Navy provided data that were not specific to the occupational specialties of the separated officers. In order to be consistent with our methodology of calculating training costs that are specific to the occupational specialties of separated servicemembers, we did not include the incomplete Navy data. In addition, in 2005, DOD was not able to provide us with information on the administrative costs of separating servicemembers under the homosexual conduct policy. For the current report, the Air Force, the Army, and the Marine Corps provided us with this information. The Navy explained that it was not able to provide this information because changes in separation processes from fiscal years 2004 through 2009 prevented Navy officials from providing an accurate administrative cost estimate in time for the data to be included in our analyses. The analyses in this report were current as of November 30, 2010.
As a result, the personnel and cost data provided in the 2005 report are not comparable to the information provided in this report. Based on our analysis of DMDC data, 3,664 servicemembers were separated under the homosexual conduct policy from fiscal years 2004 through 2009, and based on our analysis of information provided by the services, 1,458 (40 percent) of these servicemembers held skills in a critical occupation, an important foreign language, or both, as determined by us and the services. Servicemembers with critical occupations and important foreign language skills are not necessarily mutually exclusive groups because some critical occupations, such as cryptologic linguists and interrogators, require an important foreign language skill. According to our analysis, 7 servicemembers held a critical occupation and also held an important foreign language skill. Based on our analysis of DMDC data, of the 3,664 servicemembers who were discharged under the homosexual conduct policy from fiscal years 2004 through 2009, 1,442 (39 percent) of them held skills in critical occupations. Based on interviews with service officials, we and the services determined for the purposes of this report that an occupation was “critical” if it received a bonus under DOD’s Enlistment Bonus program, Accession Bonus for New Officers in Critical Skills program, Selective Reenlistment Bonus program, or Critical Skills Retention Bonus program. These bonus programs provide monetary incentives to individuals to help the services maintain adequate numbers of personnel in designated critical occupations. We also used service-specific critical occupations lists to determine critical occupations, such as the Air Force Stressed Career Fields List, the Marine Top Ten Critical Occupations List, and the list of occupations deemed critical under the Marine 202K Sustainment Plan. 
Table 2 shows, by service, a breakdown of the 1,442 servicemembers who held critical occupations and were separated from fiscal years 2004 through 2009. The reported number of separated Navy and Air Force servicemembers who held skills in critical occupations could be an underestimation. The Navy was not able to provide the information necessary to determine whether separated Navy servicemembers held occupations on the enlistment bonus lists because of the manner in which the Navy assigns occupational specialties to its recruits. Also, while the Navy does offer accession bonuses to new officers, Navy officials could not determine which bonuses were offered during the fiscal years of our study. Thus, we could not include any Navy occupations that were eligible for Accession Bonuses for New Officers in Critical Skills. While the Air Force was able to provide the occupational specialties eligible for enlistment bonuses from fiscal years 2006 through 2009, it was unable to provide the occupational specialties eligible for enlistment bonuses in fiscal years 2004 and 2005 because Air Force data were incomplete. Of the total population separated under the policy, 625 servicemembers (17 percent) were separated with less than 3 months of military service, 394 servicemembers (11 percent) were separated within 3 to 6 months of military service, 657 servicemembers (18 percent) were separated within 6 months to 1 year of military service, 706 servicemembers (19 percent) were separated within 1 to 2 years of military service, and 1,282 servicemembers (35 percent) were separated with 2 years or more of military service. We analyzed the lengths of service for the 1,442 servicemembers separated under the homosexual conduct policy who held skills in critical occupations from fiscal years 2004 through 2009. Figure 3 shows the amount of time served prior to separation by servicemembers who held skills in critical occupations. 
(For more detailed information on the length of service of servicemembers separated under the homosexual conduct policy who held skills in critical occupations, see table 18 in app. IV.) Of the 1,442 separated servicemembers who held skills in critical occupations, 148 (10 percent) of them held skills in intelligence-related critical occupations. The services reviewed the critical occupations held by the servicemembers separated under the homosexual conduct policy and designated the critical occupations that they deemed to be intelligence related. Examples of intelligence-related critical occupations include human intelligence collector, cryptologic technician (interpretive), intelligence specialist, and airborne cryptologic language analyst. Table 3 shows a breakdown, by service, of the 148 separated servicemembers who held intelligence-related critical occupations during the 6-year period. Of those separated who held skills in critical occupations, 1,425 were enlisted servicemembers and 17 were officers. Separated servicemembers with critical occupations served an average of 22 months, which is about 26 months less than the typical initial service contract of most enlistees and the typical officer-commissioning contract. As shown in table 4, the most common critical occupations held by separated servicemembers across all services were infantryman and military police. (See table 17 in app. IV for a more detailed list, by service, of the most common occupations held by separated servicemembers.) Based on our analysis of DMDC data, of the 3,664 servicemembers separated for homosexual conduct from fiscal years 2004 through 2009, 23 (less than 1 percent) of them held skills in an important foreign language. Based on interviews, we and the services determined for the purposes of this report that a language was “important” if a financial incentive was provided under the Foreign Language Proficiency Bonus (FLPB) program. 
This bonus program provides incentives for the acquisition, maintenance, and enhancement of foreign language skills at a particular proficiency level. The FLPB is used to increase strategic language capability throughout DOD by (1) encouraging servicemembers with foreign language proficiency to self-identify and sustain proficiency; (2) providing servicemembers an incentive to acquire foreign language skills, improve foreign language skills, or both; (3) providing servicemembers whose military specialty requires a foreign language with an incentive to expand their proficiency to other foreign languages and dialects; and (4) creating a cadre of language professionals operating at the highest levels of proficiency. Table 5 shows a breakdown across all services of the 23 servicemembers who held important foreign language skills and were separated under the homosexual conduct policy during the 6-year period. Of the 23 servicemembers separated who held skills in an important foreign language, 22 were enlisted servicemembers and 1 was an officer. Separated servicemembers with an important foreign language skill served an average of 26 months, which is about 22 months less than the typical initial service contract of most enlistees and the typical officer-commissioning contract. To assess listening, reading, and speaking proficiencies, DOD uses an 11-point scale that represents the degree of competence in the language in which a member possesses the highest proficiency. The scale includes numeric values of 00 (no proficiency), 06 (memorized proficiency), 10 (elementary proficiency), 16 (elementary proficiency plus), 20 (limited working proficiency), 26 (limited working proficiency plus), 30 (general professional proficiency), 36 (general professional proficiency plus), 40 (advanced professional proficiency), 46 (advanced professional proficiency plus), and 50 (functionally native proficiency).
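As an illustration, the FLPB qualifying threshold, a score of 20 ("limited working proficiency") or higher in any two of the three tested modalities, can be expressed as a simple check. This is a minimal sketch; the function name and structure are ours, not DOD's.

```python
# Sketch of the FLPB qualifying rule: a servicemember qualifies with
# scores of 20 or higher in any two of the three tested modalities
# (listening, reading, speaking). Illustrative only; not DOD code.

MINIMUM = 20  # "limited working proficiency" on the 11-point scale

def flpb_qualifies(listening: int, reading: int, speaking: int) -> bool:
    """Return True if at least two modality scores meet the 20 minimum."""
    scores = (listening, reading, speaking)
    return sum(score >= MINIMUM for score in scores) >= 2

print(flpb_qualifies(20, 20, 6))   # True: two modalities at the minimum
print(flpb_qualifies(30, 16, 10))  # False: only one modality at 20 or higher
```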
To receive the FLPB, servicemembers must attain a minimum of 20/20 or higher on the scale in any two modalities (listening, reading, or speaking). As shown in table 6, the most common important language skills held by separated servicemembers were Arabic and Spanish. We analyzed the lengths of service for the 23 servicemembers separated under the homosexual conduct policy who held skills in important foreign languages from fiscal years 2004 through 2009. Figure 4 shows the amount of time served prior to separation by servicemembers who held skills in important foreign languages. (For more detailed information on the length of service of servicemembers separated under the homosexual conduct policy who held skills in important foreign languages, see table 19 in app. IV.) Using available DOD cost data, we calculated that it cost DOD approximately $193.3 million ($52,800 per separation) in constant fiscal year 2009 dollars to separate and replace the 3,664 servicemembers separated under the homosexual conduct policy from fiscal years 2004 through 2009. This figure represents about $185.6 million in recruiting and training costs for replacing servicemembers separated under the policy and about $7.7 million in certain administrative costs for which we were able to obtain data. (See fig. 5 for the services’ cost of administering DOD’s homosexual conduct policy.)
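The headline figures above can be reproduced with simple arithmetic. All inputs are taken from the report; the calculation itself is our own check.

```python
# Recomputing the report's headline cost figures (constant FY2009 dollars;
# inputs from the text above, arithmetic is ours).
recruiting_and_training = 185.6e6  # cost to recruit and train replacements
administrative = 7.7e6             # administrative costs for which data were available
separations = 3664                 # servicemembers separated, FY2004 through FY2009

total = recruiting_and_training + administrative
per_separation = total / separations

print(f"total: ${total / 1e6:.1f} million")       # total: $193.3 million
print(f"per separation: ${per_separation:,.0f}")  # per separation: $52,757 (about $52,800)
```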
In calculating the services’ costs to recruit and train replacements, we used variable costs and excluded fixed costs to the extent possible because, according to service officials, there would likely be no significant increase in fixed costs when recruiting and training a relatively small number of replacement personnel. For example, in fiscal year 2009, the Army separated 195 servicemembers under the homosexual conduct policy. This means that in fiscal year 2009, the Army would have needed to recruit 195 replacements. In that same year, the Army recruited about 70,000 soldiers. Thus, in order to replace the 195 separated servicemembers in fiscal year 2009, the Army would have needed to recruit about .3 percent more soldiers than it would have otherwise recruited. According to Army officials, because this .3 percent of additional recruiting represents such a small portion of total recruiting, there would likely be no need to increase recruiting infrastructure or hire more recruiting personnel. Because the services do not use “fixed costs” and “variable costs” as categories in their recruiting and training budgets, we provided each service with a common set of criteria to define these terms, and asked each service to determine the fixed and variable components of their cost data and provide us with variable costs. However, each of the services tracks and maintains data in different ways, which in some cases affected their ability to provide us with only variable costs. For example, while the Army and Air Force were able to provide us with variable recruiting and training costs, the Navy was not able to provide variable recruiting and training costs, and the Marine Corps was not able to provide variable training costs. In these cases, Navy and Marine Corps officials explained that they were not able to provide data with only variable costs because of the way their services track these data.
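The marginal-recruiting argument above rests on a single ratio; restated as a quick calculation with the report's Army figures for fiscal year 2009:

```python
# The Army's FY2009 separations under the policy as a share of its total
# FY2009 recruiting (both figures from the report).
separated_fy2009 = 195
recruited_fy2009 = 70000  # approximate

share = separated_fy2009 / recruited_fy2009
print(f"{share:.2%}")  # 0.28%, i.e., roughly three-tenths of one percent
```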
While the Navy and Marine Corps track the total budgets of recruiting and training commands and individual courses, they do not track individual cost elements of these totals. For this reason, they were not able to determine the fixed and variable components of their cost data. To the extent that recruiting and training cost data provided by the services contain fixed costs, this would result in an overestimation of replacement costs. To calculate the administrative cost of carrying out separations, we asked the services to identify the legal and nonlegal processes associated with the separation process and requested data on personnel involved in carrying out these tasks. Using these data and military pay rates, we calculated administrative costs. While the Air Force, Army, and Marine Corps provided us with this information, the Navy did not provide data on the legal and nonlegal processes associated with carrying out separations. The Navy explained that it was not able to provide this information because changes in separation processes from fiscal years 2004 through 2009 prevented Navy officials from providing an accurate administrative cost estimate in time for the data to be included in our analyses. Because the Navy did not provide data on administrative costs, our calculation of these costs is an underestimation of DOD’s likely total administrative costs. The Air Force provided a single cost that included recruiting and training costs combined. All of the services were able to provide data related to the cost to recruit and train servicemembers. Based on these data, we calculated that it cost DOD about $185.6 million in constant fiscal year 2009 dollars to recruit and train replacements for the 3,664 servicemembers separated under the homosexual conduct policy from fiscal years 2004 through 2009.
Our calculation includes the cost to the services to recruit a new servicemember, provide him or her with basic training, and graduate the servicemember from initial skills training in the occupational specialty in which a servicemember had been separated. Our calculation of replacement costs concludes with the end of initial skills training because, according to each of the military services, this is the point in a servicemember’s career at which he or she is considered minimally qualified to perform required tasks within a separated servicemember’s occupational specialty. To the extent possible, we included variable recruiting and training costs in our calculations, such as recruiting bonuses and consumable supplies used by trainees, and excluded fixed costs, such as the cost of recruiting and training infrastructure or recruiter and instructor salaries. This approach was taken because there would likely be no significant increase in fixed costs when recruiting and training a relatively small number of replacement personnel. As shown in table 7, our calculations for the services’ replacement costs amount to about $19.4 million for the Air Force, $39.4 million for the Army, $22.0 million for the Marine Corps, and $104.9 million for the Navy. The Navy recruiting and training cost calculation is larger than the other services’ calculations because according to Navy officials, the Navy recruiting and training cost data contain both fixed and variable costs. The services were able to provide data related to the cost to recruit replacement servicemembers. We calculated that from fiscal year 2004 through 2009, it cost DOD about $25.2 million in constant fiscal year 2009 dollars to recruit replacements for servicemembers separated under the homosexual conduct policy. This calculation represents about 14 percent of the total calculated replacement cost associated with separating servicemembers under DOD’s homosexual conduct policy. 
Recruiting costs include, but are not limited to, the costs associated with enlistment bonuses; recruit travel; and recruiting support, such as the processing of a recruit’s paperwork. As shown in table 8, the Navy’s cost to recruit replacements was the largest among the services because, according to Navy officials, the Navy included both fixed and variable costs in its recruiting estimates. According to Army and Marine Corps officials, the recruiting cost data provided by the Army and Marine Corps consist of variable costs. In addition, while the Air Force, Navy, and Marine Corps provided data on the cost of recruiting officers, the Army was not able to provide data on the cost of recruiting officers in time for the data to be included in our analyses. The Air Force could not provide disaggregated recruiting and training costs and instead provided a replacement cost estimate that combines variable recruiting and training costs. The services were able to provide data related to the cost to train replacement servicemembers through initial occupational training. We calculated that from fiscal year 2004 through 2009, it cost DOD about $141.0 million in constant fiscal year 2009 dollars to train replacements for servicemembers separated under the homosexual conduct policy. This calculation represents about 76 percent of the total calculated replacement cost associated with separating servicemembers under DOD’s homosexual conduct policy. Costs associated with basic training and initial skills training include, but are not limited to, clothing and equipment, supplies, student travel, administration of courses of instruction, replacement servicemembers’ salaries and benefits during training, and overhead costs associated with training centers. As shown in table 9, there is variation in the size of our calculations of the services’ cost to train replacement servicemembers. 
For example, the Navy’s cost to train replacements was the largest among the services because the Navy included both fixed and variable costs in its training estimates. Although the Marine Corps also included fixed and variable costs in its training estimates, the Navy separated over twice as many servicemembers as the Marine Corps, which accounts for the difference between the two services’ totals. Moreover, according to the Marine Corps, a significant proportion of its servicemembers’ training is carried out by other services. However, the Marine Corps does not track the cost of training it receives from the other services and therefore could not provide us with comprehensive data on the cost to train Marine Corps personnel. Marine Corps officials explained that the other services that train Marine Corps servicemembers may contribute up to 60 percent of the total cost of training in the occupational specialties held by Marine Corps servicemembers separated under the policy from fiscal years 2004 through 2009. As can be seen in table 9, the Air Force is not included because it could not provide disaggregated recruiting and training costs and instead provided a replacement cost estimate that combines variable recruiting and training costs. While the Army, Air Force, and Marine Corps provided data on the cost of training officers, we did not include the cost of training Navy officers because the Navy provided data that were not specific to the occupational specialties of the separated officers. To be consistent with our methodology of calculating training costs that are specific to the occupational specialties of separated servicemembers, we did not include the incomplete Navy data. To the extent that recruiting and training cost data provided by the services contain fixed costs, this would result in an overestimation of replacement costs. However, we were not able to determine the extent of the replacement cost overestimation. 
The Air Force, Army, and Marine Corps were able to provide estimates on the administrative costs associated with separating servicemembers under DOD’s homosexual conduct policy. The Navy explained that it was not able to provide this information because changes in separation processes from fiscal years 2004 through 2009 prevented Navy officials from providing an accurate administrative cost estimate in time for the data to be included in our analyses. Using the estimates of the Air Force, Army, and Marine Corps, we calculated that from fiscal years 2004 through 2009, it cost DOD about $7.7 million in constant fiscal year 2009 dollars to separate 2,751 servicemembers from the three services under DOD’s homosexual conduct policy. As shown in table 10, our calculation of the services’ administrative costs for implementing the homosexual conduct policy includes two types of costs: legal and nonlegal. Legal administrative costs amounted to about $2.5 million (33 percent) of the total administrative cost, while nonlegal administrative costs amounted to about $5.2 million (67 percent) of the total administrative cost. Legal administrative costs involve the costs associated with the services’ review of homosexual conduct cases. According to the services, the legal costs include paralegal work, attorneys’ counseling of servicemembers, and board hearings. With the exception of the Navy, the services were able to identify approximately 3,700 cases associated with DOD’s homosexual conduct policy from fiscal years 2004 through 2009. These cases include board cases (cases in which a service board and legal officials reviewed a case), nonboard cases (cases in which legal officials reviewed a case, but it was not reviewed by a service board), and unsubstantiated cases (cases in which legal officials reviewed a case, but the case did not result in a separation). Table 11 shows the legal administrative costs by military service and types of cases for the 6-year period. 
According to the services, the nonlegal costs include commanders’ inquiries, pastoral counseling of servicemembers, and the processing of separation paperwork. As shown in table 12, these activities occur at successive levels of command within and outside of the servicemember’s unit. Because the Navy was not able to provide data on administrative costs in time for the data to be included in our analyses, our calculation of these costs is an underestimation of DOD’s likely total administrative costs. We were not able to determine the extent of the administrative cost underestimation. We provided a draft of this report to DOD for review and comment. DOD did not have any comments on the report. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or merrittz@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. In conducting our review of the Department of Defense’s (DOD) homosexual conduct policy, the scope of our work included active duty separations under the homosexual conduct policy across all of the service components—the Air Force, Army, Marine Corps, and Navy—for the period covering fiscal years 2004 through 2009. We also obtained the total number of Reserve and National Guard servicemembers separated under the policy during the same period of time. 
However, we did not include Reserve and National Guard servicemembers in our analysis because according to the Defense Manpower Data Center (DMDC), DOD only collects data on separations for homosexual conduct for the active duty members of the Air Force, Army, Marine Corps, and Navy. According to an official with DMDC, the official tracking of separations for homosexual conduct began in 1997, at which time it was decided to include only active duty servicemembers. Data on servicemembers separated under DOD’s homosexual conduct policy were obtained from DMDC and each of the military services and are current as of November 30, 2010. To determine the extent to which servicemembers with skills in critical occupations were separated under DOD’s homosexual conduct policy, we obtained data from DMDC on the occupational specialties held by the servicemembers separated under the policy from fiscal years 2004 through 2009. We interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness and the offices within the services that are responsible for managing occupational specialties and administering bonus programs. Based on interviews, we and the services determined for the purposes of this report that an occupation was “critical” if a financial incentive was provided under any of the enlistment, reenlistment, or retention bonus programs under Department of Defense Instruction 1304.29. 
This instruction prescribes procedures with regard to Enlistment Bonuses (monetary incentives provided to individuals enlisting in a military service for a period of time and, if applicable, in a specific military skill experiencing critical shortages); Selective Reenlistment Bonuses (monetary incentives provided to individuals to maintain adequate numbers of enlisted personnel in critical skills needed to sustain the career force); Critical Skills Retention Bonuses (monetary incentives provided to individuals to maintain adequate numbers of officers or enlisted personnel with designated critical skills needed to sustain the career force); and Accession Bonuses for New Officers in Critical Skills (monetary incentives to individuals who accept commissions or appointments as an officer and serve on active duty in a military service in a skill the service has designated a critical officer skill). However, the Navy was not able to provide the information necessary to determine whether separated Navy servicemembers held occupations on the enlistment bonus lists because of the manner in which the Navy assigns occupational specialties to its recruits. Also, while the Navy does offer accession bonuses to new officers, Navy officials could not determine which bonuses were offered under Department of Defense Instruction 1304.29 or during the fiscal years of our study. Thus, we could not include any Navy occupations that were eligible for Accession Bonuses for New Officers in Critical Skills. The reported number of separated Navy servicemembers who held skills in critical occupations would be an underestimation. While the Air Force was able to provide the occupational specialties eligible for enlistment bonuses from fiscal years 2006 through 2009, the Air Force was unable to provide the occupational specialties eligible for enlistment bonuses in fiscal years 2004 and 2005 because the Air Force’s data were incomplete. 
Thus, the reported number of separated Air Force servicemembers who held skills in critical occupations would be underestimated. We used the Army’s Top 25 Priority Occupations Lists in lieu of the Army’s Enlistment Bonus lists because the Army noted that the occupations on the Top 25 Priority Occupations Lists better represent the Army’s critical occupations for enlistment. We also included occupations found on additional lists that the services used to describe critical occupations for certain fiscal years during the period of our review, including the Air Force Stressed Career Fields List (fiscal years 2008 and 2009), the Marine Top Ten Critical Occupations List (fiscal years 2004 through 2009), and the list of occupations deemed critical under the Marine 202K Sustainment Plan (fiscal years 2007 through 2009). We then compared the occupations of the separated servicemembers to our lists of critical occupations, by fiscal year. To assess the number of servicemembers separated under DOD’s homosexual conduct policy who held skills in intelligence-related critical occupations, we asked the services to analyze the critical occupations held by the servicemembers separated under the homosexual conduct policy and designate the critical occupations that the services deemed intelligence related. To determine the extent to which servicemembers with skills in important foreign languages were separated under DOD’s homosexual conduct policy, we obtained data from DMDC on the foreign language information (i.e., foreign language, proficiency score, date of proficiency certification, and year of separation) of each enlisted servicemember and officer separated under the policy during the period of our review. We interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness and the offices within the services that are responsible for determining foreign language requirements and administering bonus programs. 
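For illustration only, the fiscal-year comparison of separated servicemembers’ occupations against the critical occupation lists can be sketched as follows; the services, fiscal years, occupation codes, and counts shown are hypothetical examples, not actual DOD or GAO data.

```python
# Hypothetical sketch of the occupation comparison described above.
# Critical occupations per (service, fiscal year); entries are illustrative.
critical_occupations = {
    ("Army", 2008): {"11B", "68W"},       # e.g., from a Top 25 Priority list
    ("Army", 2009): {"11B", "35P"},
    ("Marine Corps", 2008): {"0311"},     # e.g., from a Top Ten Critical list
}

# Each separated servicemember: (service, fiscal year of separation, occupation)
separated_members = [
    ("Army", 2008, "11B"),
    ("Army", 2009, "92G"),
    ("Marine Corps", 2008, "0311"),
]

def count_critical(members, critical):
    """Count members whose occupation was on a critical list in their year of separation."""
    return sum(1 for svc, fy, occ in members
               if occ in critical.get((svc, fy), set()))

print(count_critical(separated_members, critical_occupations))  # → 2
```

Because each service’s lists changed from year to year, the lookup is keyed by both service and fiscal year, so an occupation counts as critical only if it appeared on that service’s list in the year the servicemember was separated.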
Based on interviews, we and the services determined for the purposes of this report that a language was “important” if a financial incentive was provided under the Foreign Language Proficiency Bonus (FLPB) program. The FLPB provides a monetary incentive for the acquisition, maintenance, and enhancement of foreign language skills at or above proficiency levels required for occupational and functional performance. The FLPB is used to increase strategic language capability by (1) encouraging servicemembers with foreign language proficiency to self-identify and sustain proficiency; (2) providing servicemembers an incentive to acquire foreign language skills, improve foreign language skills, or both; (3) providing servicemembers whose military specialties require a foreign language with an incentive to expand their proficiency to other foreign languages and dialects; and (4) creating a cadre of language professionals operating at the highest levels of proficiency. To ensure that we considered the most comprehensive set of critical language skills for each service, we also used additional lists that the services used to describe these skills. Specifically, from fiscal year 2004 through fiscal year 2005, each of the services used its own specific list to determine which languages would qualify a servicemember to receive an FLPB. Subsequently, in January of fiscal year 2006, the Defense Language Office published its first annual Strategic Language List (SLL). In the SLL, DOD prioritizes languages for which (1) DOD has current and projected requirements, (2) training and testing will be provided, (3) incentives will be applied, and (4) other resources will be allocated. The SLL does not preclude the services from providing incentives for other languages for which they may have requirements. 
Therefore, from fiscal year 2006 through fiscal year 2009, each service created its own SLL based on both the DOD-wide SLL and the service’s specific language capabilities and requirements. Since fiscal year 2006, the services have each used their own SLLs to determine the languages for which their servicemembers would receive FLPBs. To assess the number of servicemembers separated under DOD’s homosexual conduct policy who held an important foreign language skill, we identified each servicemember with language skills, determined whether the languages qualified for FLPB in the year of the servicemember’s separation, reviewed the servicemember’s proficiency scores in those languages to determine whether the servicemember met the minimum requirements, and determined whether the servicemember’s annual proficiency certification was within 12 months of separation. To calculate certain costs associated with administering DOD’s homosexual conduct policy, we determined both the cost of recruiting and training through initial occupational training of the replacements of separated servicemembers and the services’ administrative costs incurred when separating servicemembers under the policy. We determined that a replacement cost methodology is the most appropriate approach, and it allows us to produce the most accurate calculation based on the nature of the data provided by the services. The replacement cost methodology allows us to calculate the cost to the services to recruit a new servicemember, provide him or her with basic training, and graduate the servicemember from initial skills training in the occupational specialty in which a servicemember had been separated. 
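For illustration only, the foreign language screening described earlier (an FLPB-qualified language in the year of separation, a proficiency score meeting the minimum requirement, and an annual certification within 12 months of separation) can be sketched as follows; the language lists, threshold, and dates are hypothetical, not the services’ actual FLPB criteria.

```python
# Hypothetical sketch of the language screening described above; all
# lists and values are illustrative, not actual FLPB criteria.
from datetime import date

# Languages qualifying for FLPB by fiscal year (illustrative)
flpb_languages = {2008: {"Arabic", "Korean"}, 2009: {"Arabic", "Chinese"}}
MIN_PROFICIENCY = 2  # assumed minimum proficiency score

def held_important_language(language, score, certified, separated, fy):
    """Return True only if all three screening criteria are met."""
    qualifies = language in flpb_languages.get(fy, set())
    proficient = score >= MIN_PROFICIENCY
    # Annual proficiency certification must be within 12 months of separation.
    current = (separated - certified).days <= 365
    return qualifies and proficient and current

print(held_important_language("Arabic", 2, date(2009, 1, 15),
                              date(2009, 6, 30), 2009))  # → True
```

A servicemember failing any one of the three checks, such as holding a language not on that year’s FLPB list, would not be counted as holding an important foreign language skill.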
Our calculation of replacement costs concludes with the end of initial skills training since, according to each of the military services, this is the point in a servicemember’s career at which he or she is considered minimally qualified to perform required tasks within a separated servicemember’s occupational specialty. To calculate the recruiting and training costs associated with replacing servicemembers separated under DOD’s homosexual conduct policy, we collected recruiting and training cost data from the services. To the extent possible, we used variable costs and excluded fixed costs to calculate the services’ costs to recruit and train replacements. Because the services do not use “fixed costs” and “variable costs” as categories in their recruiting and training budgets, we provided each service with a common set of criteria to define these terms and asked each service to determine the fixed and variable components of its cost data. Each of the services tracks and maintains data in different ways, which in some cases affected their ability to provide us with only variable costs. In regard to recruiting cost data, the Army and Marine Corps were able to provide data that according to officials consist of only variable costs. However, according to Navy officials, the Navy was not able to fully disaggregate fixed and variable costs, and so our Navy recruiting calculations include some fixed costs. The Army was not able to provide data on the cost of officer recruiting in time for the data to be included in our analyses. In regard to training cost data, the Navy and Marine Corps were not able to fully disaggregate fixed and variable costs. The Army and Air Force were able to provide training data, according to officials, that consist of only variable costs. To the extent that any data provided by the services contain fixed costs, this would result in an overestimation of calculated costs. 
However, we were not able to determine the exact extent of this overestimation. We reviewed the methodology and data used by the services to develop their cost estimate data for recruiting and training, and determined that they were reliable for our purposes of calculating replacement costs. Recruiting costs: To calculate the recruiting costs associated with replacing servicemembers separated under DOD’s homosexual conduct policy, we collected fiscal year data from the Army, Marine Corps, and Navy for the average cost to recruit active duty enlisted servicemembers and officers. We interviewed service officials who are knowledgeable about their services’ recruiting costs and requested variable cost data for certain tasks involved in the recruiting of servicemembers. The services’ recruiting costs include, but are not limited to, the costs associated with enlistment bonuses; recruit travel; and recruiting support, such as the processing of a recruit’s paperwork. The Army provided data on the average variable cost to recruit one enlisted servicemember in each fiscal year but did not provide data on officer recruiting in time for the data to be included in our analyses. Marine Corps officials explained that the Marine Corps provided data on the average variable cost to recruit enlisted servicemembers, as well as the average variable cost to recruit officers in each fiscal year. According to Navy officials, the Navy was not able to fully disaggregate fixed and variable costs, and so our Navy recruiting calculations include some fixed costs. We multiplied each of these averages by the number of separated servicemembers for each service to calculate a fiscal year total. Finally, we converted these fiscal year totals to fiscal year 2009 dollars and summed our calculations for each fiscal year within each service. These figures represent the total cost of recruiting replacements for separated servicemembers in the Army, Navy, and Marine Corps. 
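For illustration only, the recruiting cost roll-up described above (average cost per recruit multiplied by the number separated, converted to constant fiscal year 2009 dollars, and summed across fiscal years) can be sketched as follows; the costs, counts, and conversion factors are hypothetical, not the services’ actual figures.

```python
# Hypothetical sketch of the recruiting cost roll-up described above.
# Per fiscal year: (average cost to recruit one member, members separated,
# factor converting that year's dollars to constant FY2009 dollars).
recruiting_data = {
    2008: (15_000.0, 100, 1.02),
    2009: (16_000.0, 90, 1.00),
}

def total_recruiting_cost_fy09(data):
    """Sum of (average cost x separations), expressed in constant FY2009 dollars."""
    return sum(avg * n * deflator for avg, n, deflator in data.values())

print(round(total_recruiting_cost_fy09(recruiting_data)))  # → 2970000
```

Converting each fiscal year’s total to constant fiscal year 2009 dollars before summing keeps the multiyear total comparable across years.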
The Air Force provided recruiting costs as part of an overall figure that includes both training and recruiting costs. Using these overall figures, we followed the same approach described above. Training cost: These costs include compensation costs and other costs. Compensation costs: Using service-specific training course lengths and DOD data on military compensation, we calculated the amount of pay and benefits received by replacement servicemembers during training. We interviewed service officials who are knowledgeable about their services’ compensation procedures and requested data on the amounts of pay and benefits received by servicemembers. To calculate the cost of compensation for one enlisted servicemember or officer in the Army, Marine Corps, and Navy, we first multiplied fiscal year weekly compensation data provided by the services by the standard number of weeks spent in each service’s basic training. The Navy provided fiscal year compensation data for the entire length of basic training. For occupational specialty training, we multiplied the weekly compensation rate by the length of initial skills training for each relevant occupation for all three of these services. To address occupations for which data on training length were not available, we used averages for the length of basic and initial skills training for that service’s separated occupations in that fiscal year. Next, we converted all calculations into fiscal year 2009 dollars, and then summed our calculations for each fiscal year within each service. These figures represent the total compensation received during basic training and occupational specialty training for separated servicemembers in each service. The Air Force includes the value of pay and benefits provided to servicemembers in its overall recruiting and training cost estimate. 
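For illustration only, the compensation calculation described above (weekly compensation multiplied by the lengths of basic and initial skills training, with a service average used where an occupation’s course length was unavailable) can be sketched as follows; the weekly rate, course lengths, and occupation codes are hypothetical.

```python
# Hypothetical sketch of the training compensation calculation described
# above; the weekly rate and course lengths are illustrative values.
WEEKLY_COMP = 800.0          # assumed weekly pay and benefits for a trainee
BASIC_TRAINING_WEEKS = 10    # assumed standard basic training length

# Initial skills training length (weeks) by occupation; None = unavailable
skills_weeks = {"11B": 14, "68W": 16, "92G": None}

def compensation_cost(occupation, weeks_by_occ, avg_weeks=15):
    """Pay and benefits for one replacement through basic + initial skills training.

    Where an occupation's course length is unavailable, the service-wide
    average length (avg_weeks) is used instead, as described in the text."""
    weeks = weeks_by_occ.get(occupation) or avg_weeks
    return WEEKLY_COMP * (BASIC_TRAINING_WEEKS + weeks)

print(compensation_cost("11B", skills_weeks))  # → 19200.0
print(compensation_cost("92G", skills_weeks))  # → 20000.0
```

The per-person result would then be multiplied by the number of separated servicemembers in each occupation and converted to fiscal year 2009 dollars, following the same roll-up pattern as the recruiting calculation.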
Other training costs: To calculate other training costs associated with replacing servicemembers separated under DOD’s homosexual conduct policy, we collected fiscal year data from the Army, Marine Corps, and Navy for the costs to complete each service’s basic training program and the initial skills training of the specific occupational specialties contained within each service’s group of separated servicemembers. We interviewed service officials who are knowledgeable about their services’ training procedures and requested cost data for the training of servicemembers. The costs associated with basic training and initial skills training include, but are not limited to, clothing and equipment, supplies, student travel, administration of courses of instruction, and overhead associated with training centers. We determined the length of each service’s basic training and asked each service to provide the average variable cost for basic training in the fiscal year a servicemember was separated. We also asked the services to identify the average length of each initial skills course and provide the average variable cost for an individual servicemember to finish the initial skills training for each relevant occupational specialty. According to data provided by the services, the cost and length of training servicemembers in different occupational specialties can vary widely. By using training cost data that are specific to occupational specialties of the separated servicemembers, we produced the most accurate calculation possible, based on available data. To calculate the cost of training, we multiplied the average basic and occupational training costs by the number of servicemembers who held that occupation in the year of their separation. Based on our requests, the services supplied cost estimate data for the cost of basic training and of training for each relevant occupational specialty for which they had data. 
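For illustration only, the multiplication of occupation-specific average training costs by the number of separated servicemembers holding each occupation can be sketched as follows; the costs and counts are hypothetical, not actual service data.

```python
# Hypothetical sketch of the per-occupation training cost multiplication
# described above; figures are illustrative only.

# Average variable cost of basic + initial skills training per occupation,
# for a single fiscal year
training_cost = {"11B": 9_000.0, "68W": 12_000.0}

# Number of separated servicemembers holding each occupation that fiscal year
separated_counts = {"11B": 3, "68W": 2}

def total_training_cost(costs, counts):
    """Multiply each occupation's average training cost by its separation count."""
    return sum(costs[occ] * n for occ, n in counts.items())

print(total_training_cost(training_cost, separated_counts))  # → 51000.0
```

Using occupation-specific costs, rather than a single service-wide average, matters because the cost and length of initial skills training vary widely across occupational specialties.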
If there were occupations for which data were missing or unavailable, we calculated an overall average training cost for relevant occupations for the service and the fiscal year in which we were missing data. We then used that average as the training cost for the separated servicemembers, and followed the approach described above. Finally, we converted these fiscal year totals to fiscal year 2009 dollars and summed our calculations for each fiscal year within each service. These figures represent the total cost of training replacements for separated servicemembers in each service. The Air Force provided variable training cost data as part of an overall figure that includes both training and recruiting costs. To calculate the administrative cost of carrying out separations, we asked the services to identify the legal and nonlegal processes associated with the separations process. According to the services, the legal processes may include paralegal work, attorneys’ counseling of servicemembers, and board hearings. According to the services, the nonlegal costs may include commanders’ inquiries, pastoral counseling of servicemembers, and the processing of separation paperwork. To collect information on the types of costs the services incur when separating servicemembers, we interviewed and gathered data from service officials who are knowledgeable about their services’ separations procedures and requested cost data for certain tasks involved in the separation of servicemembers and on the personnel involved in carrying them out. Using these data and military pay rates, we calculated administrative costs. While the Air Force, Army, and Marine Corps provided us with this information, the Navy did not provide data on the legal and nonlegal processes associated with carrying out separations. 
Navy officials explained that changes in separation processes from fiscal years 2004 through 2009 prevented them from providing data on the personnel involved in carrying out key tasks in time for the data to be included in our analyses. Because the Navy did not provide data on administrative costs, our calculation of these costs is an underestimation of DOD’s likely total administrative costs. For legal and nonlegal administrative costs, we asked the Air Force, Army, and Marine Corps to provide a list of the tasks carried out during separation of a servicemember under DOD’s homosexual conduct policy, identify the positions of officials involved in carrying out these tasks, estimate the average amount of time required for each task, and identify the rank and years of service of the type of official who would typically carry out the task. With this information, we multiplied the time it typically takes to complete a task by the hourly pay rate of the official who typically performs the task, using the salary information from DOD’s pay tables for fiscal year 2009, which are in fiscal year 2009 dollars. We repeated this type of calculation for each task on a service’s list of tasks performed during a separation. Next, we summed the cost of each of these tasks to calculate a service’s total per-case administrative cost of processing this type of separation. Finally, we multiplied this cost by the number of separated servicemembers in each fiscal year to calculate each service’s total administrative cost of separating servicemembers under DOD’s homosexual conduct policy. For legal administrative costs, we calculated these costs for the three different types of homosexual conduct cases that a service processes: board cases, nonboard cases, and unsubstantiated cases. 
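For illustration only, the per-case administrative cost calculation described above (the time a task typically takes multiplied by the hourly pay rate of the official who typically performs it, summed across tasks, and multiplied by the number of separations) can be sketched as follows; the tasks, hours, and rates are hypothetical, not actual fiscal year 2009 pay-table values.

```python
# Hypothetical sketch of the administrative cost calculation described
# above; task names, hours, and hourly rates are illustrative only.

# Each task: (name, hours it typically takes, hourly pay rate in FY2009
# dollars of the official who typically performs it)
tasks = [
    ("commander's inquiry", 4.0, 45.0),
    ("attorney counseling", 2.0, 60.0),
    ("separation paperwork", 3.0, 30.0),
]

def per_case_cost(task_list):
    """Sum of (task time x hourly rate) across all separation tasks."""
    return sum(hours * rate for _, hours, rate in task_list)

def total_admin_cost(task_list, separations):
    """Per-case cost multiplied by the number of separated servicemembers."""
    return per_case_cost(task_list) * separations

print(per_case_cost(tasks))         # → 390.0
print(total_admin_cost(tasks, 50))  # → 19500.0
```

In the report’s methodology, a separate task list (and hence a separate per-case cost) would apply to each case type, such as board, nonboard, and unsubstantiated cases, before the per-service totals are summed.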
For nonlegal administrative costs, we calculated costs for the three levels of command at which a service typically processes homosexual conduct separations: company or flight, battalion and above or squadron and above, and outside of the separated servicemember’s chain of command. Finally, we summed each of the three services’ costs to calculate per-service totals for legal and nonlegal administrative costs over the 6-year period of our study. The analyses in this report were current as of November 30, 2010. To calculate DOD’s total cost to replace the 3,664 servicemembers separated under DOD’s homosexual conduct policy, we summed the total recruiting and training costs from each service to produce a single, DOD-wide calculation of the cost to recruit and train replacements for the servicemembers separated from fiscal year 2004 through fiscal year 2009. We added this total to the administrative total to determine the overall total cost to DOD of implementing the homosexual conduct policy during this period. We were unable to determine the extent of the overestimation of replacement costs, the underestimation of the administrative costs, or the resulting net impact on our calculation of the overall total cost. We assessed the reliability of all data provided by DOD and the services for each of our objectives by (1) reviewing existing information about the data and the systems that produced them and (2) interviewing agency officials knowledgeable about the data to determine the steps taken to ensure the accuracy and completeness of the data. 
We assessed the reliability of DMDC’s Active Duty Personnel Transaction Fiscal Year End DADT Files, Active Duty Personnel Master End Strength Fiscal Year End Files and Monthly Files, and Active Duty Language Fiscal Year End Files by (1) performing electronic testing of the required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. In addition, we assessed the reliability of the services’ cost data by (1) reviewing existing information about the data and the systems that produced them and (2) interviewing agency officials knowledgeable about the data. We determined that the data sets were sufficiently reliable for the purposes of presenting separations, personnel information for separated servicemembers, and costs associated with administering the homosexual conduct policy. We conducted this performance audit from January 2010 through January 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As shown in table 13, DOD separated a total of approximately 1.2 million servicemembers for all reasons, including voluntary reasons, from fiscal years 2004 through 2009. Of the approximately 1.2 million servicemembers separated by the services, the services granted “honorable” separations to about 74 percent, “general” separations to about 6 percent, “under other than honorable” separations to about 5 percent, “dishonorable dismissal” separations to less than 1 percent, “bad conduct” separations to about 1 percent, and “uncharacterized” separations to about 10 percent. 
About 4 percent of the separations were classified “unknown or not applicable.” Tables 14 and 15 show separations for known reasons for enlisted servicemembers and officers, by number of separations and per fiscal year. According to our analysis of DMDC data, 577 Reserve and National Guard servicemembers were separated under the homosexual conduct policy from fiscal years 2004 through 2009. (See table 16.) The Reserve and National Guard separations represent about 14 percent of the total population of active, reserve, and guard servicemembers separated under the homosexual conduct policy. Table 17 lists the most common occupations held by separated servicemembers, by service, from fiscal years 2004 through 2009. Approximately 472 servicemembers (33 percent) separated under the homosexual conduct policy who held skills in critical occupations were separated after 2 years or more of service, as shown in table 18. Approximately 11 servicemembers (48 percent) separated under the homosexual conduct policy who held skills in important foreign languages were separated after 2 years or more of service, as shown in table 19. In addition to the contact named above, key contributors to this report were Elizabeth C. McNally, Assistant Director; Clarine S. Allen; Christina E. Bruff; Grace A. Coleman; K. Nicole Harms; Grant M. Mallie; Charles W. Perdue; Steven R. Putansu; Terry L. Richardson; Amie M. Steele; Christopher W. Turner; Jack B. Wang; Erik S. Wilkins-McKee; and Kimberly Y. Young. Military Training: DOD Needs a Strategic Plan and Better Inventory and Requirements Data to Guide Development of Language Skills and Regional Proficiency. GAO-09-568. Washington, D.C.: June 19, 2009. Defense Management: Preliminary Observations on DOD’s Plans for Developing Language and Cultural Awareness Capabilities. GAO-09-176R. Washington, D.C.: November 25, 2008. 
Military Personnel: Evaluation Methods Linked to Anticipated Outcomes Needed to Inform Decisions on Army Recruitment Incentives. GAO-08-1037R. Washington, D.C.: September 18, 2008. Military Personnel: Strategic Plan Needed to Address Army’s Emerging Officer Accession and Retention Challenges. GAO-07-224. Washington, D.C.: January 19, 2007. Differing Scope and Methodology in GAO and University of California Reports Account for Variations in Cost Estimates for Homosexual Conduct Policy. GAO-06-909R. Washington, D.C.: July 13, 2006. Military Personnel: Preliminary Observations on Recruiting and Retention Issues within the U.S. Armed Forces. GAO-05-419T. Washington, D.C.: March 16, 2005. Military Personnel: Financial Costs and Loss of Critical Skills Due to DOD’s Homosexual Conduct Policy Cannot Be Completely Estimated. GAO-05-299. Washington, D.C.: February 23, 2005. Military Personnel: Observations Related to Reserve Compensation, Selective Reenlistment Bonuses, and Mail Delivery to Deployed Troops. GAO-04-582T. Washington, D.C.: March 24, 2004. Military Personnel: DOD Needs More Effective Controls to Better Assess the Progress of the Selective Reenlistment Bonus Program. GAO-04-86. Washington, D.C.: November 13, 2003. Military Personnel: Management and Oversight of Selective Reenlistment Bonus Program Needs Improvement. GAO-03-149. Washington, D.C.: November 25, 2002. Foreign Languages: Human Capital Approach Needed to Correct Staffing and Proficiency Shortfalls. GAO-02-375. Washington, D.C.: January 31, 2002. Military Personnel: Perceptions of Retention-Critical Personnel Are Similar to Those of Other Enlisted Personnel. GAO-01-785. Washington, D.C.: June 28, 2001. Military Attrition: Better Data, Coupled with Policy Changes, Could Help the Services Reduce Early Separations. GAO/NSIAD-98-213. Washington, D.C.: September 15, 1998. DOD Training: Many DOD Linguists Do Not Meet Minimum Proficiency Standards. GAO/NSIAD-94-191. Washington, D.C.: July 12, 1994. 
Homosexuals in the Military: Policies and Practices of Foreign Countries. GAO/NSIAD-93-215. Washington, D.C.: June 25, 1993. Defense Force Management: DOD’s Policy on Homosexuality. GAO/NSIAD-92-98. Washington, D.C.: June 12, 1992. Defense Force Management: Statistics Related to DOD’s Policy on Homosexuality. GAO/NSIAD-92-98S. Washington, D.C.: June 12, 1992.
From fiscal years 1994 through 2009, the Department of Defense (DOD) separated over 13,000 active military servicemembers under its homosexual conduct policy. These separations represent about 0.37 percent of the 3.6 million members separated for all reasons, including expiration of terms of service and retirement. In 2005, GAO reported on the number of separated servicemembers under DOD's homosexual conduct policy who held critical skills and the costs associated with administering the policy from fiscal years 1994 through 2003. GAO was asked to examine data from fiscal years 2004 through 2009 to determine (1) the extent to which the policy has resulted in the separation of servicemembers with skills in critical occupations and important foreign languages and (2) the services' costs for certain activities associated with administering the policy. GAO obtained and analyzed DOD personnel and cost data; examined DOD regulations and policy documents; and conducted interviews with officials from the Office of the Under Secretary of Defense for Personnel and Readiness, the Defense Manpower Data Center, and each of the military services. GAO provided a draft of this report to DOD for review and comment. DOD did not have any comments on the report. According to GAO's analysis of Defense Manpower Data Center data, 3,664 servicemembers were separated under DOD's homosexual conduct policy from fiscal years 2004 through 2009. Of the 3,664 separated servicemembers, 1,458 held a critical occupation or an important foreign language skill as determined by GAO and the services. More specifically, 1,442 (39 percent) of the servicemembers separated under the policy held critical occupations, such as infantryman and security forces, while 23 (less than 1 percent) of the servicemembers held skills in an important foreign language, such as Arabic or Spanish. Seven separated servicemembers held both a critical occupation and an important foreign language skill. 
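The occupation and language counts overlap, since seven servicemembers held both skills. A quick inclusion-exclusion check, using only the figures reported above, confirms the 1,458 total and the reported percentages:

```python
# Figures as reported from GAO's analysis of DMDC data, FY 2004-2009.
total_separations = 3664
critical_occupation = 1442   # e.g., infantryman, security forces
foreign_language = 23        # e.g., Arabic, Spanish
both = 7                     # held a critical occupation AND a language skill

# Inclusion-exclusion: servicemembers with at least one critical skill.
either = critical_occupation + foreign_language - both
print(either)  # 1458, matching the reported total

# Shares of all separations under the policy.
print(round(critical_occupation / total_separations * 100))   # 39 (percent)
print(round(foreign_language / total_separations * 100, 1))   # 0.6, i.e., "less than 1 percent"
```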
However, the number of separated servicemembers with critical occupations could be an underestimation because of a number of factors. For example, the Air Force provided the occupations eligible for enlistment bonuses from fiscal years 2006 through 2009, but could not provide this information for fiscal years 2004 and 2005 because the Air Force's data were incomplete. Using available DOD cost data, GAO calculated that it cost DOD about $193.3 million ($52,800 per separation) in constant fiscal year 2009 dollars to separate and replace the 3,664 servicemembers separated under the homosexual conduct policy. This $193.3 million comprises $185.6 million in replacement costs and $7.7 million in administrative costs. The cost to recruit and train replacements amounted to about $185.6 million. In calculating these costs, GAO included variable costs, such as recruiting bonuses, and excluded fixed costs, such as salaries and buildings, to the extent possible because according to service officials there would likely be no significant increase in fixed costs when recruiting and training a relatively small number of replacement personnel. Each of the services tracks and maintains data in different ways, which in some cases affected their ability to provide GAO with only variable costs. For example, while the Army and Air Force could disaggregate variable and fixed recruiting and training costs, the Navy could not disaggregate variable and fixed recruiting and training costs, and the Marine Corps could not disaggregate variable and fixed training costs. To the extent that recruiting and training cost data provided by the services contain fixed costs, this is an overestimation of replacement costs. Administrative costs amounted to about $7.7 million and include costs associated with certain legal activities, such as board hearings, and nonlegal activities, such as processing separation paperwork. 
The Air Force, Army, and Marine Corps provided GAO with administrative cost estimates; however, Navy officials explained that changes in separation processes from fiscal years 2004 through 2009 prevented them from providing an accurate administrative cost estimate in time for the data to be included in GAO's analyses. Because the Navy did not provide these data, GAO's calculation is an underestimation of DOD's likely total administrative costs. Because of data limitations, GAO was unable to determine the extent of the overestimation of the replacement costs, the underestimation of the administrative costs, or the resulting net impact on GAO's total calculations.
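The aggregate cost figures above are internally consistent. As a sketch in constant fiscal year 2009 dollars, using only the totals reported:

```python
# Totals as reported, in constant fiscal year 2009 dollars.
replacement_costs = 185.6e6   # recruiting and training replacements
administrative_costs = 7.7e6  # legal and nonlegal separation activities
separations = 3664

total = replacement_costs + administrative_costs
print(round(total / 1e6, 1))                # 193.3 (million dollars)
print(int(round(total / separations, -2)))  # 52800, the "about $52,800 per separation"
```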
Helium is an inert element that occurs naturally in gaseous form and has a variety of uses (see table 1). Helium’s many uses arise from its unique physical and chemical characteristics. For example, helium has the lowest melting and boiling points of any element, and, as the second lightest element, gaseous helium is much lighter than air. Certain natural gas fields contain a relatively large amount of naturally occurring helium, which can be recovered as a secondary product. The helium is separated from the natural gas and stored in a concentrated form that is referred to as crude helium because it has yet to go through the final refining process. The federal government has been extensively involved in the production, storage, and use of helium since the early part of the 20th century. The federal government and private sector cooperatively produced helium before 1925, specifically for military uses. The Helium Act of 1925, as amended, assigned responsibility for producing helium for federal users to the Department of the Interior’s Bureau of Mines. The act provided that funds from helium sales be used to finance the program. From 1937 until 1960, the Bureau of Mines was the sole producer of helium. The 1925 act, as amended, also established a revolving fund known as the helium production fund for the program. Such revolving funds are used to finance a continuing cycle of government-owned business-type operations in which outlays generate receipts that are available for continuing operations. In the federal budget, this fund is referred to as the Helium Fund and it is used to account for the program’s revenues and expenses. The Helium Act Amendments of 1960 stipulated that the price of federal helium cover all of the helium program’s costs, including interest on the program’s debt. 
The 1960 act required the Secretary of the Interior to determine a value for net capital and retained earnings and establish this value as debt in the Helium Fund, and to add subsequent program borrowings to that debt. The program’s borrowings were authorized by subsequent appropriations acts and recorded as outlays in the federal budget in the years in which they were expended. In addition, the interest was added to the debt in the Helium Fund. However, the interest is simply a paper transaction, not a government outlay. The Bureau of Mines determined that the value of the program’s net capital and retained earnings was about $40 million in 1960. Subsequent borrowings from the U.S. Treasury totaling about $252 million were used to purchase helium for storage. By September 30, 1991, the debt had grown to about $1.3 billion, of which more than $1 billion consisted of interest because the interest accrued faster than the program could repay the debt. The government’s reserve of crude helium is stored in the ground in an area of a natural gas field that has a naturally occurring underground structural dome near Amarillo, Texas. The purity of the stored crude helium diminishes (degrades) over time as it mixes with the natural gas that is present in the storage area. Moreover, when the helium is extracted at an excessive rate, this degradation accelerates because the natural gas surrounding the helium is pulled toward the extraction wells faster than the helium. This causes the helium to mix with the natural gas more rapidly. As a result, larger volumes of the mixture of natural gas and helium must be extracted to obtain the needed helium. In addition to the government’s reserve of crude helium, private companies that are connected to BLM’s pipeline and pay a storage fee are also able to store and retrieve their own private crude helium reserves from the same storage area. 
As directed by the Congress, the National Academies’ National Research Council reviewed the helium program and released a report in 2000 that evaluated changes made in the program, effects of these changes on the program, and several scenarios for managing the federal helium reserve in the future. Because of subsequent changes in price and availability of helium, in 2008, the National Research Council convened a committee to determine if the current implementation of the helium program was having an adverse effect on U.S. scientific, technical, biomedical, and national security users of helium. The committee reported on these effects in early 2010 and concluded that the current implementation of the program has adversely affected critical users of helium and was not in the best interest of the U.S. taxpayers or the country. Our November 1991 and October 1992 reports included findings and recommendations on the helium program’s debt, the pricing of crude helium, the purity of helium in storage, and three alternatives for meeting federal needs for helium. In October 1992, we reported that the Helium Fund debt had grown to about $1.3 billion, as of September 30, 1991. Section 6(c) of the Helium Act Amendments of 1960 stipulated that (1) the price of federal helium should cover all of the helium program’s costs, including interest on the program’s debt; and (2) the debt should be repaid within 25 years, unless the Secretary of the Interior determines that the deadline should be extended by not more than 10 years. With the 10-year extension, the deadline for paying off the debt and accumulated interest was September 13, 1995. In 1992, we estimated that, in order for the Bureau of Mines to repay the debt by the 1995 deadline, it would have to charge federal agencies that had major helium requirements more than $3,000 per thousand cubic feet, compared with the 1992 price of $55. 
These agencies, which were required under section 6(a) of the 1960 act to purchase helium from the Bureau of Mines, would have had no choice but to pay a higher price for helium. We concluded that this would have no net effect on the overall federal budget if those agencies received additional appropriations to pay for helium at a higher price because the appropriations would offset the increased revenues to the helium program. Because conditions affecting the Bureau of Mines’ helium program had changed since the Helium Act Amendments of 1960, one of the recommendations in our October 1992 report was that the Congress should consider canceling the debt in the Helium Fund. This is because we concluded at the time that it was no longer realistic to expect the agency to repay the debt by the statutory deadline of 1995, and canceling the debt would not adversely affect the federal budget as the debt consisted of outlays that had already been appropriated and interest that was a paper transaction. We reported that canceling the Helium Fund debt, however, would likely allow the Bureau of Mines to undercut private industry’s refined helium prices, thus adversely affecting the private helium-refining industry. The Helium Act Amendments of 1960 also were intended to foster and encourage a private helium industry. In our October 1992 report, we found that the helium price set by the Bureau of Mines had an effect on the growth of the private helium industry. After the 1960 act was passed, the Bureau of Mines’ refined helium price for federal users rose from $15.50 per thousand cubic feet to $35 in 1961 to cover the anticipated costs of conserving helium, which principally included purchasing helium for storage. This 126-percent increase in the federal refined helium price caused the private industry to believe that it could economically produce and sell refined helium. 
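The percent increase cited above is straightforward to verify from the two prices reported:

```python
old_price = 15.50  # dollars per thousand cubic feet, before the 1961 increase
new_price = 35.00  # 1961 price set to cover anticipated conservation costs

increase_pct = (new_price - old_price) / old_price * 100
print(round(increase_pct))  # 126 (percent), as reported
```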
While private-sector prices fluctuated from a low of $21 in 1970, they gradually increased to $37.50 by 1983, which matched the Bureau of Mines’ 1982 price. Over this period, the Bureau of Mines’ price for helium continued to be higher than or equal to the private-sector price, and from 1983 to 1991 it appeared to act as a ceiling for private-sector prices. In 1991, the federal price increased to $55, and private-sector prices gradually increased to about $45. These price trends led us to conclude in 1992 that once a private helium refining industry had developed, it was able to successfully compete with the Bureau of Mines’ program. However, in our October 1992 report, we also noted that if the Congress decided to cancel the Helium Fund debt, then this would affect how the Bureau of Mines sets its helium prices and would likely allow it to undercut private-sector prices. Therefore, we noted that if the Congress decided that fostering the private helium industry was still an objective of the Helium Program, then additional actions would be needed. One alternative we identified was to require the Bureau of Mines to price its helium comparably to private-sector prices by ascertaining private-sector prices and using a comparable price or by setting a price that covered the Bureau of Mines’ capital costs, operating expenses, estimated costs of a normal level of inventory, and an industry-like rate of return on its investment. A second alternative was to eliminate competition by requiring that all federal needs be met by the Bureau of Mines but prohibiting the federal helium program from selling helium to nonfederal customers. In our November 1991 report on helium purity, we found that the Bureau of Mines was not restricting the rate at which helium was being extracted from the helium reserve, causing the purity of the crude helium to degrade faster than would otherwise occur. 
We noted that because of this accelerated degradation, the Bureau of Mines was incurring additional costs to extract and refine federal helium. While some mixing with natural gas is inevitable, according to a study by the Bureau of Mines in 1989, the mixing should be minimized so that the crude helium’s purity can be maintained at as high a level as possible in order to avoid higher future costs of extracting and refining federal helium. In our 1991 report, we reported that, according to Bureau of Mines’ engineers, the accelerated degradation could be avoided by restricting total extractions to 3 million cubic feet of helium per day. At the Bureau of Mines’ request, an outside petroleum engineering consulting firm reviewed the Bureau of Mines’ engineering, geologic, and other studies and agreed that an extraction rate restriction of 3 million cubic feet per day was needed to protect the purity of the stored crude helium. In 1989, the Bureau of Mines decided to restrict total daily extractions to 3 million cubic feet but later rescinded that restriction after an industry association expressed concern to the Director of the Bureau of Mines that the restriction might adversely affect private companies’ ability to obtain crude helium to meet their needs. At the time of our 1991 review, the Director told us that he had not reviewed the Bureau of Mines’ study when making the decision to rescind the restriction. Bureau of Mines’ engineers estimated that if the helium continued to degrade at the rate observed at that time, the Bureau of Mines would incur additional costs of as much as $23.3 million in 1991 dollars to extract and refine federal helium from the helium reserve through the year 2050. In 1991, we recommended that the Bureau of Mines determine if setting an acceptable extraction rate was warranted and, if so, to specify that rate. 
In addition, we noted that if an extraction rate was specified, the Bureau of Mines should either restrict private company extractions or impose a charge on private companies that store helium in the helium reserve when their extractions exceed the established acceptable rate. In our October 1992 report, we evaluated three alternatives for meeting federal needs for helium: (1) continue the Bureau of Mines’ existing program, (2) require that all federal needs be supplied by private industry, and (3) allow all federal agencies to choose to purchase helium from the Bureau of Mines or private industry. These three alternatives had the potential to affect the objectives of the Helium Act Amendments of 1960, the program’s debt, the federal budget, and the total cost of supplying helium to the U.S. economy differently. For example, in 1992, we reported that the growth of a private industry capable of meeting federal needs created a competitive market where the federal helium prices directly affected the private industry. In this environment, if the Bureau of Mines priced helium to repay the Helium Fund debt by 1995, it would need to charge an extremely high price, which would likely drive the Bureau of Mines out of the helium business. On the other hand, if the debt had been repaid or cancelled, the federal price likely would be lower than private prices, which could have an adverse effect on the private helium refining industry. We concluded that the choice among these and other possible alternatives was ultimately a public policy decision that should consider many issues. We recommended that the Congress reassess the act’s objectives in order to decide how to meet current and foreseeable federal needs for helium. Since our reports in the early 1990s, two key developments—the Helium Privatization Act of 1996 and the construction of the Cliffside Helium Enrichment Unit in 2003—have caused considerable changes to the federal helium program. 
These two developments addressed or altered the areas that we had raised concerns about in the early 1990s. Specifically, the Helium Privatization Act of 1996 affected helium debt and pricing, and it reset the program’s objectives. The Cliffside Helium Enrichment Unit addressed the issue of helium purity in storage. After our reports in the early 1990s, the Congress passed the Helium Privatization Act of 1996, which significantly changed the objectives and functions of Interior’s helium program. For example, the 1996 act made the following key changes: Interior was required to close all government-owned refined helium production facilities and to terminate the marketing of refined helium within 18 months of enactment (50 U.S.C. § 167b(b)); the helium program’s debt was frozen as of October 1, 1995 (50 U.S.C. § 167d(c)); Interior was required to offer for sale all but 600 million cubic feet of the crude helium in storage on a straight-line basis (that is, in approximately equal annual amounts) by January 1, 2015 (50 U.S.C. § 167f(a)(1)); Interior was required to set sale prices to cover the crude helium reserve’s operating costs and to produce an amount sufficient to reimburse the federal government for the amounts it had expended to purchase the stored helium. The price at which Interior sells crude helium was required to be equal to or greater than a formula that incorporates the amount of debt to be repaid divided by the volume of crude helium remaining in storage, with a Consumer Price Index adjustment (50 U.S.C. §§ 167d(c), 167f(a)(3)). Furthermore, when the debt is fully paid off, the revolving Helium Fund shall be terminated (50 U.S.C. § 167d(e)(2)(B)); Interior was directed to maintain its role in the helium storage business (50 U.S.C. § 167b(a)); and the act established a modified “in-kind” program to meet federal needs for helium. 
Rather than purchasing refined helium directly from Interior, federal agencies were required to purchase their major helium requirements from persons who have entered into enforceable contracts to purchase an equivalent amount of crude helium from Interior (50 U.S.C. § 167d(a)). These changes affected the federal helium program in various ways. For example, because the 1996 act effectively froze the debt at $1.37 billion and interest no longer accrued, BLM has been able to pay off a large portion of its debt. As of the end of fiscal year 2010, BLM expects to have paid off 64 percent of the debt; it expects to pay off the entire debt around 2015 (see fig. 1). In addition, since the 1996 act required a specific method for pricing crude helium, the initial minimum BLM selling price for crude helium after the act was passed was almost double the price for private crude helium at that time. However, after BLM started to sell its crude helium according to the method specified in the act, the market price for crude and refined helium began to change. According to the National Research Council, the private sector began using the BLM crude price as a benchmark for establishing its price, and, as a result, privately sourced crude helium prices increased and now they meet or exceed BLM’s price. Increases in the price of crude helium have also led to increases in the price of refined helium (see fig. 2). Refined helium prices have more than doubled from 2002 through 2008 in response to demand trends. One factor in recent price increases was a supply disruption caused by weather-related plant closures. Prices increased around 2007 due to the decline in production capacity. As part of the resetting of the helium program’s objectives, the 1996 act established a revised approach for meeting federal needs for helium. In 1998, BLM began engaging in in-kind sales to federal agencies. 
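The statutory price floor described above can be sketched roughly as follows. This is a simplified illustration, not BLM's actual computation: the statute's Consumer Price Index adjustment and exact terms are abstracted into a single factor, and all input figures are hypothetical.

```python
def minimum_crude_price(remaining_debt, remaining_volume, cpi_factor):
    """Sketch of the 1996 act's price floor: the debt still to be repaid,
    spread over the crude helium remaining in storage, adjusted by a
    Consumer Price Index factor. Inputs here are hypothetical."""
    return remaining_debt / remaining_volume * cpi_factor

# Hypothetical example: $500 million of debt outstanding and 10 billion
# cubic feet (10 million thousand-cubic-foot units) left in storage.
price = minimum_crude_price(remaining_debt=500e6,
                            remaining_volume=10e6,
                            cpi_factor=1.10)
print(round(price, 2))  # 55.0 dollars per thousand cubic feet
```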
The in-kind regulations established procedures for BLM to sell crude helium to authorized helium supply companies and required federal agency buyers to purchase helium from these approved suppliers. Since the in-kind program started, the sales to federal agencies have fluctuated, primarily due to the National Aeronautics and Space Administration’s (NASA) unique requirement for large volumes of helium on a sporadic basis. Total federal in-kind sales for fiscal year 2009 were 175.67 million cubic feet (see fig. 3). Since the act was passed, demand for helium has changed over time (see fig. 4). Total domestic demand has generally decreased since 2001. The vast majority of domestic sales are made to private industries, with federal agencies making up about 10 percent of the sales. On the other hand, total foreign demand has consistently increased, and the amount of helium exported was approximately equal to the amount of helium removed from storage each year from 2000 to 2007. In 2008, the amount of helium exported exceeded the amount of helium removed from storage. The second key development, which has affected the helium purity issue that we reported on in the early 1990s, is the construction and operation of the Cliffside Helium Enrichment Unit. In response to degrading helium supplies, in 2003, Cliffside Refiners Limited Partnership—a consortium of private-sector refiners—designed and constructed an enrichment unit to produce crude helium of sufficient concentration and pressure for further refining. According to BLM officials, the total cost of building the enrichment unit was approximately $22 million and was paid for by the Cliffside Refiners Limited Partnership. BLM, in partnership with the Cliffside Refiners Limited Partnership, operates the unit. At full capacity, the enrichment unit supplies more than 6 million cubic feet per day or 2.1 billion cubic feet per year of crude helium. 
The crude helium that is produced from this process is either sold or retained in storage, depending upon demand. As part of the operation, pipeline-quality residual natural gas is also made available for sale. In addition to the proceeds from the helium sales, BLM uses proceeds from the natural gas sales to fund the Cliffside helium operations and the remaining revenues are returned to the U.S. Treasury. According to BLM officials, the enrichment unit has allowed BLM to better manage the drawdown and purity of the helium in storage because it is able to control the wells and the helium content of the feed. Without the enrichment unit, BLM would have to produce from high helium wells first to meet purity requirements and that would have a detrimental effect on the purity of later production, according to these officials. Changes in helium prices, production, and demand have generated concerns about the future availability of helium for the federal government and other critical purposes. The Helium Privatization Act of 1996 does not provide a specific direction for the helium program past 2015—less than 5 years away. As a result of these factors, there is uncertainty about the program’s direction after 2015. Specifically: How should the helium remaining in storage after 2015 be used? The Helium Privatization Act of 1996 required BLM to offer for sale substantially all of the helium in storage by January 1, 2015. While the required amounts have been offered for sale, only 68 percent of the amounts offered for sale have actually been sold (see table 2). If the past sales trends continue, BLM will still have significantly more crude helium in storage than the 600 million cubic feet target established in the 1996 act. In addition, the demand for helium has changed over time, with foreign demand outpacing domestic demand. 
According to the recent report by the National Academies’ National Research Council, the United States could become a net importer of helium within the next 10 to 15 years, and the principal new sources of helium will be in the Middle East and Russia. Given these circumstances, the National Academies’ report recommended that the Congress may want to reevaluate how the domestic crude helium reserve is used or conserved. It is uncertain at this point how the helium in storage after 2015 will be used. How will the helium program be funded after 2015? Regardless of whether BLM is directed to continue selling off the crude helium in storage after 2015 or conserve it, there will almost certainly continue to be some form of a helium program after 2015. However, if the helium debt is paid off in 2015 as currently projected and the revolving helium fund is terminated, it is not clear how the operations of the helium program will be paid for. Currently the helium program does not receive any appropriated funds for its operations. The revenues generated by the program go into the Helium Fund and the program has access to those funds to pay for its day-to-day operations. It is uncertain at this point how the helium program’s operations will be funded after 2015. At what price should BLM sell its crude helium? Since the Helium Privatization Act of 1996 was passed, BLM has set the price for federal crude helium at the minimum price required by the act. However, because federal crude helium reserves provide a major supply of crude helium, we expect BLM’s prices will continue to affect private industry market prices for crude and refined helium. In addition, in recent years, the helium market has been influenced by other market forces as well as supply disruptions that have resulted in price increases. For example, in 2006, failure of a major crude helium enrichment unit process vessel led to unscheduled outages and eventually to a major plant shutdown. 
When BLM first set its price after the 1996 act, its price was estimated to be significantly higher than the market price, but now the reverse is true: BLM’s price is estimated to be at or below the market price. On one hand, BLM could consider raising its price to ensure that the federal government is getting a fair market return on the sales of its assets. On the other hand, raising the price could potentially further erode sales. Furthermore, the 1996 act, like the Helium Act Amendments of 1960 before it, tied the price to the program’s operating expenses and debt. If the debt is paid off in 2015 as projected, the debt will no longer be a factor in setting helium prices. BLM officials told us that the 1996 act sets a minimum selling price and that the Secretary of the Interior has the discretion to set a higher price. BLM is planning to reevaluate its selling price, according to agency officials. As a result, it is uncertain how BLM will price its crude helium in the future. In conclusion, Mr. Chairman, there have been a number of changes in the market for helium since the Congress passed the Helium Privatization Act of 1996. As the end point for the actions required under the act approaches in the next 5 years, the Congress may need to address some unresolved issues, such as how to use the remaining helium in storage, how the helium program will operate once the Helium Fund expires in 2015, and how to set the price for the helium owned by the federal government. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact Anu K. Mittal at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Jeffery D. 
Malcolm and Barbara Patterson, Assistant Directors; Carol Bray; Meredith Graves; and Caryn Kuebler. Also contributing to this testimony were Michele Fejfar, Jonathan Kucskar, and Jeremy Sebest. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government has been extensively involved in the production, storage, and use of helium since the early part of the 20th century. The federal helium program is currently managed by the Department of the Interior's Bureau of Land Management (BLM). During the 1960s and early 1970s, Interior purchased about 34 billion cubic feet of crude helium for conservation purposes and to meet federal helium needs, such as for the space program and scientific research. Crude helium is a gas mixture that is 50 to 85 percent helium. While some of the helium was used to meet federal needs, most of it was retained in storage. The funds used to purchase the helium became a debt owed by the program. GAO reported on the management of the helium program in the 1990s (GAO/RCED-92-44 and GAO/RCED-93-1). Since GAO's reviews of the program in the 1990s, key changes have affected the federal helium program and a recent report by the National Academy of Sciences concluded that it is time to reassess the program. This testimony discusses (1) GAO's findings and recommendations in the early 1990s, (2) key changes that have occurred since the early 1990s, and (3) some of the issues facing the helium program in the near future. To address these issues, GAO reviewed prior reports, applicable laws and regulations, National Academy of Sciences' reports, and BLM data. GAO is not making any new recommendations. In 1991 and 1992, GAO reported on various aspects of the federal helium program including the helium debt, pricing, purity, and alternatives for meeting federal helium needs, and made recommendations to the Congress. For example, in 1992 GAO recommended that the Congress cancel the helium program's debt. As of September 1991, the debt had grown to about $1.3 billion, over $1 billion of which was interest that had accrued on the original debt principal of about $290 million. 
The debt was also a factor in setting the price of federal helium because the Helium Act Amendments of 1960 stipulated that the price of federal helium cover all program costs, including interest on the debt. In addition, in 1991, GAO recommended that Interior take action to preserve the purity of the helium in storage. GAO found that the unrestricted extraction of helium from the reserve was causing the purity of the crude helium to degrade faster than would otherwise occur, which in turn had increased the program's operating costs. In 1992, GAO also recommended that the Congress reassess the conservation objectives of the helium program and consider other alternatives to meet federal helium needs. Since GAO's reports in the early 1990s, two key developments--the Helium Privatization Act of 1996 and the construction of the Cliffside Helium Enrichment Unit in 2003--have caused considerable changes to the helium program and addressed or altered GAO's prior concerns. Specifically, the 1996 act froze the program's debt and, as a result, over half the debt has been paid off and the remainder should be paid off by 2015. The 1996 act also required a specific method for pricing helium. This, along with other changes in the supply of and demand for helium, has resulted in BLM's price being at or below the market price. Lastly, in resetting the program's objectives, the act directed Interior to stop refining helium and established a modified in-kind approach for meeting federal helium needs. Agencies must purchase helium from refiners, who then purchase an equivalent amount of crude helium from BLM. The Cliffside Helium Enrichment Unit has addressed concerns about helium purity by enriching the crude helium through extracting excess natural gas.
Some of the uncertainties facing the program include: (1) How should the helium owned by the federal government be used? BLM's effort to sell off the helium in storage is going slowly and will not be completed by 2015, and some believe that the United States could become a net importer of helium within the next 10 to 15 years. (2) How will the helium program be funded after 2015? If the helium program's debt is paid off by 2015, the revolving Helium Fund that is used to pay for the program's day-to-day operations will be terminated. (3) At what price should BLM sell its helium? In the past, the debt has been a factor in the price, which has been above the market price; after 2015 the debt will be paid off, and the current price is at or below market.
DOD has long been concerned about the quality of its nearly $1.5 billion annual program to transport, store, and manage the household goods and unaccompanied baggage of its servicemembers and employees with permanent change of station and other types of orders. Some of the concerns related to poor service from its movers, excessive incidence of loss or damage to servicemembers’ property, and high claims costs to the government. All these problems contributed to a poor quality of service for persons using the system. Consequently, DOD proposed reengineering the personal property program as a quality-of-life initiative. Its primary goals were to substantially improve the quality of service its military personnel and their families received from DOD’s contracted movers, putting it on par with corporate customer standards; to simplify the total process, from arranging the moves to settling the claims; and to base the program on business processes characteristic of world-class customers and suppliers. Generally, DOD must acquire the goods and services it needs through the competitive acquisition system consisting of the statutes in chapter 137 of title 10 of the United States Code and the primary implementing regulations contained in the Federal Acquisition Regulation (FAR). However, pursuant to 49 U.S.C. 13712, the acquisition of transportation services of a common carrier through the use of a government bill of lading is not subject to the acquisition laws. Instead, these services have been acquired based upon published rates in accordance with procedures contained in DOD transportation regulations. A key feature of MTMC’s proposal to reengineer the personal property program is to simplify the process of acquiring transportation services and to bring it in line with the government’s acquisition of most other services by using multiple award, fixed-price, indefinite delivery/indefinite quantity-type contracts awarded under the competitive acquisition system.
MTMC’s proposed contracts would cover statewide services and provide for a base and several option years. The solicitations for the contracts would be open to all responsible offerors, including carriers, forwarders, and relocation companies. Awardees would be selected in accordance with solicitation evaluation factors, which would include such elements as technical or operational requirements, past performance, subcontract plan, and price. To achieve these goals and to comply with congressional direction, MTMC is proposing to begin a pilot test. The plan is to begin the test in early 1997 and run it for at least a year. Fifty percent of the DOD household goods and unaccompanied baggage moving from the test area—North Carolina, South Carolina, and Florida—to all other states, except Alaska and Hawaii, and to Europe, would be included in the test. The other 50 percent would continue moving in the existing program. Industry objected to MTMC’s proposal, particularly because of what it perceived as the negative impact the proposal would have on small business moving companies. It offered for consideration an alternative plan having two distinct programs, one for handling domestic shipments and another for handling international shipments. The industry proposal would not be based on the competitive acquisition system but would use a government bill of lading to acquire the services in accordance with procedures contained in DOD and the General Services Administration transportation regulations. As a result of the initial joint DOD/industry working group session, DOD and industry agreed to the following goals for the reengineered personal property program:

1. Provide quality service.
2. Improve on-time pickup.
3. Improve on-time delivery.
4. Achieve high customer satisfaction in relationship to the entire move process.
5. Adopt corporate business processes that lead to world-class customer service.
6. Lower loss/damage and lower claims frequency and claims averages.
7. Simplify the system, including reducing administrative workload.
8. Ensure capacity to meet DOD’s needs for quality moves.
9. Provide opportunity for small businesses offering quality service to compete for DOD business as a prime contractor.
10. Provide best value moving services to the government.

Our assessment of the extent to which each proposal met the goals was necessarily limited by the lack of precise definitions of each goal and the way to achieve it. Moreover, the proposals were not written in a way that specifically addressed how each would achieve the stated goals. We therefore had to interpret the goals based on our observations and review of available material, and we assessed each proposal’s ability to meet those goals using our knowledge of the existing personal property program, our understanding of the proposals, associated documents, attendance at all of the working group meetings, a review of the transcripts of the meetings, and our prior studies. DOD’s nearly $1.5 billion annual personal property program—household goods and unaccompanied baggage—is run centrally by the headquarters office of MTMC but administered locally by about 200 military and DOD transportation offices around the world. DOD relies almost exclusively on commercial movers, both directly with more than 1,100 moving van companies (carriers) and forwarders and indirectly with thousands more agents and owner-operator truckers working for the carriers and forwarders. The program consists of three major processes: carrier/forwarder approval, rate solicitation, and traffic distribution. To participate in the program, a carrier or forwarder must first be approved by MTMC. This requires proof, or certification, that the carrier or forwarder has the requisite state or federal transportation operating authority and agrees to abide by the terms and conditions in MTMC’s tender of service.
The carrier or forwarder must also be approved by the local military and DOD transportation office that the company plans to serve. This requires proof, or certification, in the form of a letter of intent that the company has local agents ready and able to meet the local installation’s needs. MTMC solicits rates every 6 months. Each carrier and forwarder must file rates individually for the particular traffic channel it intends to serve. For the domestic part of the household goods program, rates are submitted as a percentage discount from, or premium over, a baseline schedule of rates by origin installation and destination state channel. For the international part of the program, rates are filed on a fixed dollar-and-cents-per-hundredweight basis by state and overseas area, or other subdivision channel. Carriers and forwarders have two chances to file rates before the beginning of each rate cycle—an initial rate filing and a “me-too” rate filing in which a carrier or forwarder can lower its initially filed rate to that of any other carrier or forwarder. Rates cannot be changed during the 6-month rate cycle, except for special cause, but they can be canceled at various times during the rate cycle. Each local military installation must distribute its traffic using a traffic distribution roster. Carriers and forwarders are placed on the rosters for each channel in order of rate level and quality score. In the domestic program, traffic is distributed on the basis of low-to-high rate, with the highest quality-scored carriers given the first 20,000 pounds. In the international program, the forwarder or forwarders initially offering the lowest rate for the particular channel are given a pre-specified percentage of the traffic on that channel. Overall, MTMC’s proposal would result in a program that would operate in much the same way other DOD programs operate for acquiring goods and services.
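The domestic roster mechanics just described can be sketched in code. This is a hypothetical illustration only: the report gives just the general rule (rosters ordered by rate level and quality score, traffic distributed low-to-high rate, with the highest quality-scored carriers given the first 20,000 pounds), so the tie-breaking and allocation details below are assumptions, not MTMC's actual procedure.

```python
# Hypothetical sketch of a domestic traffic distribution roster.
# Assumptions (not from the report): ties in rate are broken by quality
# score, and each roster position receives shipments until it has been
# assigned its 20,000-pound block, after which traffic moves down the list.

from dataclasses import dataclass

@dataclass
class Carrier:
    name: str
    rate: float           # effective rate filed for the channel (lower is cheaper)
    quality_score: float  # quality score from past performance
    pounds_assigned: float = 0.0

def build_roster(carriers):
    """Order the channel roster low-to-high by rate, breaking ties by quality."""
    return sorted(carriers, key=lambda c: (c.rate, -c.quality_score))

def distribute(roster, shipment_weights, block_lbs=20_000):
    """Assign each shipment to the first roster carrier that has not yet
    received its assumed 20,000-pound block of traffic."""
    assignments = []
    for weight in shipment_weights:
        carrier = next((c for c in roster if c.pounds_assigned < block_lbs),
                       roster[-1])  # overflow falls to the end of the roster
        carrier.pounds_assigned += weight
        assignments.append((weight, carrier.name))
    return assignments
```

Even in this simplified form, the sketch shows why each channel roster is administratively heavy: every installation must maintain and re-derive such an ordering for every channel after each 6-month rate cycle.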
Emphasis is placed on assessing quality of service in the contractor selection process and obtaining military member satisfaction with the services received. At the first meeting of the DOD/industry working group, MTMC provided a briefing on its proposal. Following that meeting, on June 24, 1996, MTMC provided the working group with a draft request for proposals summary. The summary described a standardized program for handling both domestic and international shipments. It laid out MTMC’s proposed acquisition strategy and the major events that MTMC expected to occur in the proposed acquisition process. The MTMC proposal is the result of the DOD/industry working group process and includes a number of features put forward by industry. Under its plan, MTMC would make major changes to the existing carrier/forwarder approval, rate solicitation, and traffic distribution processes. The existing approval process would be eliminated and replaced by a contract award process. Prices would be fixed for 1 year, with no provision for increases during the contract period. Rate solicitation would be based on competitive acquisition procedures used by government in procuring other types of goods and services and eliminate the twice yearly re-solicitation of rates under the current system. Traffic distribution would be limited to the number of contractors receiving awards. Key points in MTMC’s acquisition strategy were that offerors would be required to submit proposals addressing technical factors (i.e., how the offeror proposed to perform specified technical or operational requirements), past performance, subcontracting plan, and price. MTMC said that it anticipated that price would be less important than the other factors combined. Award would be made only to responsible offerors whose offers conformed to the solicitation and represented the best overall value to the government—price and other evaluation criteria considered. 
There were no restrictions on the type of company that could compete for the contracts. Therefore, companies other than licensed carriers and forwarders—the only types of companies now allowed to compete for DOD traffic—would be allowed to make an offer for the DOD business. MTMC’s proposal also detailed: DOD’s movement and storage requirements; shipment origins (all areas of North Carolina, South Carolina, and Florida) and destinations (13 regions in the contiguous 48 states and 5 regions in Europe); the categories of shipments (household goods and unaccompanied baggage) that would and would not be handled under the pilot test; minimum contractor personnel requirements; the specific tasks to be performed; the length of the contract (1 year plus an unspecified number of option years); the way offerors should specify price for each traffic channel (expressed as a discount percentage off the commercial rate tariff for domestic shipments and a fixed dollar-and-cents-per-hundredweight rate for international shipments); the accessorial services DOD would require; contractor liability and loss and damage claims procedures (full value protection based on certain minimum declared valuations, subject to an overall cap); required contractor quality assurance procedures (use of a customer survey); certain performance standards for shipment pickup and delivery; and the invoicing and payment process. MTMC also indicated that it would establish and specify in the solicitation a total contract minimum guaranteed tonnage amount from each origin pilot state to each destination region included in the pilot program. It would also request that offerors furnish, by traffic channel, the maximum daily capacity, stated in pounds, that they are willing to commit to the contract from each installation in the pilot test to any or all destination pilot regions they may wish to serve.
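The two pricing conventions above (a discount percentage off the commercial rate tariff for domestic shipments; a fixed dollar-and-cents rate per hundredweight for international shipments) reduce to simple arithmetic. The sketch below uses made-up figures for the tariff charge, discount, and rate; the report does not give actual tariff values.

```python
# Illustrative arithmetic for the two pricing conventions in the proposal.
# All numeric inputs in the example are hypothetical.

def domestic_price(tariff_charge: float, discount_pct: float) -> float:
    """Domestic: the commercial tariff charge less the offered discount."""
    return tariff_charge * (1 - discount_pct / 100)

def international_price(weight_lbs: float, rate_per_cwt: float) -> float:
    """International: weight in hundredweight (100 lbs) times the filed rate."""
    return (weight_lbs / 100) * rate_per_cwt
```

For example, under these assumed figures, a $2,000 tariff charge at a 45 percent discount would bill at $1,100, and a 7,500-pound shipment at $23.50 per hundredweight would bill at $1,762.50.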
MTMC’s proposal, as amended, was endorsed and supported by one of the five industry associations attending the meetings—the Military Mobility Coalition, an industry group with members from relocation companies; move management companies; independent and van line-affiliated carriers and forwarders; and industry specialty firms, such as cargo insurance companies. The household goods carrier/forwarder industry associations prepared and submitted for comment an alternative plan (referred to in this report as the industry proposal) on June 24, 1996. Industry restated its proposal on October 25, 1996, in a letter to us. The industry proposal also represents the results of the DOD/industry working group process and includes certain features favored by DOD. The summary described a plan that consists of two distinct programs, one for handling domestic shipments and another for handling international shipments. Under its plan, industry would build on the existing DOD program. It would not be based on the government competitive acquisition system but would use a government bill of lading to acquire the services in accordance with procedures contained in DOD and the General Services Administration transportation regulations. The industry proposal would limit the type of company that could participate to only those types—licensed carriers and forwarders—currently in the program. 
Industry’s proposal for handling both domestic and international shipments is like MTMC’s proposal to the extent that it would be based on the same pricing system for each traffic channel (expressed as a discount percentage using the commercial rate tariff for domestic shipments and a fixed dollar-and-cents-per-hundredweight rate for international shipments), provide for the same level of contractor liability (full value protection based on certain minimum declared valuations, subject to an overall cap), provide for certain performance standards for shipment pickup and delivery, and provide for the use of a customer survey. Industry’s proposal differed in that it (1) limited participation in the DOD program to licensed carriers and forwarders, (2) lengthened the rate cycle from the current 6 months to a year, (3) allowed rates to be adjusted at stated times during the rate cycle to account for underlying cost increases, and (4) distributed traffic among firms using a combination of price and customer survey feedback data. Its proposal also indicated that carriers and forwarders in the domestic program could submit a “best and final” rate 3 months into the rate cycle to improve their competitive position and that forwarders in the international program could lower their originally filed rates 60 days prior to the start of the rate cycle. Accompanying the proposal was some discussion of how the industry would provide for program simplification and eliminate “paper companies,” that is, companies in the domestic program that lack actual operating assets but are affiliates of companies that have assets. These paper companies do not increase DOD’s capacity. The industry proposal was signed by the presidents of the four carrier associations—the American Movers Conference, the Household Goods Forwarders Association of America, the National Moving and Storage Association, and the Independent Movers Conference.
The associations’ members represented virtually every facet of the moving industry, including van lines with agent networks, independent carriers, agents, and forwarders. As previously mentioned, the Military Mobility Coalition supported the MTMC proposal. The DOD/industry working group did not define the individual elements that made up each of the agreed-to 10 goals for reengineering DOD’s personal property program. The goals are qualitative and not easily measured. Nor were the proposals written in a way that specifically addressed how DOD and industry would meet each goal. Consequently, our assessment of the extent to which each proposal met the goals was necessarily limited by the lack of precise definitions of each goal and the way to achieve it. Every goal was debated at length by DOD and industry officials without complete agreement. For example, there were varying interpretations of the goals to improve quality service and to achieve best value. We assessed each proposal’s ability to meet those goals using our knowledge of the existing personal property program, our understanding of the proposals, associated documents, information gathered from our attendance at all of the working group meetings, a review of the transcripts of the meetings, and our prior studies. In table 1, we list the goals and provide a general comment about the extent to which each proposal is likely to meet them. Following the table, we discuss the goals and the basis for our assessment of the extent to which the proposals are likely to meet each one. In discussing the industry proposal, our comments are directed at both the domestic and international programs, unless otherwise noted. Both proposals are likely to achieve 4 of the 10 goals of the program equally.
These include the goals for improving on-time shipment pick-up (goal 2), improving on-time shipment delivery (goal 3), achieving high customer satisfaction (goal 4), and reducing claims and improving claims handling (goal 6). Both MTMC and industry agreed on the need for performance standards to achieve the above goals. For example, to achieve high customer satisfaction, each proposal provides for more direct communication between servicemember and contractor (matters such as the pre-move survey, movement counseling, phone numbers to check with the contractors, and intransit visibility) and use of a customer survey as a tool for obtaining feedback on contractor performance. Included would be such questions as the timeliness of pickup, timeliness of delivery, loss and damage occurrence, evaluation of origin and destination agent service, and the customer’s decision on whether to use the particular contractor again. To reduce claims and the problems associated with them, each proposal provides for increased contractor liability (full value protection) and more streamlined claims settlement, including direct settlement (servicemember with contractor). MTMC’s proposal meets 5 of the 10 goals for reengineering the personal property program to a greater extent than the industry plan. The goals are providing quality service (goal 1), providing best value (goal 10), simplifying the system (goal 7), adopting corporate business practices (goal 5), and ensuring capacity to meet DOD’s needs (goal 8). MTMC has said it wants its reengineering effort to produce a dramatic improvement in the quality of personal property shipment and storage services provided to military servicemembers or civilian employees and their families when they are relocating on U.S. government orders. This means providing a service to DOD personnel on par with corporate customer standards. 
MTMC’s proposal would fundamentally change the existing system by using multiple award, fixed-price, indefinite delivery/indefinite quantity-type contracts awarded under the competitive acquisition system. It would require prospective contractors to address, before contract award, how they would perform MTMC-specified technical or operational requirements. This would provide DOD the opportunity to assess a prospective contractor’s plan to improve the quality of the service DOD receives prior to contract award. It would give MTMC an opportunity to assess “best value,” that is, the ability to assess the trade-offs between price and technical factors. Awards would not have to be made on price alone. Therefore, we believe MTMC’s proposal would achieve the goals of quality service and best value to a greater extent than the industry proposal. MTMC had indicated that before any company is awarded DOD business, it wants to ensure that the company has submitted a proposal indicating its “best value,” that is, one addressing the technical factors (e.g., how the offeror proposed to perform specified technical or operational requirements) and identifying its past performance, subcontracting plan, and price. MTMC said that it anticipated that price would be less important than the other factors combined. Award would be made only to responsible offerors whose offers conformed to the solicitation and represented the best overall value to the government, price and other evaluation criteria considered. Both MTMC and industry agreed that in order to obtain quality service, there would be a need for longer-term binding arrangements. In the current system, rates are re-bid every 6 months, and there are periods within each rate cycle when rates can be canceled. However, there was no agreement on the exact length of the longer term, nor on the type of binding arrangement. MTMC originally proposed establishing fixed prices for 1 year, with option years.
Industry proposed 1 year with no options, plus the opportunity to cancel rates or meet other contractors’ rates at the 3-month point of the year-long price cycle. MTMC wanted multiple award, fixed-price, indefinite delivery/indefinite quantity-type contracts with a base and several option years awarded under the FAR, whereas industry wanted continuation of the current non-FAR arrangements with modifications. Industry’s proposal defines “best value” in terms of ranking carriers and forwarders on the basis of price and performance. It would require MTMC to develop a best value score for each carrier wanting to participate in the program. The contractor’s “best value” score would be based 30 percent on price and 70 percent on customer survey results. Traffic would be distributed to the top-rated 30 to 50 carriers and forwarders. The industry proposal would not be based on the competitive acquisition system but would use a government bill of lading to acquire the services in accordance with procedures contained in DOD and the General Services Administration transportation regulations. It would not require prospective contractors to address how they would perform MTMC-specified technical or operational requirements before contract award. Consequently, MTMC would not have the opportunity to assess a prospective contractor’s plan to improve quality prior to contract award, and its ability to assess the trade-offs between price and technical factors would be limited. MTMC has stated that it is looking for administrative simplification of the program. This relates to simplifying the total process from arranging the movement to settling the claim. Elements of both proposals offer some simplification. For example, both proposals price services on the basis used in most corporate move contracts (a percentage discount off industry’s Domestic Commercial Tariff for domestic household goods shipments and single-factor rates for international household goods and unaccompanied baggage shipments).
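The industry proposal's 30/70 "best value" ranking described above can be sketched as follows. The report states only the weights (30 percent price, 70 percent customer survey) and that traffic would go to the top-rated 30 to 50 firms; how a filed rate would be normalized into a price score is not specified, so the normalization used here (lowest rate scores 100) is an assumption made for illustration.

```python
# Sketch of the industry proposal's 30/70 best value ranking.
# The price-score normalization is an illustrative assumption.

def price_scores(rates):
    """Scale each firm's filed rate to 0-100, with the lowest rate scoring 100."""
    low = min(rates.values())
    return {name: 100.0 * low / rate for name, rate in rates.items()}

def best_value_ranking(rates, survey_scores, top_n=30):
    """Rank firms by 0.30 * price score + 0.70 * customer survey score and
    keep the top firms (30 to 50 under the industry proposal)."""
    prices = price_scores(rates)
    scored = {name: 0.30 * prices[name] + 0.70 * survey_scores[name]
              for name in rates}
    ranked = sorted(scored, key=scored.get, reverse=True)
    return ranked[:top_n]
```

With the survey weighted at 70 percent, a firm with strong customer feedback can outrank a cheaper competitor, which is the trade-off the industry proposal's ranking is designed to allow.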
MTMC and industry also agreed to simplify the pricing of certain accessorial services. For reasons described below, MTMC’s proposal meets this goal to a greater extent than the industry’s proposal. MTMC’s proposal is a single, standardized domestic and international program. Industry’s proposal is composed of separate domestic and international programs. MTMC proposed to have offerors submit prices and fix them for at least 1 year. Industry proposed offering prices that could be changed or canceled. In industry’s domestic program, prices would be established for 1 year, effective January 1 of each year, with specific escalation provisions to account for significant increases, such as fuel, insurance, container, and labor costs. The proposal also included allowing prices to be re-submitted as “best and final” on April 1 of each year. In its international program, industry proposed allowing increases 6 months into the contract period to compensate for currency exchange adjustments. We have previously urged DOD to take the actions it is proposing here, such as eliminating the frequent rate re-solicitations. In a previous report, we recommended that MTMC replace or modify the two-phase (me-too) domestic household goods bidding system so that all carriers have an incentive to bid the lowest possible rates initially. We also noted that, as a result of the current acquisition process, the domestic segment of the industry had created many paper companies that significantly added to DOD’s workload but did not increase industry operating asset capacity. The MTMC proposal would implement our recommendation, limit the participation of paper companies through the use of the competitive acquisition system, and provide for simplification. The carrier industry acknowledges that nearly half of the currently approved interstate carriers may be paper companies.
Its proposal states that it would eliminate from the domestic program the many paper companies that do not provide “legitimate capacity,” but it would still require MTMC to determine what “legitimate capacity” is. MTMC anticipates making awards to fewer contractors and basing the system on fewer, more consolidated traffic channels. Currently, in the domestic program, each of the roughly 170 U.S.-located shipping offices has to maintain a traffic distribution roster for every traffic channel, or destination state. Each channel can involve several hundred carriers or forwarders. Industry suggests a distribution system that involves fewer companies on each channel, but the numbers would still involve 30 to 50 companies. Neither proposal specifically addresses the numbers of staff and other resources needed to implement them. There is no way to tell from the proposals how many people would be involved in reviewing offers or how many people and other resources would be needed to handle the rate solicitation process or any particular traffic distribution roster system. Accordingly, our analysis is necessarily limited. However, we believe that MTMC’s proposal offers the greater opportunity to provide for administrative simplification because it (1) is a consolidated domestic and international proposal; (2) changes the rate solicitation process by eliminating re-solicitation; (3) provides for use of fewer companies to handle the traffic, necessitating less administrative effort for military installation traffic management personnel; and (4) relies on traffic channels that cover entire states. The industry proposal, though it improves on the current program somewhat, retains the rate re-solicitation process in both the domestic and international programs; continues the need to administer a large, complex traffic distribution roster process for every channel; and continues to base traffic channels on each individual military shipping office.
MTMC has said that it is attempting to capitalize on the best applicable commercial business practices. This relates to adopting business practices characteristic of world-class customers and suppliers, such as using contractual arrangements to simplify contractor selection. For the following reasons, MTMC’s proposal meets this goal to a greater extent than industry’s proposal does. It would eliminate DOD-unique transportation regulations for the acquisition of services. As we noted earlier, the industry proposal, similar to the existing MTMC program, would not be based on the competitive acquisition system but would use a government bill of lading to acquire the services in accordance with procedures contained in DOD and the General Services Administration transportation regulations. In addition, in the past, we have recommended that DOD adopt commercial practices, such as using a smaller number of carriers to achieve quality and cost benefits. In the personal property program, we note that MTMC has approved more than 1,100 motor van carriers and regulated forwarders to handle its domestic moving needs. It has more than 150 forwarders at its disposal for its international traffic. All military shipping offices have to spend considerable time and effort to allocate a relatively small number of shipments to an enormous number of carriers. Fort Bragg, North Carolina, a typical example of the roughly 170 shipping offices in the contiguous 48 states, is serviced by more than 200 different domestic movers, more than 160 international forwarders, and 50 local carrier/forwarder agents. It has on average about 100 domestic and 40 international household goods shipments a week, moving in roughly 50 domestic and 30 international traffic channels, each requiring a separate shipment distribution roster. Some carriers and forwarders get but one shipment a week, if that.
Many of the companies that get a shipment are “paper companies” that provide DOD no new operating asset capacity but were formed by their parent company to increase the parent company’s market share of the DOD business. The administrative effort does little to improve the quality of life for the servicemember and his or her family. In the same report, we recommended greater use of corporate practices that promote use of contractual arrangements to simplify the carrier selection. This could lead to more stability and provide leverage leading to cost efficiencies for both the carriers and DOD. MTMC has long been concerned about having the necessary capacity to meet DOD’s moving needs. There was no consensus, however, as to how to achieve the goal. MTMC is looking for commitment from the contractors to meet DOD’s needs, particularly during peak shipping periods. Over the years, there have been many examples of carriers and forwarders not being able to provide services when needed. MTMC’s proposal, we believe, provides a greater opportunity to meet this goal than the industry proposal does because it (1) would involve the award of contracts that would obligate the contractors to provide specific minimum capacity and (2) would not limit participation in the program to only licensed carriers or forwarders. MTMC’s proposal would allow any company, whether carrier, forwarder, relocation company, or anyone else, to participate. Relocation companies stated that they are prepared to make capacity available to DOD as needed. Industry’s proposal specifically excludes relocation company participation unless such companies are licensed carriers or forwarders. Carrier/forwarder industry officials state that MTMC’s proposal with regard to noncarrier/forwarder relocation company participation sets bad public policy and raises serious legal questions.
Under such a system, a relocation company, with legal status as a broker, could be awarded a prime contract to effect the moves from a given base or locality. It would be the responsibility of that company to secure the services of carriers to perform the actual packing and moving services under the contract. Industry believes that a federal agency purchasing goods or services should contract only with entities actually providing those goods or services. Allowing relocation companies to compete for prime contracts, industry argues, would create logistical problems and raise questions concerning possible violations of the Anti-Kickback Act of 1986, 41 U.S.C. 51-58, and antitrust laws. As previously discussed, we believe MTMC’s proposal to meet its goals has the potential for eliminating paper companies and opens the way for more competition among companies having or bringing to DOD actual capacity. It does not appear that MTMC wishes to restrict competition. The competitive acquisition system that MTMC proposes to use requires, as a general rule, that DOD obtain full and open competition in its acquisitions (10 U.S.C. 2304). Concerning the potential for legal problems, the propriety of the relationship between firms participating in an acquisition as prime contractor and/or subcontractor is governed by the particular facts and circumstances in the context of the applicable laws. We are unable to determine the extent to which either proposal provides or does not provide opportunity for small business to participate in the personal property program (goal 9). As was pointed out during the DOD/industry working group meetings, opportunities for small business and the impact on small business are difficult to assess or measure. The moving industry is made up of both large and small businesses, with many different types of organizational structures. 
The majority of moves are handled by large businesses, the van lines, but the work itself—packing and unpacking of the household goods, the loading and unloading of the trucks, and the actual truck driving—is done by small businesses, some independent and some part of the van line. In addition, our data indicate that there are about 25 major, nationwide van lines; a thousand independent van lines; several hundred freight forwarder moving companies; about 4,500 agents; and thousands of owner-operator truckers. In some instances, the agents actually own the major van lines. In other instances, the agents are independent companies working for the van lines. More recently, the industry has expanded to include relocation companies that handle the moves as part of a total package relocation service. On April 17, 1996, as directed by the Fiscal Year 1996 Defense Appropriations Bill Conference Report (House Conference Report Number 104-344), MTMC reported on the impact of the reengineering program on small business. It said that it believed small businesses can reasonably be expected to fare as well or better than they do in the existing program. The reason, it said, was that MTMC’s program would provide small businesses additional protection and opportunities, based on the establishment of subcontracting goals. However, the extent to which small business is affected remains a concern to the Congress and the industry because of the many uncertainties involved in implementing a new program. The two sides agreed, however, to reduce the size of traffic channels for the test, at least in part, to allow for greater participation of small business as prime contractors. MTMC had originally wanted contractors to submit offers by regions (4 in the contiguous 48 states). For the pilot test, MTMC significantly decreased the size of the contract area, from regions to states. 
The pilot test includes three states—North Carolina, South Carolina, and Florida—and although contractors will be required to serve all points within a state, they can offer on any or all of the other 13 regions into which MTMC has divided the country. Furthermore, the test includes only 50 percent of the shipments from those states and only certain types of shipments. Intrastate and local shipments, for example, are not in any test plans. Industry prefers traffic channels much as they exist in the current system, based on personal property shipping offices (presently, more than 150 in the contiguous 48 states). MTMC officials state that if a small business is intimidated by the size of the contracts, it can participate as a subcontractor of a large company or of another small business. MTMC indicated that for purposes of its proposal, small business would be defined as any company with annual receipts less than $18.5 million. The carrier association officials, however, do not believe that subcontracting counts toward this goal. Accordingly, the association officials believe that the MTMC proposal, by relegating small business to a subcontractor role, would reduce the number of small business prime contractors, resulting in the goal not being met. DOD’s position is based on the opportunity to compete, not numbers. We based our assessment on the opportunity to compete. Under MTMC’s proposal, contracts for transportation services will be awarded under the competitive acquisition system. The requirements of the Small Business Act, 15 U.S.C. 631, et seq.; FAR part 19; and the applicable part of the Defense Federal Acquisition Regulations Supplement will apply to these acquisitions. These provisions include such matters as subcontracting plans for the utilization of small, small disadvantaged, and women-owned small business, and set-asides for small business. 
Therefore, the protections for small business appear to reside in the proposed MTMC plan as they would in any other contract awarded under the government’s competitive acquisition system. We support moving forward with the pilot test of a reengineered personal property program because it will provide the necessary data to ultimately design an improved system. MTMC’s proposal represents, to a large degree, a collaborative effort between DOD and industry and, as such, provides the better opportunity to achieve the program goals. In addition, it is important that performance standards be developed and data gathered in a way that ensures measurable results of the program, particularly as they relate to quality of service and small business participation. We recognize that our assessment of the extent to which the proposals met the program goals required judgments about likely outcomes and that only actual data can determine with greater certainty the impact of the proposals. If the Congress still has concerns about the impact on small business, piloting both proposals is an option. However, doing this would likely place an additional administrative and costly burden on MTMC and could delay implementation of the program. We asked DOD, the four carrier associations—the American Movers Conference, the Household Goods Forwarders Association of America, the National Moving and Storage Association, and the Independent Movers Conference—and the Military Mobility Coalition to comment on a draft of this report. Our reporting time frames necessitated that we meet with each group and obtain only their informal oral comments prior to the issuance of the report. All expressed concern about the short time frame provided for preparing their comments. We acknowledged that this was the case and agreed to include their informal comments in this report and encouraged them to provide any additional comments as appropriate. 
DOD officials agreed with our analysis of the proposals and the facts in the report. However, they strongly disagreed with our interpretation of what MTMC’s proposal represents and the option we suggested to pilot both proposals. According to DOD officials, the proposal submitted to us for review by the DOD/industry working group represents the collaborative product of the working group as indicated by a consensus list signed by the industry representatives. Thus, they believe that MTMC’s proposal represents a joint DOD/industry proposal. DOD officials stated that testing the independent industry proposal would be a disservice to the collaborative process and would obviate the instructions of the congressional defense committees to reach agreement on a single plan. Moreover, DOD officials stated that if directed to pilot test the industry proposal in addition to the industry/DOD proposal, DOD would want to test it against MTMC’s original proposal. Furthermore, they expressed concern that testing of the industry proposal would further delay their effort to improve the quality of service and reduce the $100 million annual claims for loss and damage now being experienced by military members and their families. DOD officials also stated that they do not have enough detail on the industry proposal to go forward without significant delay. They said that the industry proposal was not debated during the working group meetings; consequently, a number of areas are unclear, vague, and ambiguous from their point of view. Further, they were concerned that the industry proposal would be technically and operationally difficult to implement, costly to administer, and cumbersome for installation transportation officials to handle simultaneously with the other pilot. Moreover, DOD officials stated the industry proposal would not provide the opportunity to improve quality of service, which is one of the primary goals of the reengineering effort. 
“The Working Group has agreed to disagree on one major area: our plan to use Part 12 of the Federal Acquisition Regulation (FAR) as the basis for our projected contracts. . . . MTMC respectfully disagrees with industry and proposes to use the FAR to obtain the benefits of free and open competition for the government and our military service members. . . . The House/Senate Conference Committee on National Defense included language in the 1996 Defense Appropriations Bill Conference Report (House Report 104-450) directing MTMC to test its concept for improved service by conducting a Pilot Program. We are incorporating ideas from the industry/DOD consensus, and propose to begin the test in the immediate future.” The four carrier associations and the Military Mobility Coalition had differing opinions on our report. The American Movers Conference, the Household Goods Forwarders Association of America, the Independent Movers Conference, and the National Moving and Storage Association disagreed with our analysis of the proposals in each area where we stated that MTMC’s proposal would likely achieve the goals to an unknown extent (goal 9) or to a greater extent (goals 1, 5, 7, 8, and 10) than the industry proposal. The Military Mobility Coalition, however, agreed with our analysis of the proposals. In addition, the carrier associations strongly supported the option we presented as a matter for congressional consideration to pilot both proposals. They said that an advantage to piloting both proposals would be to obtain with certainty the impact of the proposals on small business participation. They added that to pilot their proposal should not be difficult to implement and stated that they would be willing to work with DOD to help implement a dual pilot. 
However, the Military Mobility Coalition officials expressed concern about the time it would take to set up and run two pilots, the significant administrative effort that would be required, and the limited value such a test would yield. The Coalition believes that the carrier associations’ proposal is so similar to the structure of the current program that it negates the need for a pilot program. The following are key points provided by the four carrier associations where they disagreed with our analysis of the proposals. Most of the concerns raised by the four carrier associations related to MTMC’s proposal, our characterization of the industry proposal, and our assessment of the proposals. We have revised the report to reflect their concerns, provided additional information to support our position, or clarified the position of DOD and industry, as appropriate. Regarding our analysis of the goal to provide opportunity to small business to participate as prime contractors (goal 9), the carrier associations stated that they believed we had sufficient information to conclude that small business would be negatively impacted under MTMC’s proposal. They took issue with MTMC’s conclusion that the small business goal would be met through small business competing as either subcontractors or prime contractors. The carrier associations point out that the stated goal relates to participation of small business concerns as prime contractors. Accordingly, the associations state that MTMC’s proposal, by relegating small business to a subcontractor role, would substantially reduce the number of small business prime contractors and, therefore, would not meet the stated goal. As we stated, there was insufficient data for us to assess this area. However, we revised the report to more fully discuss the carrier associations’ concerns. 
Regarding our analysis of the goal to ensure capacity to meet DOD’s needs (goal 8), the carrier associations stated that the industry proposal would not limit new capacity; it would only exclude companies not properly licensed as carriers or forwarders from participating in the program. They also argue that MTMC’s proposal would be too complicated to successfully guarantee adequate capacity and would reduce capacity by reducing the number of service providers with assets. The Military Mobility Coalition countered that many in the moving industry do not now participate because of the current cumbersome methods, but would enter the program under the MTMC proposal. We favored MTMC’s proposal in this area because contractors would be required to commit minimum capacity and participation would not be limited to licensed carriers and forwarders. The four carrier associations provided us no new information to change our view in this area. Regarding our analysis of the goal to simplify the system (goal 7), the carrier associations stated that we limited our analysis only to certain aspects of simplification and did not consider, in their opinion, the complicated systems and processes that would be added under MTMC’s proposal. These included the complex methods MTMC proposed for allocating traffic, bidding on channels, and using the FAR. The associations stated that the MTMC-proposed program would become administratively cumbersome if expanded worldwide. The Military Mobility Coalition, having operated under competitive FAR procedures, believes the FAR is less cumbersome than contracting with thousands of individual carriers, which occurs under MTMC’s current operating system and the carrier associations’ proposal. 
Regarding the carrier associations’ concerns, we added information on why we believed MTMC’s proposal better met this goal, particularly as it relates to simplifying the rate solicitation and traffic distribution processes. In addition, we explained that the proposals do not specifically address the numbers of staff and other resources needed to implement them, limiting our analysis. Thus, we focused on the extent that the proposed process changes would simplify traffic management processes. Finally, we pointed out that industry’s proposal represents two separate programs, as opposed to MTMC’s single program, for handling both domestic and international traffic. Regarding the goal to adopt corporate business processes (goal 5), the carrier associations stated that using the government competitive acquisition system, the FAR, and other practices proposed by MTMC does not represent corporate business practices. We agree that the FAR is not used in the corporate world. However, we believe MTMC’s proposal moves closer toward adopting corporate business practices, such as using contractual arrangements to simplify the carrier selection process. Regarding the goals to provide quality service and best value (goals 1 and 10), the carrier associations noted that awarding contracts for these services pursuant to the FAR would involve the evaluation of complex proposals that must be prepared by the competing firms. According to the association officials, such proposals are best prepared by large companies, and there is not always a direct relationship between well-written proposals and actual quality service. The Military Mobility Coalition pointed out that small businesses in this carrier field can have annual receipts up to $18.5 million and should be able to handle preparing proposals. Given the conflicting views, we have no basis for judging the extent to which proposal preparation would or would not be a problem. 
This type of issue illustrates why we strongly support a pilot program. The carrier associations pointed out that our report in many places referred specifically to the domestic program and was silent about issues surrounding the international program and the impact of the MTMC pilot program on international service providers. We have revised the report to more fully discuss the international aspect of the industry proposal. Other comments were provided to us that clarified or corrected our characterization of the industry proposal. We incorporated, as appropriate, these comments into the report. For example, we added that the industry proposal actually is composed of two programs—one for handling domestic traffic and another for international traffic. In addition, we clarified that the industry proposal modifies the current system, provides for selecting carriers on quality as well as price, and has features that address the problem of paper companies. According to the four carrier associations, the specific reasons relied on for their position are contained in the Industry Critique of MTMC’s Proposed Pilot Program for Domestic and International, signed by the American Movers Conference and the Household Goods Forwarders Association of America and agreed to by the Independent Movers Conference and the National Moving and Storage Association. At their request, the document provided by the carrier associations giving more detail on their position is included as appendix II. The diverse nature of the comments illustrates the difficulty of assessing the two proposals and making judgments when precise data are absent. We believe that our assessment of the extent to which each proposal meets the program’s reengineering goals is appropriate. We have revised the report to better reflect the content of both proposals and specific points made by the commenting officials. 
Overall, we continue to believe that MTMC’s proposal provides a greater opportunity than the industry proposal to achieve the program goals and that the pilot should not be delayed any further. The source proposals for our analysis were:

1. MTMC’s “Draft Request for Proposal Summary, Reengineering the DOD Personal Property Program,” dated June 24, 1996, as clarified in DOD correspondence, position papers, and white papers distributed to the working group members over the period of the working group meetings held through September 16, 1996.

2. The “Joint Industry Proposed Alternative Plan to MTMC’s Re-Engineering of the Domestic and International Personal Property Programs,” dated June 24, 1996, and signed by the presidents of the four moving industry carrier associations—American Movers Conference, the Household Goods Forwarders Association of America, the National Moving and Storage Association, and the Independent Movers Conference—as revised in an American Movers Conference and Household Goods Forwarders Association of America document entitled “Industry Alternative Pilot Plan for MTMC’s Domestic and International Personal Property Program,” dated October 25, 1996.

Since MTMC and industry could not agree on a single approach to the pilot test, we analyzed the two approaches. As discussed with your office, we agreed to use the source proposals described above as the basis for analyzing the pilot test approach. The program goals were those developed at the June 10, 1996, working group meeting and agreed to by a September 16, 1996, DOD and association-signed document entitled TRANSCOM/MTMC/Industry Reengineering Personal Property Working Group Consensus List. 
Our analysis was based on the review of the proposals; examination of the transcribed record of the working group meetings; review of correspondence of both sides relative to the two proposals, points of clarification, and statements of disagreement; reference to our prior reports and findings on the subject area; research and analysis of the applicable procurement statutes and DOD and the General Services Administration transportation procurement and traffic management regulations; analysis of data related to the moving industry and small business affairs, not necessarily discussed at the working group meetings; and follow-up discussions with officials in DOD and the moving industry who attended the working group sessions. Our analysis of the reengineering initiative was conducted between June and November 1996. Since agreement could not be reached on a mutually acceptable proposal to pilot test, we began assessing the separate DOD and industry proposals in October 1996. Our assessment of the specific proposals was conducted during a 30-day period as specified in the House and Senate reports accompanying the National Defense Authorization Act for Fiscal Year 1997. Our review was performed in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense; the Commander-in-Chief, U.S. Transportation Command; the Commander, MTMC; the American Movers Conference; the Household Goods Forwarders Association of America; the National Moving and Storage Association; the Independent Movers Conference; and the Military Mobility Coalition. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. On June 21, 1994, the Deputy Commander-in-Chief, U.S. 
Transportation Command, directed the Military Traffic Management Command (MTMC), the Army component of the U.S. Transportation Command and program manager for the Department of Defense (DOD) Personal Property Shipment and Storage Program, to reengineer the personal property program. On March 13, 1995, MTMC formally published a notice in the Federal Register of its plans to consider employment of full-service contracts to improve DOD’s personal property program. The notice highlighted the fact that the evolving defense environment encompasses a smaller uniformed force, less overseas basing, reduced funding, and diminished staffing of support activities. It indicated that these changes will directly affect quality-of-life issues. In light of these changes, the notice said MTMC is engaged in an effort to simplify current processes, control program costs, and ensure quality of service by reengineering the existing personal property program. It further indicated that the reengineering effort will adopt, to the fullest extent possible, commercial business processes characteristic of world-class customers and suppliers and relieve carriers of DOD-unique terms and conditions. It said it will also focus on the customer, reward results, foster competition, and seek excellence of vendor performance. The notice indicated that members of industry would be afforded an opportunity to comment on the draft solicitation and to attend the presolicitation and preproposal conferences. On June 30, 1995, MTMC released a written proposal to reengineer the personal property program. A notice of proposal was published in the July 13, 1995, Federal Register. A further statement of acquisition strategy was released to industry on July 31, 1995. On June 15, 1995, the House Committee on National Security reported that it, too, was convinced that DOD must pursue a higher level of service that moves toward greater reliance on commercial business practices, including simplified procedures. 
It directed that DOD undertake a pilot program to implement commercial business practices and standards of service. It asked for a report from DOD on this by March 1, 1996. On October 11, 1995, MTMC testified on the reengineering effort before the House Committee on Small Business. MTMC discussed the impact on small business and its rationale for planning to award contracts for the new program under the Federal Acquisition Regulation (FAR). Carrier and forwarder industry officials also testified at this hearing. In the September 25, 1995, and November 15, 1995, reports accompanying the conference report on the Fiscal Year 1996 Defense Appropriations Bill, congressional managers directed that prior to implementing any pilot test, DOD report on the program’s impact on small business resulting from the application of the FAR and any requirements that were not standard commercial business practices. DOD responded with reports dated January 1996 and April 1996. In a May 7, 1996, report accompanying the National Defense Authorization Act for Fiscal Year 1997, the House Committee on National Security stated that after reviewing the reports, it was still concerned that MTMC’s pilot program did not satisfactorily address issues raised by the small moving companies comprising a majority of the industry. The Committee, therefore, directed the Secretary of Defense to establish a working group of military and industry representatives from all facets of the industry to develop an alternative pilot proposal. The instructions were that the working group would be chaired by the Commander, MTMC; include those DOD representatives the Chairman deemed necessary (not to exceed six in number); and include an industry delegation to be represented by no more than six people, including one each from the American Movers Conference and the Household Goods Forwarders Association of America. 
The Committee asked that the working group submit the alternative proposal, along with the current pilot proposed by MTMC, to us for review. The Committee further directed that we report to the congressional defense committees the results of our review. The report said that DOD may not proceed with the formal solicitation for, or implementation of, any pilot program prior to August 1, 1996. Similar instructions were contained in the May 13, 1996, Senate report accompanying the National Defense Authorization Act for Fiscal Year 1997. The congressionally directed working group of DOD and industry officials met over a period of 3 months beginning in June 1996 and ending in September 1996. In six sessions—9 days (June 10, July 1-2, July 18-19, August 14, September 5-6, and September 16)—representatives of MTMC, the U.S. Transportation Command, DOD, and various segments of the moving industry, including the American Movers Conference, the Household Goods Forwarders Association of America, the National Moving and Storage Association, the Independent Movers Conference, the Military Mobility Coalition, and a DOD-invited group of auxiliary members from the moving industry met in a formal group setting to forge a plan for a pilot test. We and the Army Audit Agency attended as observers. The meetings were chaired by the Commander, MTMC, and led by a DOD-provided facilitator. All meetings were transcribed and made available to anyone in the industry or the interested public through MTMC’s Internet Web page. All written correspondence and position papers were also made available on the MTMC Internet Web page. At the first meeting, the Chairman reported that the objectives were to meet the intent of the Congress for developing an alternative program that could be reported to the Congress and to establish a forum for industry and DOD to forge agreement on a single program for the pilot test. 
MTMC explained its proposed pilot plan; laid out the program goals, which were to dramatically improve the quality of personal property shipment and storage services provided to military servicemembers or civilian employees and their families when they are relocating on U.S. government orders and to simplify the administration of the program, capitalizing on the best applicable commercial business practices characteristic of world-class customers and suppliers; and asked for industry comment. After the first meeting, goals for the program were announced. These goals and various issues were discussed and refined throughout the meetings. Also, at the initial meeting, MTMC announced that it was not going to release a formal request for proposals but would instead have industry submit for discussion any alternative plan they might wish to offer. MTMC also agreed to provide its previously proposed plan for clarification. Industry and MTMC offered proposals on June 24, 1996. Both sides, and others as desired, offered comments on the proposals on June 27. These two proposals served as a framework, or center of discussion, for reaching, or attempting to reach, a single, mutually acceptable plan for testing. In the end, on September 16, 1996, DOD and industry could not reach agreement on any single plan. At the final meeting, representatives of DOD and industry signed a document called a consensus list, on which the goals and points of agreement reached by the working group were stipulated. On October 1, 1996, the Commander of MTMC and the joint working group chairman wrote us on the status of the reengineering effort and the work of the group. The Chairman indicated that the group had come to a consensus on many issues but had agreed to disagree on one major area: MTMC’s plan to use part 12 of the FAR as the basis for its projected contracts. 
The Commander indicated that MTMC planned to move forward with a test by releasing a request for proposals in November 1996 and making contract awards in January 1997. On October 10, 1996, the American Movers Conference wrote us expressing its concerns about the adequacy of MTMC’s October 1 letter in providing us information to use in evaluating MTMC’s proposed plan. The Conference indicated that there were other areas of disagreement than the FAR and that it believed that MTMC had tried to cover up these areas of disagreement and emphasize instead the minor points of agreement. These other areas included MTMC’s guaranteeing capacity (minimums and maximums), distributing shipments to contractors, the impact of MTMC’s decision to permit relocation companies to participate in the program, rules governing payment for storage-in-transit, and the number of contracts that ultimately would be awarded. The Conference indicated that it was planning to submit a more detailed industry plan for our review. On October 25, 1996, the American Movers Conference and the Household Goods Forwarders Association, in a joint letter, submitted their views of MTMC’s reengineering proposal to date. They provided an industry critique of the MTMC proposal and the industry alternative plan. The industry plan, they said, provides for small business participation, program simplification, best value, and the elimination of paper companies. The associations said that while they are supportive of any effort to improve the existing program, they believe that there are legitimate concerns that must be adequately addressed before this program can proceed.

John G. Brosnan, Assistant General Counsel 
Pursuant to a legislative requirement, GAO reviewed the Military Traffic Management Command's (MTMC) and the moving industry's proposals for reengineering the Department of Defense's (DOD) personal property program, focusing on the extent to which each proposal met DOD/industry goals for a reengineered personal property program. GAO found that:

(1) GAO's assessment shows that MTMC's proposal meets the goals for reengineering the personal property program to a greater extent than the industry plan;
(2) both proposals are likely to equally achieve several of the 10 goals of the program, but overall, MTMC's proposal appears more likely to achieve the program goals to a greater extent;
(3) MTMC's approach to providing quality service would give DOD the opportunity to assess a prospective contractor's plan to improve the quality of the service prior to contract awards;
(4) this would enable MTMC to determine best value to the government by assessing the trade-off between price and technical factors; that is, award would be made only to responsible offerors whose proposals represent the best overall value to the government in terms of: (a) the offeror's proposed approach to performing the work; (b) past performance; (c) subcontracting plan; and (d) price, which would be one evaluation criterion and would not provide the primary basis for award;
(5) GAO believes determining best value is an essential element of providing higher quality service to servicemembers;
(6) the industry's proposal, which provides for selecting contractors initially on price, then quality after the carrier or forwarder has already handled DOD traffic, does not provide for assessment of quality up front using the criteria MTMC has proposed to use;
(7) MTMC's approach to simplifying the system and adopting corporate business practices would enable DOD to dramatically reduce the number of contractors it must use, which would simplify contractor selection and could lead to more stability and provide leverage leading to cost efficiencies for both contractors and DOD;
(8) the industry's proposal, though it changes the existing program to some extent, still retains a process in which DOD has to distribute traffic to many different carriers and forwarders;
(9) overall, GAO believes that MTMC's proposal provides a greater opportunity than the industry proposal to achieve the program goals;
(10) GAO supports moving forward with the pilot test without further delay, since a pilot test is essential to gathering the necessary data to ultimately design the reengineered personal property program;
(11) in addition, it is important that performance standards be developed and data gathered in such a way as to ensure measurable results of the pilot, particularly as they relate to quality of service and small business participation; and
(12) if the Congress still has concerns about the impact on small business, piloting both proposals is an option; however, doing this would likely place an additional administrative and cost burden on MTMC and could delay implementation of the program.
Consumers access television through two principal types of media. The first is through local broadcast television stations, which provide free over-the-air programming for reception in consumers' households by television antennas. Many local stations are affiliated with major broadcast networks, while others are independent stations that are not affiliated with a broadcast network. The second is through MVPDs, which are cable, satellite, or telecommunications companies that provide services through a wired platform or via satellite and charge their customers a subscription fee. MVPDs' programming includes so-called "cable" networks, such as CNN or ESPN, and also the local stations, which MVPDs carry or retransmit through agreements with those stations. These two forms of media have some similarities but also key differences, as described in table 1.

Television advertising time may be sold at either the national or local level. National advertising time is sold by broadcast or cable networks—both of which produce and aggregate programming that will be aired nationally—to advertisers looking to reach audiences across the country. The advertisements are inserted with the programming that broadcast and cable networks provide to local stations and MVPDs, respectively. In contrast, local advertising time is sold to companies wanting to reach a local audience. This advertising time may be sold to local businesses—such as a car dealership or restaurant—or it may be sold to a national business—such as a car manufacturer or a large restaurant chain—that is purchasing local advertising time to reach a particular local audience. While local stations or MVPDs often directly sell local advertising time to local businesses, national advertising sales representatives may arrange the sale of local advertising time to national businesses.
The amount of advertising time available for a local station to sell during a given hour depends on the type of program airing at that time. Specifically, a local station sells all of the advertising time during the programming it produces, such as local news. In our 2014 report on media ownership, we found that advertising aired during local news in particular represents a substantial portion of a broadcast station's revenue. A local station also sells a portion of the advertising time during programming it receives from its affiliated broadcast network, generally about 2 ½ to 3 minutes per hour, and may sell a portion of the time during syndicated programming, which is programming such as game shows and reruns produced nationally but aired on a station-by-station basis. Advertising on cable networks is mostly sold by the cable networks themselves; however, MVPDs also sell a small portion of the advertising time on the cable networks that they distribute, about 2 minutes per hour according to MVPDs we spoke to for this report. No advertising time on the local stations that MVPDs retransmit is available to MVPDs to sell, nor do MVPDs sell advertising time on premium cable channels that do not carry advertising, such as HBO. Table 2 presents example scenarios of how local and national advertising is sold and shown through both local stations and MVPDs. This table does not include information about how joint sales agreements among local stations and interconnects among MVPDs affect local advertising sales, which is discussed later in the report.

Some broadcast stations and MVPDs have entered into agreements regarding the joint selling of advertising, which have become more prevalent in recent years, according to media-industry stakeholders we spoke to.

JSA: An agreement between local stations in which one station is authorized to sell advertising time on the other station. We refer to the station selling the advertising as the "sales-agent station" and the station that turns over its advertising time to be sold by the other station as the "customer-station." JSAs are specifically defined by FCC rules.

Interconnect: An arrangement among MVPDs in the same market in which one MVPD—typically the largest MVPD in the market—sells a portion of the local advertising time for all MVPDs participating in the interconnect and simultaneously distributes that advertising across all such MVPDs in a coordinated manner. Although interconnects were traditionally an arrangement between cable providers, in recent years, telecommunications and satellite providers have also participated in them. Interconnects are not defined by FCC.

Some local stations also have other agreements, called shared service agreements, for sharing other functions such as news production, administrative, and operational services. For example, stations can enter into an agreement to share news-gathering resources, such as helicopters, reporters, and cameramen, or can enter into an agreement wherein one station produces another station's local news. Our 2014 report on media ownership found that stations may have several agreements in place, such as a shared service agreement and a JSA, or a single agreement that includes components typical of different types of agreements. Stations are not required to disclose shared service agreements in their public files. However, in 2014, FCC proposed new rules that would define shared service agreements and require their filing.

In addition to television, there are a number of other outlets competing for advertisers looking to purchase advertisements in local markets, including radio, print media (such as newspapers and magazines), out-of-home advertising (such as billboards or advertising on buses), and Internet-based media (such as advertising through mobile devices or on websites).
Local media advertising generated approximately $136 billion in revenue in 2014, a slight decrease from the $139 billion (2014 dollars) in 2011, according to data from BIA/Kelsey, a media research and consulting firm. The geographic scope of local media markets is defined by Nielsen, a company that measures television viewership—a critical metric for determining advertising rates. Nielsen has divided the country into 210 local television markets, known as DMAs, ranked in size from the largest (New York, N.Y.) to the smallest (Glendive, Mont.). Based on information from the stakeholders we interviewed about these markets, we will refer to the 25 largest markets (those ranked 1 through 25) as "large" markets, those ranked 26 through 100 as "medium" markets, and those ranked 101 through 210 as "small" markets.

FCC assigns licenses for local stations to use the airwaves on the condition that licensees serve the public interest. FCC's regulation of local stations is guided by long-standing policy goals to encourage competition, diversity, and localism. To advance these policy goals, and based on statutory requirements to serve the public interest, FCC has implemented rules that limit the number of stations an entity can own or control locally and nationally. Under FCC's ownership rules, a single entity can own two local stations in the same DMA if the relevant service contours—the boundary of the area a station serves—do not overlap or, if they do overlap, (1) at least one of the stations is not ranked among the top-four stations in terms of audience share and (2) at least eight independently owned and operating full-power commercial or noncommercial television stations would remain in the DMA. Because larger markets tend to have more stations, this limit tends to affect smaller markets more than larger ones, since an owner of a local station interested in acquiring a second station in a small market would be less likely to be able to meet these criteria.
As previously discussed, FCC is required by statute to review its media ownership rules every 4 years and determine whether any such rules remain necessary in the public interest. FCC's most recent review related to its media ownership rules was completed in 2008. FCC has also noted that arrangements other than outright ownership could exert influence similar to that of ownership. To address such issues, FCC developed attribution rules to determine what interests should be counted when applying these media ownership limits. In 2004, FCC sought comment on whether the use of certain television JSAs warranted attribution—that is, whether the customer-station should be counted or attributed to the sales-agent station that sells advertising for that station for the purpose of applying FCC's media ownership limits. FCC sought additional comment on this issue in its 2010 media ownership review. In 2014, FCC promulgated a final rule declaring that if a JSA provides that one station sells more than 15 percent of the weekly advertising time of another station located in the same market, both stations will be counted toward the ownership limit of the owner of the station selling the advertising (i.e., the sales-agent station). The rule is currently in effect; stations with JSAs existing at the time FCC issued the rule that result in violations of FCC's media ownership limits have until October 2025 to amend or void their agreements or otherwise come into compliance with FCC's ownership rules. As previously discussed, the rule is subject to ongoing litigation. FCC also has rules for stations to file certain documents with FCC and to maintain public inspection files that include documentation about the licensing and operation of each station.
According to FCC, the purpose of the public inspection files is to make information that the public already has a right to access more readily available, so that the public will be encouraged to play a more active part in dialogue with broadcast licensees. These rules include requirements for stations to file JSAs with FCC if the JSA is attributable under FCC's attribution rules and to file all current JSAs, regardless of attribution status, in the stations' public inspection files. Local television stations' public inspection files are available online through FCC's website.

We found 86 JSAs among local-station owners in our review of JSAs available from stations' online public inspection files. A little more than a third of DMAs had a JSA, with 98 percent of the JSAs we identified being among stations in medium or small markets (see table 3). The 86 JSAs we reviewed generally covered similar terms. We identified the following key provisions in our review of these agreements:

Advertising time: The JSAs specified the advertising that the sales-agent will sell. In most cases, this included all of the customer-station's advertising time, including local advertising that airs during programming of the local station as well as advertising on the customer-station's website. Four of the JSAs were created since FCC promulgated its 2014 JSA rule, and these JSAs specified that the sales-agent would sell no more than 15 percent of the customer-station's advertising time. This 15 percent is the threshold FCC set in the JSA rule that, if exceeded, would trigger attribution under FCC's ownership rules.

Station identification: All of the JSAs identified the owners of both the sales-agent and customer-stations covered in the agreements and the call signs of the customer-stations. About two-thirds (60) of the JSAs also identified the call signs of the sales-agent stations, while a little less than one-third (26) did not identify the call sign of the sales-agent station. According to FCC officials, FCC's rules do not prescribe the way in which parties to JSAs are identified in the agreement, and the rules do not require that JSAs specifically identify the call signs of the stations involved in the agreements.

Time frame: The JSAs generally covered 5 to 10 years, with extensions based on the consent of both parties. Stations also sometimes filed documents indicating their JSAs had been extended.

Revenue sharing: About half (40) of the JSAs indicated that the sales-agent retains 30 percent of the advertising revenue, while some (9) indicated a different percentage, a flat fee, or a commission. The remaining 37 JSAs had no information about revenue sharing because that information was either redacted or not provided.

Control and responsibilities: All of the JSAs specified that the customer-station retains complete control of the station, including control over the operations, finances, personnel, programming, and responsibilities to meet FCC requirements.

Shared service agreement: Some of the JSAs included provisions typical of a shared service agreement and were characterized as both a JSA and a shared service agreement. FCC officials and stakeholders told us that JSAs and shared service agreements typically go together, and local station owners said it would be uncommon for a station to have a JSA without an accompanying shared service agreement. However, one station owner told us that its station has a JSA and no shared service agreement.

Programming: About one-third (26) of the JSAs reviewed included a provision for the sales-agent to provide programming for the customer-station, generally up to 15 percent of the customer-station's broadcast hours per week.

While reviewing JSAs filed in stations' online public inspection files, we found that some JSAs were not filed in both a sales-agent station file and a customer-station file, as required by FCC rules.
Specifically, of the 86 JSAs we identified, 23 were filed in a customer-station file but not a sales-agent station file, and 2 were filed in a sales-agent file but not a customer-station file. In most of these cases (20 of the 25 JSAs), the stations involved in the JSAs were specifically identified by their call signs in the JSAs; in the other 5 JSAs, only the sales-agent station's owner was identified, and we had to determine the sales-agent station by contacting the other station named in the JSA or by examining information about station ownership in the same market. Although there may be legitimate reasons for a JSA to be missing—for example, if the JSA had been terminated and removed from one file but not yet from the other—the extent of missing JSAs raises a concern that there may be JSAs that should be filed that have not been. As previously discussed, FCC's rules require that all current JSAs be filed in stations' public inspection files, which are available online through FCC's website. The purpose of this requirement is to improve transparency of station operations for the public by making documents that the public already has a right to access more readily available. Further, according to FCC, the agency required stations to make this information available so that the public would be encouraged to play a more active part in dialogue with and oversight of broadcast licensees. Consequently, if interested parties look at the public file of a station involved in a JSA, they should expect to find that document in the file so that they may learn about how the station handles its advertising sales. If a station involved in a JSA does not have its JSA in its public inspection file, the transparency over this aspect of the station's operations is lost.
The standards for internal control in the federal government state that agency management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. As previously stated, FCC's media ownership rules are meant to serve the public interest, and FCC's recent determination that JSAs are a factor in assessing whether stations are in compliance with the ownership rules indicates that JSAs could have a bearing on whether stations are serving the public interest. Furthermore, on October 28, 2014, following the enactment of its JSA rule earlier that year, FCC released a public notice reminding stations of their obligation to file all current JSAs in their public inspection files, regardless of whether the station is the sales-agent or the customer-station in the agreement. The notice stated that a station's failure to comply with this rule may result in FCC taking an enforcement action. FCC officials said they do not monitor the contents of stations' public files on an ongoing basis and have not reviewed stations' JSA filings to ensure they are complete and up to date. They added that they have never identified or compiled copies of JSAs, whether attributable JSAs or all JSAs, from these publicly available sources for this purpose. FCC officials said they typically review compliance with public inspection file requirements in connection with a station's license renewal application or in response to complaints from the public. According to FCC officials, FCC's most recent round of license renewal reviews did not turn up any missing JSAs, and there were no stations self-reporting or petitions from others alleging missing JSAs. However, if FCC has not compiled a list of existing JSAs, it is unclear how it would know whether a JSA is missing when reviewing a file.
Additionally, FCC officials said that if they receive a complaint that a public inspection file is incomplete, FCC may investigate and take action, such as contacting the station licensee and instructing it to update the file or issuing an admonishment or fine, as appropriate; however, FCC officials said they have not received any such complaints. According to FCC officials, this compliance approach reflects the agency's policy objectives of encouraging greater public participation in broadcast licensing. However, FCC's approach puts the burden of discovering the incompleteness of an inspection file on the public, and in the case of a file that is missing a JSA, it is not clear how a member of the public would be likely to know that a JSA is missing without undertaking a review of all stations' JSA files, as we did, to uncover whether any other station has put a JSA with that station in its file. Furthermore, FCC officials said that they take seriously the obligation to ensure that licensees comply with FCC rules and to vigorously pursue violations of the public inspection file rules that are identified as a result of self-disclosure, public complaints, discovery by FCC staff, or in connection with a station's license renewal application. FCC officials said that the public inspection file is meant to assist the public specifically, rather than FCC. However, without examining public inspection files to determine if they are complete, FCC may not be fully aware of violations. If the public is to have the sort of dialogue with and oversight of stations that FCC suggests, then the public should have access to these documents through the public inspection files. FCC's rules specify which documents should be included in the files, and FCC has the authority to take enforcement action against stations that do not follow its rules.
Although FCC assured broadcasters that the online public file requirement would not lead to increased FCC scrutiny of the public inspection files, FCC has already reminded broadcasters of their responsibility to file JSAs in public inspection files and that FCC may take enforcement action if stations do not comply. If a member of the public is examining the file of a station that is involved in a JSA but the station has not put the JSA in the file, then it would not be apparent that the station's operations involve another station as provided by a JSA. Specifically, this reduces the transparency around the station's advertising sales, which are a principal source of station revenue. Public-interest stakeholders told us that lack of transparency regarding JSAs and other sharing agreements, particularly over what they do and who is involved, is a primary concern. If JSAs are missing from stations' public inspection files, interested parties may not be able to access this critical piece of information about how stations operate in local markets.

An interconnect involves two or more MVPDs combining a portion of their advertising time in a local market, creating a single point for advertising sales across multiple MVPDs within a DMA (see fig. 1). Through technological means, these advertisements are then distributed simultaneously across the MVPDs participating in the interconnect. For example, MVPDs told us that an advertiser wishing to purchase advertising time in a local market on a particular cable-network show could, through a single transaction with an interconnect, arrange for its advertisement to air simultaneously during that show on all MVPDs in that market that are part of the interconnect. Without an interconnect, an advertiser wishing to reach the same audience would have to negotiate separate advertising time purchases for the same times and during the same shows with each MVPD in the local market.
Although interconnects were originally created among cable companies, they have expanded in recent years to include telecommunications MVPDs. Additionally, MVPD stakeholders we spoke to explained that while satellite-based MVPDs do not directly participate in interconnects, satellite MVPDs have developed a way to insert local advertising into the programming they provide customers and have begun rolling this capability out in a limited number of markets. Through an arrangement with a cable-industry-owned advertising representation sales firm, advertisers can take advantage of this capability to reach satellite subscribers in some larger markets when purchasing local advertising through interconnects.

Stakeholders including two MVPDs, two MVPD advertising representation firms, three industry associations, one broadcast station owner, and one financial analyst provided information on the prevalence of interconnects. Seven of these stakeholders stated that interconnects exist in most markets, while two indicated that interconnects are found mostly in medium- to large-size markets. According to an association of MVPD advertising sellers, the number of interconnects has increased in recent years as MVPDs realized their benefits and efficiencies, which we discuss later in this report. According to several MVPD stakeholders, there is no industry-wide definition of an interconnect. Consequently, there are differences in how the term is defined, making it difficult to obtain consistent information on the number of interconnects nationwide. MVPDs, cable associations, and financial analysts we interviewed identified the following key characteristics of interconnects:

Advertising time: According to MVPD stakeholders we interviewed, interconnects cover only advertising and do not cover other services.

Interconnect managers: According to MVPDs, typically the largest MVPD provider in the market manages the interconnect. For example, one large MVPD told us that it created and manages about one-third of the nation's interconnects and is a participant in interconnects managed by other MVPDs. This MVPD also said that creation and management of an interconnect requires significant investments in personnel and technology. MVPD stakeholders also told us that some interconnects are managed by national advertising representation firms.

Revenue sharing: MVPDs said that revenue from the sale of advertising through an interconnect is generally prorated among the MVPDs in the interconnect based on the amount of inventory provided for sale and their number of subscribers in the DMA.

Some stakeholders that filed comments in FCC proceedings and whom we interviewed stated that local stations and MVPDs benefit economically from JSAs and interconnects, respectively. However, some stakeholders expressed concerns with how these agreements may impact local markets. Stakeholders we selected to interview included local station owners, MVPDs, industry associations, public-interest groups, academics, and financial analysts.

According to most of the local station owners that commented in FCC's JSA rulemaking and all that we interviewed, a primary benefit of JSAs, as well as other sharing agreements, is that they allow local stations to cut costs. As previously discussed, a JSA is generally accompanied by a shared service agreement, and they are sometimes the same agreement. Further, in discussing their views, a few station owners told us that while JSAs provide some cost savings, shared service agreements provide most of the savings and that it generally did not make sense to talk about JSAs without also discussing shared service agreements.
Consequently, our discussion of stakeholders’ views is based on comments about the use of both JSAs and shared service agreements, which we refer to together as “sharing agreements.” All of the station owners we interviewed said that the savings from these sharing agreements help financially struggling stations survive when they might otherwise go out of business, with some owners saying this is particularly the case with stations in smaller markets. Some station owners and financial analysts told us that as local stations are facing increased competition for advertising revenues from other media, such as Internet-based media and MVPDs, stations rely on sharing agreements to cut costs. Our analysis of BIA/Kelsey local advertising revenue data showed that the market share of Internet-based media increased from 11 percent in 2011 to 17 percent in 2014—a percentage comparable to the market share for broadcast television, which was 15 percent in 2014 (up slightly from 13 percent in 2011). This share is higher than the market share of MVPDs, which was about 5 percent in both years. (For more results of our analysis of this market data, see app. II.) Most media industry stakeholders and financial analysts whom we interviewed consistently identified the growth in Internet-based media as a major change in local advertising markets in recent years, coming at a time when market shares for other types of media, such as radio and newspapers, have been relatively flat or in decline. Furthermore, two station owners and two financial analysts told us that stations in smaller markets are more likely to use JSAs than stations in larger markets because local stations earn less advertising revenue in small markets than in large markets, while they said their costs are roughly the same regardless of market size. Our analysis of BIA/Kelsey estimates of local advertising revenue supports this revenue claim. 
Specifically, stations in the 25 largest DMAs had average revenue of $28.0 million per station in 2014—more than nine times the $3.0 million average revenue per station in the smallest DMAs (those ranked 101 to 210). According to some station owners and financial analysts, stations view JSAs and shared service agreements as a means of remaining financially viable in small markets. In contrast, a local station owner that has stations in larger markets said that there is no need for JSAs in markets like Los Angeles or New York.

In addition to cutting costs and helping local stations remain financially viable, station owners also told us that the associated cost savings from sharing agreements enable stations to make investments that help them compete with other local media and provide benefits to the stations and their communities, thereby supporting FCC's goals of enhancing competition, diversity, and localism. Benefits station owners cited include:

Investments in diverse programming: Some station owners filing in FCC's JSA rulemaking (6 of 18) and whom we interviewed (4 of 10) said that sharing agreements allow them to enhance the diversity of local programming. For example, representatives from Univision, a Spanish-language network and a station owner, told us the company has used JSAs and shared service agreements with another station owner, Entravision, to establish and expand Univision's second Spanish-language network, UniMás. Under this arrangement, Entravision provides services for Univision's stations that carry UniMás programming. According to Univision representatives, the resulting cost savings have allowed it to launch UniMás in six markets, growing the network faster than it could have without sharing agreements.

Increased local news coverage: According to some station owners that filed comments in FCC's JSA rulemaking and that we interviewed, producing local news is expensive, and stations find it financially challenging to produce local news, particularly in smaller markets. Most station owners that filed FCC comments (10 of 18) and most that we interviewed (7 of 10) said JSAs and shared service agreements allow local stations to air local news when they would otherwise be unable to or to expand or improve their existing news coverage. For example, one owner of a small-market station told us its JSA and shared service agreement with a larger station owner in the same market have allowed both stations to share resources, thereby reducing costs and improving their news services.

Improved services: Some station owners that filed FCC comments (8 of 18) and that we interviewed (4 of 10) said JSAs enable stations to improve service quality. For example, one station owner told us that savings associated with its JSA allowed the station to upgrade its broadcast to high definition, which helped the station better compete for advertising dollars, since many advertisers will not buy advertising unless it is in high definition.

FCC itself has acknowledged that JSAs may have benefits. Specifically, in the order FCC released with its 2014 JSA rules, FCC stated that cooperation among local stations may have public-interest benefits under some circumstances, particularly in small to mid-sized markets. FCC also stated that JSAs may, for example, facilitate cost savings and efficiencies that could enable the stations to provide more locally oriented programming. Conversely, some stakeholders raised concerns about how JSAs and shared service agreements may affect local markets.
For example, one of the four public-interest groups that filed comments in FCC’s JSA rulemaking, as well as the two public-interest groups and one of the two academic stakeholders we interviewed, said that JSAs do not support the long-standing policy goals to encourage competition, diversity, and localism. Stakeholders raised concerns about JSAs in the following areas:

Undue influence: One MVPD association, one labor union, and two public-interest groups that filed comments in FCC’s JSA rulemaking and one MVPD, both public-interest groups, and both academic stakeholders we interviewed said that JSAs and other sharing agreements create the potential for undue influence over station operations. Specifically, according to some of these stakeholders, such influence could occur because the agreements create a financial interest. With JSAs, this is because the sales-agent station sells the customer station’s advertising, which is a principal source of the customer station’s revenue. Furthermore, one MVPD, two public-interest groups, and two labor unions that filed FCC comments, as well as both of the public-interest groups and one of the two academic stakeholders we interviewed, said JSAs and other sharing agreements allow station owners to circumvent FCC’s media ownership rules. As previously discussed, FCC’s ownership rules limit the number of local stations an entity can control in a local market.

Reduced competition for advertising: According to most public-interest groups (three of four) that filed FCC comments and two of the ten MVPDs, both of the public-interest groups, and one of the two academic stakeholders we interviewed, local stations’ use of JSAs effectively reduces competition for advertising dollars in local markets because, for example, stations within a JSA may combine their sales forces and no longer compete with each other for advertising revenue.
Some of these stakeholders raised the concern that this reduced competition may create negative impacts in the market, such as allowing stations with JSAs to capture more of the local advertising market, putting other stations without JSAs at a disadvantage.

Reduced diversity: Both public-interest groups and one of the academic stakeholders we interviewed said that sharing agreements reduce diversity in local markets, including diversity in terms of programming or station ownership. For example, an academic stakeholder told us that the use of such agreements results in the same entity effectively controlling the content of one or more stations.

Reduced localism: According to one of the four public-interest groups and the one academic stakeholder that filed FCC comments, as well as both of the academic stakeholders we interviewed, the use of sharing agreements can lead to the reduced provision of local news. For example, an academic stakeholder said in its FCC filing that JSAs and shared service agreements have negatively impacted the Syracuse market, because two stations consolidated operations under these agreements and in the process cut one of the station’s news operations.

According to 9 of the 10 MVPD stakeholders we interviewed, a primary benefit of an interconnect is to aggregate the available advertising time among various MVPDs in a local market. This enables MVPDs to collectively reach a greater number of households in that market than any single MVPD could reach with its advertising time. Six of these MVPDs said this increased reach enables MVPDs participating in interconnects to better compete with other local media, particularly local broadcast television stations, since it enables them to have a market reach that is closer to that of a local station. Further, four MVPDs also told us that this increased reach makes MVPDs’ advertising time more valuable to advertisers.
According to some MVPD stakeholders, without interconnects, the reach of each MVPD’s advertising time would include only that MVPD’s subscribers, which could be a small percentage of households in the local market. As a result, according to three MVPDs, sometimes advertisers would not purchase MVPD advertising time. Nine of the ten MVPD stakeholders said that aggregating advertising inventory through interconnects also enhances the efficiency of advertising sales by enabling advertisers to buy advertising time across a number of MVPDs in a given local market through a single purchase. MVPD stakeholders we interviewed also noted that interconnects can reduce costs. Two MVPDs we interviewed and two larger MVPDs that filed comments in FCC’s Comcast/Time Warner Cable merger proceeding said interconnects allow some MVPDs to cut costs because one MVPD manages the advertising sales and technological implementation of the interconnect for all of the participating MVPDs, whereas without an interconnect, each MVPD would maintain a sales staff.

In contrast, smaller MVPD stakeholders that commented in the Comcast/Time Warner merger proceeding and some stakeholders we interviewed raised concerns about interconnects. Specifically, 6 of the 10 station owners we interviewed told us MVPDs have an unfair competitive advantage over local stations because, for example, FCC regulates station owners’ JSAs but not MVPDs’ interconnects. Four of these station owners noted that MVPDs are therefore allowed to take advantage of efficiencies and savings through their own type of advertising sales agreement, while local stations face regulatory constraints in doing so.
Additionally, five small MVPD stakeholders and an advertising representation firm that works with small MVPDs that commented in the Comcast/Time Warner merger proceeding said that larger MVPDs that manage interconnects treat smaller MVPDs unfairly—or have the potential to—by applying conditions to the smaller MVPDs’ participation in interconnects, such as excluding some smaller MVPDs from interconnects if the smaller MVPDs use an advertising representation firm that competes with the large MVPD’s national advertising arm. Four small MVPDs and an MVPD advertising representation firm that submitted comments in the Comcast/Time Warner merger proceeding said that excluding MVPDs from interconnects decreases revenue for the excluded MVPDs. Two larger MVPDs that provided comments in the merger proceeding, however, indicated that they do not engage in such practices.

While opinions differ on how JSAs among local stations and interconnects among MVPDs affect the media landscape, FCC has defined JSAs and required that they be placed in local stations’ public inspection files. Moreover, in 2014, FCC issued a rule that requires that where a JSA encompasses more than 15 percent of another station’s weekly advertising time, the JSA will count toward the local-station ownership limit. FCC requires that each broadcast television station with a JSA file the JSA in its public inspection file, including in the station’s online file on FCC’s website—regardless of whether the station is the sales-agent station or the customer station. This requirement is intended to improve the transparency of local stations’ operations so that the public can have a more active role in assessing stations’ operations in their local markets. However, we found that a considerable number of JSAs filed by customer stations were not also filed by a sales-agent station—and that FCC has not taken sufficient steps to determine the extent to which broadcast television stations are complying with this rule.
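FCC’s 2014 attribution rule is, at its core, a simple threshold test: a JSA encompassing more than 15 percent of another station’s weekly advertising time counts toward the ownership limit. The sketch below illustrates that test; the function name and the advertising-time figures are hypothetical examples, not drawn from FCC’s rule text or any actual filing.

```python
# Hypothetical sketch of FCC's 2014 JSA attribution rule: a JSA that
# encompasses more than 15 percent of another station's weekly advertising
# time counts toward the local ownership limit. Figures are illustrative.

ATTRIBUTION_THRESHOLD = 0.15  # "more than 15 percent" triggers attribution

def jsa_is_attributable(minutes_sold_weekly, total_ad_minutes_weekly):
    """Return True if the JSA would count toward the ownership limit."""
    if total_ad_minutes_weekly <= 0:
        raise ValueError("total weekly advertising time must be positive")
    share = minutes_sold_weekly / total_ad_minutes_weekly
    return share > ATTRIBUTION_THRESHOLD

# A sales-agent station selling 300 of a customer station's 1,000 weekly
# advertising minutes (30 percent) exceeds the threshold:
print(jsa_is_attributable(300, 1000))  # True
# Selling exactly 15 percent does not ("more than" is a strict inequality):
print(jsa_is_attributable(150, 1000))  # False
```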
If stations that use JSAs as part of their advertising operations have neglected to file or update their JSAs in their public inspection files, interested parties may be unaware that the stations have such arrangements and therefore lack insight into this aspect of local television operations. Consequently, the transparency of local television markets is diminished, preventing the public from effectively assessing and engaging stations with regard to local stations’ public interest obligations. We recommend that the Chairman of FCC review JSAs filed in stations’ public inspection files to identify stations involved in those JSAs and take action to ensure that each station involved has filed its JSA as required. We provided a draft of this report to FCC for review and comment. We received written comments from FCC, which are reproduced in appendix III. In response to our recommendation, FCC stated that it shares our concern that potential noncompliance with FCC’s JSA filing requirement could affect the transparency of local television markets. Further, FCC stated it will take action to help ensure that broadcasters are aware of and in compliance with their public file obligations regarding JSAs and that any noncompliance is disclosed to FCC, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the FCC Chairman, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
The objectives for this report were to examine (1) what available information indicates about the prevalence and characteristics of advertising sales agreements among local broadcast television stations (local stations) or multichannel video programming distributors (MVPD), and (2) selected stakeholders’ perspectives on the impacts of advertising sales agreements among local stations or MVPDs. To determine the prevalence and characteristics of advertising sales agreements—specifically joint sales agreements (JSA)—among local broadcast television stations (local stations), we obtained and analyzed all JSAs found in the stations’ public inspection files on the Federal Communications Commission’s (FCC) website. We identified documents as JSAs if they were labeled as such or if they were a shared service agreement that included provisions for the joint-sale of advertising. We excluded duplicate copies of the same JSA in our analysis, a JSA that we determined had expired according to the date in the JSA, and documents filed in a JSA folder that were not JSAs, such as local marketing agreements and shared service agreements that did not have an advertising component. According to FCC officials, although stations are required to place copies of their JSAs in their public inspection files, FCC officials have not independently verified whether each station has done so, and the officials said that they were not aware of any JSAs that were mislabeled or misfiled. To provide assurance our review of JSAs was as comprehensive as possible, we also purchased JSA data from BIA/Kelsey, a media research and consulting firm. BIA/Kelsey developed its JSA data by reviewing information in the trade press, analyzing FCC filings, and through direct contact with television stations to ask for information such as the presence of JSAs. 
We compared BIA/Kelsey’s data against the JSAs we were able to identify in stations’ public inspection files and did not identify any additional JSAs in this data that were not available in the file. We assessed the reliability of using BIA/Kelsey’s JSA data for the purpose outlined here by obtaining information from BIA/Kelsey about how the data were collected and maintained and determined that the data were sufficiently reliable for this purpose. We analyzed the JSAs we obtained from FCC to determine their characteristics such as which stations and owners were party to the agreements, their date and duration, and advertising sales provisions. We also analyzed the filings of JSAs in stations’ public inspection folders to identify if any JSAs were filed in one station’s folder but missing from the folder of another station involved in the JSA. Where JSAs did not mention a specific sales-agent station, we identified the probable station by reviewing publicly available information about television station owners or contacting another station involved in the JSA and examined the public inspection files of those stations. We evaluated FCC’s efforts to ensure completeness of stations’ JSA filings based on FCC’s rules and stated expectations for stations’ public files and federal internal control standards related to information and communications. To determine the prevalence and characteristics of advertising sales agreements—specifically interconnects—among MVPDs, we interviewed selected stakeholders (as listed later in this section) about their knowledge of the prevalence and characteristics of interconnects. We attempted to obtain data on the number of interconnects in the United States from various media industry sources; however, we were unable to establish the reliability of these data due to differences in the methodologies between the various sources that made the numbers inconsistent. 
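The cross-check of stations’ public inspection files described above is essentially a set comparison: for each JSA, every station party to the agreement should have the agreement in its own file. A minimal sketch of that logic, using hypothetical call signs and JSA identifiers rather than actual filings:

```python
# Illustrative cross-check: for each JSA, verify that every station party
# to the agreement has the agreement in its own public inspection file.
# Call signs and JSA identifiers are hypothetical.

# Which stations are party to each identified JSA.
jsa_parties = {
    "JSA-001": {"KAAA", "KBBB"},
    "JSA-002": {"KCCC", "KDDD"},
}

# Which JSAs each station actually filed in its public inspection file.
filings = {
    "KAAA": {"JSA-001"},
    "KBBB": {"JSA-001"},
    "KCCC": {"JSA-002"},
    "KDDD": set(),  # party to JSA-002 but did not file it
}

def missing_filings(jsa_parties, filings):
    """Return {jsa_id: stations party to it that did not file it}."""
    gaps = {}
    for jsa_id, parties in jsa_parties.items():
        missing = {s for s in parties if jsa_id not in filings.get(s, set())}
        if missing:
            gaps[jsa_id] = missing
    return gaps

print(missing_filings(jsa_parties, filings))  # {'JSA-002': {'KDDD'}}
```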
To assess selected stakeholders’ perspectives on advertising sales agreements among local stations and MVPDs, we reviewed filings in two FCC proceedings: (1) the proceeding for FCC’s rulemaking on television joint sales agreements and (2) FCC’s review of a 2014 proposed merger between Comcast and Time Warner Cable. We analyzed filings that stated specific benefits or concerns related to JSAs or interconnects. We also interviewed FCC officials and the following stakeholders about their perspectives on JSAs and interconnects:

eight local station owners that we selected to represent companies of various sizes and those that do and do not have JSAs;

five MVPDs selected to represent cable, satellite, and telecommunications providers and two companies selling advertising on their behalf;

five media industry associations selected because they represent broadcasters, large and small MVPDs, and advertising sellers;

two public interest groups and two academic stakeholders selected because they filed comments in the FCC JSA rulemaking or were recommended by other stakeholders; and

five financial analysts selected based on our prior work and our research on their backgrounds.

Table 4 lists the stakeholders we interviewed. For contextual information about the advertising revenue and market shares of local media, we purchased data from BIA/Kelsey on the estimated local advertising revenue of 12 types of local media: broadcast television stations, broadcast radio stations, MVPDs, newspapers, magazines, direct mail, out-of-home (a category of advertising that includes billboards and other signs in public places), yellow pages, online (i.e., websites), mobile, email, and Internet yellow pages. Since 2009, BIA/Kelsey has released nationwide forecasts for local media advertising with estimates of local advertising for these 12 media categories.
BIA/Kelsey allocated its national estimates to each of the 210 Nielsen-defined local television markets, known as “designated market areas” (DMA), based on county-by-county demographic and economic data and BIA/Kelsey’s internal estimates on various media. BIA/Kelsey checked its estimates with publicly available information on many of the public companies that are part of its media categories. BIA/Kelsey stated that the resulting data should be considered as approximate estimates to provide a general view of local advertising markets and changes in those markets. We obtained these data for each of the 210 DMAs for years 2011 and 2014. We chose these years because 2014 would be the most recent year of complete data and 2011 would provide a comparison year during the economic recovery. Prior to purchasing the data from BIA/Kelsey, we researched potential sources of such data by interviewing stakeholders and reviewing our prior work on media ownership. We solicited proposals from companies that we identified as potentially having the data we needed and evaluated these proposals to determine which would meet our requirements. We assessed the reliability of BIA/Kelsey’s data for the purpose of providing contextual information about local media market shares by discussing these data with industry stakeholders and obtaining information from BIA/Kelsey about how they collect and maintain the data. We determined that the data were sufficiently reliable for this purpose. As previously discussed, several broadcast television entities have filed an ongoing lawsuit against FCC over the 2014 JSA rule. This lawsuit alleges that FCC evaded its legal obligations by not completing its review of its media ownership rules and that FCC violated its statutory obligations by promulgating the JSA rule on the basis of these ownership rules. Due to this lawsuit, we limited the scope of our review.
Specifically, we did not evaluate FCC’s efforts related to the JSA rulemaking, nor did we evaluate FCC’s efforts to review its media ownership rules. We conducted this performance audit from April 2015 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Revenue from the sale of advertising is earned by a variety of types of local media. Local media advertising generated approximately $136 billion in revenue in 2014, a slight decrease from the $139 billion (2014 dollars) in 2011, according to data from BIA/Kelsey, a media research and consulting firm. The market for local advertising revenue includes a number of different types of media, such as local television and out-of-home venues (which encompasses billboards and ads on buses, among other things). The percentage of local advertising revenue that goes to each type of media is referred to as its market share. Recent changes in the local media landscape have led to some shifts in market share. We obtained data on local advertising revenue from BIA/Kelsey for 2011 and 2014 for 12 types of local media: broadcast television stations, broadcast radio stations, MVPDs, newspapers, magazines, direct mail, out-of-home, yellow pages, online (i.e., websites), mobile, email, and Internet yellow pages. We analyzed these data to identify differences in the local advertising market shares among these various types of local media and how these market shares may have changed in recent years.
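The market-share calculation described above is each medium’s estimated local advertising revenue divided by the total across all media. The sketch below illustrates the arithmetic; the per-medium dollar figures are hypothetical allocations (chosen to sum to roughly the $136 billion 2014 total cited above), not BIA/Kelsey’s actual per-medium estimates.

```python
# Illustrative market-share calculation. Revenue figures (in $ billions)
# are hypothetical allocations, not BIA/Kelsey's actual estimates.
revenue_2014 = {
    "direct mail": 21.0,
    "broadcast television": 20.5,
    "newspapers": 20.0,
    "internet-based": 23.0,  # mobile + online + email + Internet yellow pages
    "MVPDs": 7.0,
    "all other media": 44.5,
}

def market_shares(revenue_by_medium):
    """Return each medium's share of total local advertising revenue."""
    total = sum(revenue_by_medium.values())
    return {medium: rev / total for medium, rev in revenue_by_medium.items()}

shares = market_shares(revenue_2014)
print(f"MVPD share: {shares['MVPDs']:.1%}")  # MVPD share: 5.1%
```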
According to our analysis of these data, the largest sellers of local advertising in 2014 were direct mail, broadcast television, and newspapers, which each have a market share of about 15 percent or more, based on estimates of their local advertising revenue across all U.S. local media markets. In contrast, MVPDs’ market share was about 5 percent in 2014, according to the BIA/Kelsey data. When the market shares of various Internet-based media (mobile, online, email, and Internet yellow pages) are combined, their market share (17 percent) rivals that of the largest local advertising sellers (see fig. 2). Our analysis also revealed some trends in the market shares of these media when comparing estimates for 2011 and 2014. The most significant change in market share in these recent years is among Internet-based media, with a market-share increase from 11 percent in 2011 to 17 percent in 2014 (see fig. 3). Most stakeholders we interviewed similarly identified Internet-based or “digital” media as having significant market growth during this time. Broadcast television’s market share grew slightly between 2011 and 2014, and MVPDs’ market share was relatively flat between 2011 and 2014, according to the data. Print media (newspapers, magazines, direct mail, yellow pages) and radio all saw market-share declines from 2011 to 2014, according to the data, and two of the financial analysts told us that television market share has flattened in recent years and may soon decline. According to many stakeholders, advertisers shifting their business to Internet-based media accounts for changes in market shares, particularly the declines among print media. Mark L. Goldstein, (202) 512-2834 or goldsteinm@gao.gov. 
In addition to the contact named above, Alwynne Wilbur (Assistant Director), Amy Abramowitz, Melissa Bodeau, Michael Clements, Leia Dickerson, Andrew Huddleston, Crystal Huggins, Hannah Laufe, Meredith Lilley, Grant Mallie, Malika Rice, Kelly Rubin, and Larry Thomas made key contributions to this report.
Television stations, which provide free, over-the-air programming, and MVPDs, which provide subscription television services, compete with other local media for advertising revenue. FCC rules limit the number of local stations an entity can own in one market to promote competition and other public interests. Some station owners created joint sales agreements to potentially cut costs. In 2014, finding that such agreements confer influence akin to ownership, FCC adopted rules that require that where such agreements encompass more than 15 percent of the weekly advertising time of another station, they will count toward FCC's ownership limits. MVPDs also have arrangements (“interconnects”) for jointly selling advertising in a local market. GAO was asked to examine the role of advertising agreements in local media markets. This report examines (1) the prevalence and characteristics of such agreements, and (2) stakeholders' perspectives on these agreements. GAO examined publicly available joint sales agreements and interviewed FCC officials and media, public interest, academic, and financial stakeholders about their views. Stakeholders were selected to represent a range of companies and from those who submitted comments on FCC's rules, among other reasons. Agreements among station owners allowing stations to jointly sell advertising—known as “joint sales agreements”—are mostly in smaller markets and include provisions such as the amount of advertising time sold and how stations share revenue. Some of these agreements also included provisions typical of other types of sharing agreements. The Federal Communications Commission (FCC) requires each station involved in a joint sales agreement to file the agreement in the station's public inspection file. According to FCC, these files are meant to provide the public increased transparency about the operation of local stations and encourage public participation in ensuring that stations serve the public interest. 
GAO reviewed all joint sales agreements found in stations' public files and identified 86 such agreements among stations. GAO also found inconsistencies in the filing of these agreements. Specifically, 25 of these agreements were filed by one station but not by others involved in the agreements. FCC addresses compliance with this filing requirement through its periodic reviews of station licensing and in response to complaints. However, FCC officials said neither of these approaches has identified agreements that should be filed but have not been, and FCC has not reviewed the completeness of stations' joint sales agreement filings. If stations with joint sales agreements are not filing these agreements as required, a member of the public reviewing such a station's public file would not see in the file that the station's advertising sales involve joint sales with another station. Most multichannel video programming distributor (MVPD) stakeholders GAO interviewed said that interconnects exist in most markets. These arrangements allow an advertiser to purchase advertising from a single point to be simultaneously distributed to all MVPDs in a local market participating in the interconnect. Stakeholders GAO interviewed—including station owners, MVPDs, media industry associations, and financial analysts—said that joint sales agreements and interconnects can provide economic benefits for television stations and MVPDs, respectively. Joint sales agreements allow stations to cut advertising costs, since one station generally performs this role for both stations. For example, some station owners said they used the savings from joint sales agreements and other service-sharing agreements to invest in and improve local programming. 
Some selected station owners and financial analysts said that stations in smaller markets are more likely to use joint sales agreements because stations in smaller markets receive less advertising revenue while having similar costs as stations in larger markets. Other stakeholders, including public-interest groups and academics, raised concerns about how these agreements may negatively affect local markets. For example, some public-interest groups said that using these agreements reduces competition in the local market and allows broadcasters to circumvent FCC's ownership rules. MVPDs stated that interconnects allow MVPDs to better compete with broadcasters for local advertising revenue by increasing the potential reach of an advertisement to subscribers of MVPDs participating in the interconnect. Some small MVPDs raised concerns that large MVPDs that manage interconnects may impose unfair terms as a condition of their participation in the interconnect. However, large MVPDs said they do not engage in such practices. FCC should review joint sales agreements filed in stations' public files to identify missing agreements and take action to ensure the files are complete. FCC said it would take action to ensure compliance with its public file requirement.
Each of the 37 banks (12 district and 25 branches) in the Federal Reserve System prepares monthly currency activity reports, known as the FR 160 reports. The monthly currency activity reports are transmitted to the Board of Governors to document movement of currency through the banks and to summarize the total currency on hand in the respective banks’ vaults. These reports and the underlying systems that they are generated from constitute the Federal Reserve’s only detailed records of currency transactions throughout the Federal Reserve System and the respective ending balances by denomination. Thus, information on monthly currency movement in and out of Federal Reserve Banks provided to Federal Reserve management (including the Board of Governors), the Congress, and other external users of this information would be based on data from the monthly currency activity reports. According to the Board of Governors, the uses of this report are four-fold: “to provide an inventory of collateralized Federal Reserve notes, to monitor payout patterns, to assess the currency stock needs of the various districts, and to generate a variety of ongoing and ad hoc reports for the Board, Reserve Banks, other government entities, and the public.” In addition, each Federal Reserve Bank (FRB) prepares a daily balance sheet (the FR 34 report) that shows all of the assets, liabilities, and equity for the bank. In particular, the daily balance sheet shows the balance of currency in the respective bank’s vault at the end of each day. At the end of each month, to ensure agreement, the reported vault cash balance in the last daily balance sheet of the month is compared to the ending balance reported in the month-end currency activity reports. The L.A. Bank manages over $80 billion a year in currency, second only to the New York FRB. The L.A. Bank uses an electronic cash inventory system to manage this currency, but not every FRB uses the same system or even an electronic one. 
A Board of Governors official stated that the Philadelphia and Atlanta Federal Reserve District Banks, including their respective branch banks, also use the same cash inventory system as the San Francisco District Bank and its branches, including the L.A. Bank. The official stated that the New York and Dallas District Banks have other electronic information systems to account for their detailed cash transactions. Board officials also said that systems in the Kansas City, Minneapolis, Chicago, Cleveland, and Richmond District Banks are housed in a personal computer-based local area network. The two remaining district banks, Boston and St. Louis, manually account for these transactions and inventory of cash on hand. The objectives of our review at the L.A. Bank were to determine the nature of the problems that may have occurred in reporting currency activity for Federal Reserve note receipts, payments, and amount on hand and review and comment on corrective actions planned or taken by the Federal Reserve to resolve those problems. We conducted our review in three parts. First, we examined the use and preparation of the monthly currency activity report. To accomplish this, we (1) reviewed the L.A. Branch and San Francisco District Bank’s policies and procedures for preparing this report, (2) met with officials at the Board of Governors and examined official policies to determine the uses of the report, and (3) interviewed analysts and managers at the L.A. Bank to determine how staff were told to prepare the currency activity report and what controls were in place to ensure that the numbers reported were accurate. In the second part of our review—determining the nature of the reporting problems—we attempted to perform a comprehensive assessment of the L.A. Bank’s accounting practices and internal controls over currency. However, our efforts were restricted by the lack of readily available historical data maintained by the L.A. Bank. For example, L.A. 
Bank officials stated that they could not readily provide the detailed general ledger transactions that had been recorded for the currency in their account. L.A. Bank officials stated that the information was not stored in a format that would allow for detailed analysis of transactions and that conversion to such a format would take a significant amount of time. For 6 judgmentally selected days in the October through December 1995 period, we attempted to perform limited reviews of the L.A. Bank’s reconciliations. These reconciliations compare the Bank’s general ledger balances (which are used to prepare the daily balance sheet) to its cash inventory system (which contains the physical inventory file). However, our efforts to perform limited reviews of these 6 days were hindered because the L.A. Bank could not locate some of the requested data. For instance, the L.A. Bank could not locate the report containing the ending balance of the amount of currency in the vault as reported in its cash inventory system for one of the days selected in our review. To enhance our understanding of the Bank’s reconciliation process, we also did a walkthrough of 1 day’s reconciliation efforts with bank employees in June 1996. In addition, for October through December 1995, we examined transactions in general ledger accounts that were used to account for reconciling differences found that were either written off or were temporarily held aside for further research and disposition. We gathered information on Bank procedures for resolving out-of-balance situations and differences between amounts reported and actually received from banks. Because the L.A. Bank could not provide the general ledger transaction history for its cash accounts, we could not determine whether the accounts and activity provided to us by the Bank represented the universe of cash activity. Thus, we only tested the transactions provided to us. Further, we did not perform a review of (1) the L.A.
Bank’s computer security controls for preventing unauthorized access to its general ledger and cash inventory system or (2) its physical access controls for ensuring that the money it manages is protected from theft and misappropriation. In the third part of our review, we interviewed Bank officials and reviewed the new procedures for preparing the currency activity reports and the revised reports to determine if their efforts to comply with their policy for preparing these reports were effective in resolving the problems identified. We conducted our work at the Federal Reserve Bank in San Francisco and its branch bank in Los Angeles between June 1996 and August 1996 in accordance with generally accepted government auditing standards. The monthly currency activity reports are required to be prepared in accordance with guidance in the Board of Governors’ Technical Memorandum No. 91 “Processing Procedures for the CASH Series.” This guidance states that the calculated ending balance in the monthly currency activity report should be compared to the reported end-of-the-month balance for cash in the vault on the Bank’s daily balance sheet and that corrective actions should be taken to resolve any substantial differences. This guidance also underscores that, if requested, explanations must be provided for any differences (other than rounding) between the month-end balance sheet amount and the ending balance on the currency activity report for cash in the vault. This guidance does not state how the amounts reported in the currency activity report are to be determined. However, to complete the report in a meaningful way, each reported amount, except the ending balance for cash in the vault, which is calculated as noted above, would need to be independently determined. Table 1 shows excerpts from the L.A. Bank’s spreadsheet used to prepare the monthly currency activity report for December 1995. 
Table 2 provides excerpts from the revised spreadsheet on currency activity for December 1995—the revision was not transmitted to the Board of Governors. As noted on page 9, inaccuracies in amounts reported on the monthly currency activity reports for the fourth quarter of 1995 were discovered during a compliance review. As a result of this review, the revised spreadsheet was developed by the L.A. Bank. As can be seen from comparing the L.A. Bank’s spreadsheet for the filed currency activity report for December 1995 (table 1) to the revised spreadsheet (table 2), the forced amounts for receipts from circulation changed after the L.A. Bank conducted a compliance review and identified inaccuracies. The practice of forcing the receipts from circulation amount in the monthly currency activity report, as opposed to independently determining the amount, is not consistent with the Board of Governors’ guidance for validating the accuracy of the currency activity report. Federal Reserve officials stated that it is a common practice for FRBs to adjust the receipts from circulation line to balance the monthly currency activity report ending total to the balance sheet if it is within the plus or minus $3 million tolerance established by the Board of Governors. However, at least for October through December 1995, the L.A. Bank did not determine that the forced amount was within the $3 million tolerance. Forcing receipts from circulation allowed errors to occur that were neither reported nor explained. In fact, this practice obscures any other differences that might exist between the two reports. Had these amounts been determined using appropriate procedures, the ending balance of cash on hand, which is intended to be the calculated amount in the monthly currency activity report, would have been at variance with the daily balance sheet. Consequently, the differences would have been researched and corrected or explained. L.A. 
Bank internal correspondence confirmed that the Bank’s problems preparing and reporting the monthly currency activity report were initially found by an analyst who was responsible for preparing the report. The analyst stated that queries made from the cash inventory system to identify receipts from circulation for the report showed substantial differences from the amount that was forced in the report. Bank management officials in the L.A. Bank and its San Francisco district bank confirmed that analysts were instructed to force the amount in the report for receipts from circulation and that this practice had been in place for several years. L.A. Bank officials stated that the discrepancies reported in the monthly currency activity reports for the fourth quarter of 1995 were brought to their attention as a result of a planned compliance review. They stated that through the review, performed under the direction of Bank management and completed in January 1996, the compliance analyst discovered and communicated to Bank management that incorrect amounts appeared to be reported on the monthly currency activity reports for the fourth quarter of 1995. As part of this review, the compliance analyst found that the preparer of the reports had identified discrepancies between the preparer’s efforts to independently calculate receipts from circulation and the forced amount. Using data obtained through queries to the cash inventory system combined with manual records, the compliance analyst initially recalculated the receipts from circulation. The analyst determined that receipts from circulation in October should have been $5.8 million more than what was originally reported; in November, $61.8 million less; and, in December, $111 million more. In addition to the errors identified for receipts from circulation, we confirmed that other errors had been obscured as a result of the L.A. Bank’s practice of forcing the receipts from circulation. 
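The mechanics of forcing can be illustrated in a few lines. When receipts from circulation is derived as the plug that ties the report to the balance sheet, any error in another line item is silently absorbed into the plug instead of surfacing as a discrepancy. The figures below (in millions) are hypothetical:

```python
def force_receipts(ending, beginning, payments, shipments_out):
    """Derive receipts from circulation as a balancing plug, i.e., solve
    ending = beginning + receipts - payments - shipments_out for receipts."""
    return ending - beginning + payments + shipments_out

beginning, payments, shipments_out = 1_000, 450, 96   # $ millions, hypothetical
true_receipts = 500
gl_ending = beginning + true_receipts - payments - shipments_out   # 954

# With every line item present, the plug happens to equal the true figure.
assert force_receipts(gl_ending, beginning, payments, shipments_out) == 500

# An inter-FRB shipment omitted from the report: the plug absorbs the error,
# so the report still ties out with no visible discrepancy.
forced = force_receipts(gl_ending, beginning, payments, 0)
assert forced == 404   # understated by the omitted 96
```

Because the plug always makes the report balance, the practice hides exactly the kind of omission it should have exposed.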
The December 1995 currency activity report contained errors aggregating about $121 million, which resulted in the above-noted $111 million understatement in reported receipts from circulation. Specifically, it failed to include $96 million shipped to the branch bank in Seattle because a clerk did not include the transaction on the manual log used to record shipments of currency between FRBs. Another $5 million received from the New York FRB was excluded from the report because it too was not in the manual log. The report preparer also did not include most of the manual transactions for the month, which amounted to about $18 million paid into circulation. Finally, the report preparer incorrectly included about $2 million in coin receipts on the currency report. The October 1995 currency activity report had a $2.7 million error. While preparing the report, the report preparer mistakenly entered $300,000 for a $3,000,000 amount paid into circulation. This error resulted in understating the amount of currency paid into circulation by $2.7 million, which caused the forced amount received from circulation to be understated. We verified that the L.A. Bank’s subsequent efforts to independently determine receipts from circulation for October through December 1995 showed that the initially filed reports were incorrect. These efforts were known by L.A. Bank officials to be incomplete because they did not account for differences that occur due to the time lag between receipt and processing of currency. Thus, other errors could have existed that were not detected. Due to our time constraints and the resulting limited nature of our work, we did not attempt to determine what the correct amount should have been or if other errors were made. Officials at the L.A.
Bank assumed that the data in their general ledger as reported on the daily balance sheet were correct and have therefore focused their efforts on correcting and improving the preparation of the monthly currency activity reports. Officials said that their objectives were to (1) eliminate errors, (2) independently determine the receipts from circulation amount in the currency activity report, and (3) ensure that the currency activity reports’ ending balances equal the daily balance sheets within the Board of Governors’ policy of plus or minus $3 million. L.A. Bank officials stated that they were confident that implementation of their new procedures, as applied in April 1996, would correct the reporting inaccuracies associated with the receipts from circulation line. After the compliance review was completed, branch analysts reviewed several previously issued currency activity reports to identify and research the causes of errors. In an effort to prevent data entry errors and help ensure that all data is included in the receipts from circulation, Bank officials said that they now require supervisory review before the report is transmitted to the Board of Governors. In this process, an L.A. Bank officer and supervisor are to review the reports and the supporting documentation for each line item. L.A. Bank officials stated that important actions were taken to revise a series of queries to the cash inventory system in an attempt to independently determine receipts from circulation. Officials said that the queries of the cash inventory system are now used to collect some of the data that were previously collected from manual logs. For amounts not included in the cash inventory system, the Bank continues to collect the data manually. 
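Under the new approach described by Bank officials, receipts from circulation is built up from system queries plus manually recorded transactions and only then compared to the balance-sheet-implied figure, so a residual becomes something to research rather than a number to force. A sketch under our reading of the described procedures — the names and amounts are illustrative:

```python
def independent_receipts(inventory_query_totals, manual_record_amounts):
    """Independently determine receipts from circulation from cash inventory
    system queries plus manually collected transactions."""
    return sum(inventory_query_totals) + sum(manual_record_amounts)

def unexplained_difference(independent, balance_sheet_implied):
    """The residual to research -- not a plug to force."""
    return balance_sheet_implied - independent

receipts = independent_receipts([250_000_000, 155_000_000], [5_000_000])
residual = unexplained_difference(receipts, 412_000_000)  # to be researched
```

The supervisory review the officials described would then cover both the line-item support and the disposition of any residual before the report is transmitted.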
In addition, the officials stated that queries are used to determine differences that can occur in the Bank’s cash inventory system and general ledger due to the time lag between receipt and processing of money—an important problem the Bank faced in balancing the currency activity report with the daily balance sheet that we discuss in more detail on page 15. The new procedures were used to prepare the April through June 1996 reports. The December 1995 through March 1996 currency activity reports were revised using the new procedures but have not yet been submitted to the Board of Governors. According to L.A. Bank officials, the Board requested that they receive the revised reports all at once and only for those months that had substantive revisions. Bank officials stated that they plan to correct other months that were incorrectly filed back to October 1995 in both the revised reports and the reports that were prepared using the new procedures. Even so, the Bank has not precisely summarized currency activity as its new procedures require; those procedures state that receipts from circulation should be independently determined using the same sources that post to the general ledger. To make the monthly currency activity report balance with the daily balance sheet, the amount reported as received from circulation was reduced by $307,600 for January; reduced by $190,600 for February; reduced by $189,000 for March; increased by $2,074,000 for April; increased by $29,000 for May; and increased by $24,000 for June. Each of these adjustments was within the Bank’s $3 million tolerance for error for months in 1996. According to a Board official, the $3 million tolerance was established to facilitate timely reporting to the Board of Governors.
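The six adjustments just listed can be checked against the Board's tolerance directly; each one passes the $3 million test even though none is explained:

```python
TOLERANCE = 3_000_000   # Board of Governors' reporting tolerance

# Plug adjustments to receipts from circulation, January-June 1996,
# as reported for the L.A. Bank.
adjustments = {
    "Jan": -307_600, "Feb": -190_600, "Mar": -189_000,
    "Apr": 2_074_000, "May": 29_000, "Jun": 24_000,
}

within_tolerance = {month: abs(amount) <= TOLERANCE
                    for month, amount in adjustments.items()}
assert all(within_tolerance.values())   # every adjustment passes, yet none is explained
```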
Despite the fact that these numbers fall within the Board of Governors’ policy that allows for a $3 million tolerance for error, the unexplained differences raise the concern that either the queries to summarize inventory activity are still inaccurate or that there are more fundamental problems that need to be addressed. Without the historical general ledger data, we were unable to do the work necessary to develop an opinion on that matter. Two other changes have been introduced to improve preparation of currency activity reports. First, officials said that they plan to prepare the currency activity reports on a daily and weekly basis so that if errors are identified, it will be easier to research the cause of the problem. In addition, officials reported that they plan to create a new position—reports clerk—to specialize in the preparation of this and other reports. The L.A. Bank prepares the monthly currency activity report primarily using its Cash Automation System files (its cash inventory system, which provides a perpetual inventory file that tracks currency by denomination); Integrated Accounting System (its general ledger); and manual records (for transactions recorded in the general ledger that are not recorded in the cash inventory system). These cash inventory and general ledger systems interface in that, for the most part, detailed transactions are entered into the cash inventory system and posted to the general ledger. However, they are different because the general ledger posts at a detailed level but not by denomination, while the cash inventory system posts to multiple files, at a summary level, and by denomination. In addition, some transactions handled by the Bank’s cashier—primarily consisting of currency transactions with government entities and L.A. Bank staff—are recorded directly into the general ledger and are not recorded in the cash inventory system. 
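The relationship between the two systems can be sketched in a few lines: normal cash transactions post to the inventory system by denomination and flow on to the general ledger, while cashier transactions hit the general ledger only, so the two totals differ by the manually tracked cashier activity. The data structures and amounts below are illustrative, not the Bank's actual record layouts:

```python
from collections import defaultdict

gl_entries = []                   # general ledger: detailed, not by denomination
inventory = defaultdict(int)      # perpetual inventory: summarized by denomination

def post_cash_transaction(denomination, pieces):
    """Normal flow: recorded in the cash inventory system, then uploaded to the GL."""
    amount = denomination * pieces
    inventory[denomination] += amount
    gl_entries.append(amount)

def post_cashier_transaction(amount):
    """Cashier items (e.g., government entities) are recorded directly in the GL
    and never enter the cash inventory system."""
    gl_entries.append(amount)

post_cash_transaction(20, 1_000)      # $20,000 in twenties
post_cash_transaction(100, 500)       # $50,000 in hundreds
post_cashier_transaction(2_500)       # bypasses the cash inventory system

# Reconciling the two systems requires carrying the cashier activity manually:
assert sum(gl_entries) - sum(inventory.values()) == 2_500
```

Even this toy version shows why a reconciliation cannot simply compare the two totals: the structural difference from cashier-only items must be identified and adjusted for first.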
Generally, the cash inventory system consists of multiple stand-alone files that record each type of currency transaction. For example, one stand-alone file for receipts from circulation is called the detailed deposit file and another stand-alone file to record monies disbursed or “paid out” from the vault to depository institutions is called the detailed order file. In addition to these transaction files, the cash inventory system has a stand-alone perpetual inventory file that is supposed to track the balance of currency and coin in the Bank (all money in the Bank is considered for accounting purposes to be in the vault, even though it may not be physically in the main vault) by denomination, in total, and by location within the Bank. This inventory file also tracks increases and decreases to the vault inventory but does not link the increases or decreases to the specific type of transaction that prompted the change. Another key file in the cash inventory system is the cash file that accumulates the detailed transactions processed by the cash inventory system for posting to the general ledger. The accumulated transactions are uploaded periodically—hourly or daily—at a detailed level into the general ledger, without distinction by denomination. The L.A. Bank’s inability to precisely summarize the detailed activity in its cash inventory and manual records, as demonstrated by the problems found in preparing its currency activity reports, raises important concerns. First, data for the currency activity report and the daily balance sheet basically come from the same sources—the detailed cash inventory records of cash transactions and manual records. An inability to balance the two reports without forcing the number for receipts from circulation indicates that there could be problems with the source data in the cash inventory system or the summary information reported in the L.A. Bank’s daily balance sheet. We attempted to perform a comprehensive review of the L.A. 
Bank’s internal controls and accounting practices over the money flowing through the Bank. Our efforts to perform a comprehensive review were substantially limited by the L.A. Bank’s inability to provide the information needed for the review in a timely manner. While such data availability constraints prevented an in-depth assessment, we performed limited procedures and found other potential data integrity and procedural problems in the L.A. Bank’s efforts to account for and report the money it manages. Based on (1) the size and nature of the L.A. Bank’s operations, which involve managing large sums of money, (2) its inability to accurately summarize its financial records, and (3) the problems found from our performance of limited procedures, we believe a detailed internal control review is needed in the L.A. Bank to provide independent assurance that these assets are properly accounted for and controlled. To perform a comprehensive review of the L.A. Bank’s internal controls and accounting for the money processed through the Bank would have required us to perform extensive audit procedures. To do this, we requested that the Bank provide us with (1) the reconciliations it prepares for its currency accounts and (2) a general ledger history of all of the activity in its general ledger cash accounts for October through December 1995. The L.A. Bank did not provide significant portions of the requested information, and some of the requested documents were still not available at the time we completed our review. Bank officials stated that it would take them over 3 weeks to provide us their general ledger history of cash transactions. According to these officials, all of the Bank’s historical accounting transaction data are stored in such a way that makes retrieving and converting the information into a data format very difficult. Because this information was not readily available, we had to limit our audit approach. 
To perform our review without a general ledger history of cash transactions is comparable to trying to verify someone’s personal bank account reconciliation without having their checkbook. This information was needed, in part, because of our concern over the L.A. Bank’s inability to precisely summarize the information in its cash inventory system and the limitations in the system’s design that preclude readily linking the detailed transactions in its cash inventory system to the summary postings made to its perpetual inventory file. L.A. Bank officials stated that the Bank’s cash inventory system, by design, does not identify or retain items that are grouped together and posted in summary from the cash inventory system to the inventory file. This design limitation presents two fundamental problems in accounting for currency. First, when the general ledger is out of balance with the cash inventory system, identifying the cause of differences is more difficult because of the inability to readily compare the transactions in the cash inventory system to the transactions in the general ledger. This step would be comparable to comparing the check and deposit activity, item by item, in a person’s checkbook (the general ledger) to the items shown on their bank statement (the cash inventory system). Second, the ability to specifically identify timing differences that occur between the two systems due to the time lag between the receipt and processing of money is also made more difficult for the same reason. This second problem is the main reason that the receipts from circulation amounts are difficult to determine, and Bank officials stated that this contributed to amounts being forced instead of being independently determined from the cash inventory system. In addition to these limitations, our work, which focused on identifying the problems of reporting currency activity and corrective actions taken at the L.A. 
Bank, did not include two other critical steps that would be needed to provide a comprehensive assessment of the Bank’s accounting and internal controls over currency. These steps are a (1) general electronic data processing review to assess the effectiveness of the computer security controls over access to the Bank’s general ledger and cash inventory systems to ensure that unauthorized access could not occur and go undetected or that such a risk is substantially minimized and (2) detailed review of the effectiveness of the physical safeguarding controls for controlling unauthorized access to the money. Despite these limitations, we were able to perform a limited review of the reconciliations of the L.A. Bank’s currency accounts for 6 judgmentally selected days in the October through December 1995 period and a walkthrough with bank employees for 1 day in June 1996 to enhance our understanding of the Bank’s reconciliation process. As part of that review, we reviewed other management reports that highlighted differences between the data reported in the L.A. Bank’s cash inventory and general ledger systems. Thus, we only reviewed the propriety of differences that FRB analysts identified when they performed their reconciliation. Our efforts focused on assessing the propriety of how differences identified by the L.A. Bank were resolved and disposed. The problems identified in our review follow. On November 28, 1995, the L.A. Bank received a deposit of $432,000 from one depository institution. According to L.A. Bank officials, the depository institution received credit for $8,640,000 instead of the actual $432,000. Bank officials stated that they do not know whether the depository institution sent the wrong notification amount or whether L.A. Bank staff used the wrong notification for comparison. The initial L.A. 
Bank receiving team that counted the money knew that an $8,208,000 difference existed, but they overrode the system control in the cash inventory system and forwarded the money for further processing. Although this error was corrected when the problem was detected at the end of the day, it resulted in an erroneous entry being made into the L.A. Bank’s general ledger for the $8,640,000, which increased the cash in the vault amount and the depository institution’s account. L.A. Bank officials had no explanation for why this occurred. This error should have been corrected immediately when the difference was identified by the L.A. Bank staff verifying the deposit; that it was not raises concerns about the effectiveness of these controls, even though the internal control of performing reconciliations at the end of the day worked effectively and found the problem. The internal control to verify the deposit and compare the amount counted to the amount reported by the depository institution identified this difference early in the process and before the wrong amount was recorded in the general ledger. However, the L.A. Bank staff that performed the count did not notify their supervisor, and the supervisor did not contact the depository institution at that point. As a result, greater effort was required at the end of the day to resolve the differences. On October 17, 1995, there was a reconciling item that required a correction to the general ledger. This correction was made to increase the general ledger balance by $1,040,000 to make it agree with the balance of the cash inventory system. The correction was to record money returned to the vault that had been ordered by a depository institution but that was not sent to the institution by the end of the day. This transaction raises a number of concerns. The physical movement of individual customer orders for currency cannot be tracked through the L.A.
Bank’s cash inventory system because currency associated with numerous orders is tracked in an aggregate amount as it leaves the vault and is processed by teams preparing and shipping orders. For instance, the transfer of funds to the armored carrier for transport to the banks leaves as an aggregate carrier shipment amount, not as a series of single order amounts. As a result, credits and debits to financial institutions and associated entries to the general ledger are made at the end of the day, rather than when they leave the bank. When an order is cancelled, an adjustment must be made to the general ledger. At the L.A. Bank, certain clerks have the ability to delete from the general ledger transactions that would otherwise post at the end of the day; deleted items show up as “unposted” transactions. While the clerks are supposed to send a list of unposted transactions to the supervisor and attach documentation, such as cancelled shipping orders, the Bank relies heavily on the clerk to accurately and completely report these transactions. While it is not unusual for a depository institution or armored carrier company to cancel an order, the manner in which these cancellations are corrected raises concerns. These corrections do not require documented supervisory approval as would other general ledger adjustments. Instead, through direct intervention on the L.A. Bank’s computer system, certain L.A. Bank staff have the ability to cause an original transaction posted to the general ledger to subsequently be deleted. In addition, we could not find evidence that anyone at the Bank reviewed the general ledger for unposted transactions. Thus, certain staff could make unauthorized adjustments that could go undetected. On December 15, 1995, the L.A. Bank experienced an out-of-balance situation of $120,000 between its cash inventory system and its general ledger.
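One form the missing review could take is an exception report that matches each deleted ("unposted") transaction against documented cancellations and surfaces the remainder for supervisory follow-up. This is a sketch of a compensating control, not a description of any existing Bank procedure; the field names are hypothetical:

```python
def unposted_exceptions(deleted_txns, documented_cancellations):
    """Return deleted general ledger transactions that lack documented
    support, so a supervisor can approve or investigate each one."""
    documented = {c["txn_id"] for c in documented_cancellations}
    return [t for t in deleted_txns if t["txn_id"] not in documented]

deleted = [{"txn_id": "ORD-101", "amount": 1_040_000},
           {"txn_id": "ORD-102", "amount": 250_000}]
cancellations = [{"txn_id": "ORD-101", "doc": "cancelled shipping order"}]

exceptions = unposted_exceptions(deleted, cancellations)
# ORD-102 has no documented cancellation and would be flagged for review.
```

The design point is that the review runs against the general ledger's own record of unposted items, rather than relying on the clerk who made the deletion to report it.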
The problem occurred because the cash inventory system assigned the same transaction number to two transactions and would not upload both of these transactions to the general ledger. As a result, one of the transactions did not post to the general ledger. A suspense item was created and the problem was researched. After researching the item, analysts found that a $120,000 deposit had been entered into holdover, taken out of holdover, and then returned to holdover. One Bank official indicated that the underlying cause of the out-of-balance condition was a systemic defect in the cash inventory system that assigns the same transaction numbers to deposits that are placed in holdover twice in the same hour. Once identified, this problem was resolved. Following inquiries by our staff, an L.A. Bank official documented and reported the problem to the FRB in Atlanta, which is responsible for maintaining the cash inventory system. These are examples of the problems we found in our review. The fact that we found problems while only attempting to review the reconciliations for a few days increases our concerns about the Bank’s accounting practices and internal controls over currency. The ultimate responsibility for good internal controls rests with management. Internal controls are an integral part of each system that management uses to regulate and guide its operations. In this sense, internal controls are management controls. Good internal controls are essential to achieving the proper conduct of business with full accountability for the resources made available. They also facilitate the achievement of management objectives by serving as checks and balances against undesired actions. In preventing negative consequences from occurring, internal controls help achieve the positive aims of managers. As discussed previously, our findings concerning the L.A. Bank demonstrate the need for detailed internal control reviews at the L.A. Bank. 
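The holdover defect described above can be reproduced in a few lines. If uploaded postings are keyed by transaction number, two holdover deposits assigned the same number within the hour cannot both survive the upload, leaving the general ledger short. This is one plausible mechanism consistent with the defect the Bank official described; the keying scheme is our illustration:

```python
def upload_to_general_ledger(transactions):
    """Postings keyed by transaction number: a duplicate key means one
    of the two transactions is silently dropped from the upload."""
    posted = {}
    for txn_id, amount in transactions:
        posted[txn_id] = amount   # a second item with the same id replaces the first
    return posted

# Two $120,000 holdover entries assigned the same number in the same hour:
batch = [("TXN-1001", 120_000), ("TXN-1001", 120_000)]
posted = upload_to_general_ledger(batch)

assert sum(a for _, a in batch) == 240_000   # the cash inventory system carries both
assert sum(posted.values()) == 120_000       # the general ledger comes up $120,000 short
```

A fix would assign globally unique transaction identifiers (or reject the batch on a duplicate key) so that a collision surfaces as an error instead of a silent drop.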
They also raise concerns about the San Francisco District and its other branches and the other two District banks that use the same cash inventory system as the L.A. Bank—Philadelphia and Atlanta—and their respective branches. Further, they may signal concerns for the remaining banks, including those that use less sophisticated systems—for example, a Board of Governors official stated that two FRBs account for their detailed cash activity manually. Our report on our audit of the Federal Reserve Bank of Dallas, its three branches, and the Federal Reserve Automation Services (FRAS) identified internal control issues that we considered significant enough to warrant management’s attention. These issues included how (1) the accounting records of the Dallas FRB and its branches are reconciled, reviewed, maintained, and reported, (2) accountability over assets is maintained, and (3) automated systems are utilized by the Dallas FRB and its branches, many of which are controlled by FRAS. Our findings were reported to officials of the Dallas FRB and FRAS, as applicable. In these reports, we provided suggestions for improvements and documented the many corrective actions Dallas FRB and FRAS officials have taken to date. In November 1994, the Board of Governors of the Federal Reserve System contracted for external, independent audits of the combined financial statements of the FRBs for calendar years 1995 through 1999. During these years, the financial statements of each of the FRBs will be audited once. In our recently issued reports, we commended the Board for taking this step and expressed our belief that instituting regular, external independent audits will help enhance accountability over the operations of the Federal Reserve System. Additionally, this step would place the United States on a par with the practices of other central banks, such as those in France, Germany, and the United Kingdom.
However, these financial audits will not include an internal control review designed to ensure that currency entrusted to the Federal Reserve Banks is accurately accounted for and controlled. The problems identified in this report raise concerns over the quality of the internal control environment and the accuracy of the accounting for and controlling of money entrusted to the L.A. Bank and may signal problems in the other FRBs that use this same system. It would be prudent for the Board of Governors to determine whether similar situations exist in the remaining FRBs as well. The L.A. Bank has system design problems and procedures that should be improved to ensure a more accurate accounting of and effective control over such a liquid asset. Considering the large sums of money the L.A. Bank is responsible for managing and the problems identified from the limited audit procedures we performed, more detailed reviews of the L.A. Bank’s operations are warranted. Detailed internal control reviews would provide independent assurance that the L.A. Bank has properly accounted for and controlled the money it manages. In this regard, to assure itself and the public it serves of the integrity of its accounting for and control over the money in its possession, almost every major financial institution in this country has its internal controls scrutinized on a regular basis by its internal and external auditors. Because of the system design problems and lack of discipline we identified in the cash processing operations of the L.A. Bank, we are concerned that the San Francisco District Bank and the other district banks that use the same cash inventory system could be experiencing similar problems. Such determinations were beyond the scope of our work. The FRB needs to consider the results of the detailed internal control reviews we believe are needed at the L.A.
Bank when ensuring that cash operations at other banks appropriately account for and control the cash they manage. We recommend that the Chairman of the Board of Governors of the Federal Reserve System take the following actions:

- Require that the management of the Federal Reserve Bank of Los Angeles, working with its internal auditors, perform an immediate internal control assessment of its cash operations and reporting practices, including a review of the underlying systems. Bank management should prepare a report on the results of its assessment, including a written assertion on the effectiveness of its internal controls to ensure that the money it manages is appropriately accounted for, reported, and controlled. Also, as a component of the 1996 audit of the combined financial statements of the Federal Reserve Banks, require that the independent external auditors examine and provide an opinion on management’s assertion about the effectiveness of the internal controls over cash operations at the L.A. Bank.
- Require that the San Francisco District Bank and the other two District Banks—Philadelphia and Atlanta—that use the same systems as the San Francisco Bank, and their branches, conduct reviews of their cash inventory systems and reporting practices to determine whether they have problems similar to those identified at the L.A. Bank.
- For the remaining Federal Reserve Banks, consider conducting internal control assessments to ensure the effectiveness of internal controls over their cash operations.
- Taking into account the continuing importance of proper controls and accountability for currency, consider conducting annual internal control assessments at all Federal Reserve Banks, including formal reporting by management and independent external auditor examination of management’s assertion regarding the effectiveness of internal controls.
- To strengthen internal controls and provide for more accurate reporting, re-examine the policy that allows the currency activity reports to be prepared within a plus or minus $3 million tolerance for accuracy.

In commenting on a draft of this report, the FRB did not dispute our conclusions that the monthly currency activity reports for October through December 1995 were prepared incorrectly and that this was done at the direction of the L.A. Bank’s management. The FRB also did not take issue with the fact that the L.A. Bank’s management practice of forcing the numbers in the report to agree was not consistent with the Federal Reserve Board’s policy guidance on how the monthly currency activity reports were to be prepared. The FRB stated that because of the issues raised in our report regarding accounting procedures at its L.A. Bank and our concerns about the integrity of financial accounting at the branch and at other FRBs, it has requested its external auditors to institute a thorough audit of this area. Also, consistent with our recommendations, the FRB stated that it will request its external auditors to examine and provide an opinion regarding the effectiveness of the internal controls over the cash operations at the Philadelphia and Atlanta Reserve Banks, which use the same cash inventory system as the L.A. Bank. We agree with the FRB that such a thorough review is needed. It is critical that during this review, the FRB’s external auditor comprehensively look at and test the internal controls over the banks’ cash operations. This review should ensure that effective preventive and detection controls are in place and operating. Such controls should ensure that approvals, reviews, and other supervisory actions are properly documented when performed.
In addition, this review should independently assess, including testing where appropriate, the physical safeguarding and computer security controls as well as the commitment of the respective bank’s management towards instituting an effective internal control environment that requires strict adherence to established FRB policies. The FRB took exception to two major conclusions in our report. First, it does not believe that there is a linkage between the preparation of its monthly currency activity reports and its financial accounting records. It stated that “. . . these reports are used for informational purposes only and are quite distinct from the financial accounting records of the bank.” In addition, after noting that we concluded that such a linkage does exist, the FRB stated that “. . . GAO did not review the accuracy of the Branch’s financial accounting records and provides no substantiation for this assertion.” We disagree with the FRB’s statement that no linkage exists between the information in the monthly currency activity reports and its financial accounting records. We found that the cash inventory records, which make up the FRB’s cash inventory system, were used to prepare the monthly currency activity reports we reviewed. The cash inventory records were updated with the same information used to update the L.A. Bank’s financial accounting records. In attachment 2, page 9, of its comments on our report, the FRB describes this linkage in stating that “the data maintained in CAS (Cash Automation System), together with certain manual transactions, are used in three distinct ways: as a record of inventory for currency and coin (Inventory Files); as financial accounting records affecting depository institutions; and as a source of statistical information... (transaction/statistical files).” In its comments, the FRB refers to CAS as the Bank’s cash inventory system and a source of statistical information. This is consistent with what we found. 
The monthly currency activity reports and the L.A. Bank's financial accounting records are prepared from the same source information—its cash inventory system—and are thereby linked. Also, the FRB stated in its comments that daily reconciliations of its financial records to its cash inventory system are performed. However, the L.A. Bank was unable to make the two agree on a monthly basis for the period we reviewed and, therefore, forced the numbers on its monthly currency activity reports. This calls into question the effectiveness or completeness of the Bank's daily reconciliation procedures. If daily reconciliations of this information are performed, the monthly process should require nothing more than adding the daily activity together. In an effort to show that its financial accounting records were correct, the FRB stated that on September 6, 1996, it performed a 100-percent cash inventory count of the L.A. Bank's cash holdings and concluded that the branch's balance sheet accurately reflected its currency and cash holdings. The FRB further stated that its internal financial examiners and internal auditors performed several internal reviews of its cash operations that determined that its internal controls were effective. The FRB asserted that these reviews were done in accordance with generally accepted auditing standards. Performing a periodic physical inventory, as the FRB did on September 6, 1996, is a good internal control, but the count and its results are not directly relevant to the concerns identified in our report. A physical inventory count shows what was in the bank the day the count took place; in this case, almost a year after the October through December 1995 period covered by our review. Also, our review was not designed to determine whether there were cash shortages at the L.A. Bank. We, however, identified serious internal control and reporting problems and the L.A.
Bank's inability to precisely account for the currency flowing through the Bank from month to month. With respect to the two internal reviews cited by the FRB in its comment letter, the review reports had not been finalized at the time of our review and, according to an FRB official, would not be released in time for this report. As a result, we cannot comment on the scope, findings, conclusions, or quality of the work performed by the FRB's internal examiners and internal auditors for these reports. Also, the FRB incorrectly asserted that these reviews were done in accordance with generally accepted auditing standards (GAAS). The work done does not meet the independence standards of GAAS applicable to external auditors. Thus, while the financial examiners' and internal auditors' work may be considered independent for purposes of reporting to management, it should not be relied upon by external auditors. Under professional audit standards, we would have to review the internal financial examiners' and internal auditors' work in order to comment on their findings, scope of work, or audit quality. The second major point the FRB took exception to in commenting on a draft of this report was our recommendation that the Board reconsider its policy that allows for a $3 million tolerance for errors in preparing the monthly currency activity reports. The Board reiterated its view that the reports are for informational purposes only and that this level of precision is sufficient for the purposes for which the reports are used. Further, it stated that the cost associated with achieving such precision far outweighed the benefit that would be derived from achieving it. First, for broad informational purposes, such as calculating the money supply or monitoring payout patterns, the level of precision afforded by the $3 million tolerance would seem acceptable. We are not questioning this.
Our concern is that the monthly currency activity reports are prepared from the Federal Reserve's accounting records, which brings into question the acceptability of any tolerance level. Further, in our report, we express our concern over the L.A. Bank's inability to precisely account for currency activity from its cash inventory records when it attempted to do so. Even after L.A. Bank officials spent several months developing procedures to attempt to accurately account for this information, the inventory records did not agree with the general ledger. This means that either the L.A. Bank's new procedures for summarizing the activity in its cash inventory system were still flawed or its financial records may be incorrect. This raises further concerns about the integrity of the internal control environment. The fact that the FRB asserts that it performs daily reconciliations of this information yet cannot readily and precisely account for this activity also raises concerns. We reaffirm our recommendation that, because of the linkage to the accounting records, the Federal Reserve reconsider its $3 million tolerance for accuracy. In addition to the two major exceptions it took with our report, the FRB also asserted that the reviews conducted by its internal financial examiners and internal auditors concluded that (1) the reporting errors made by the L.A. branch have not affected the integrity of the Federal Reserve's financial statements, (2) these errors have not affected the Federal Reserve's calculation of the money supply, its conduct of monetary policy, or the amount of shipments of currency and coin to or from the branch, and (3) no money has been lost due to these errors, and no key decision-making has been compromised. These matters were not within the scope of our review and our report does not make any conclusions about any of them.
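The arithmetic at issue is simple to state concretely. The following sketch is purely illustrative (all figures are invented, not drawn from FRB systems), but it shows why, if daily reconciliations are performed, the monthly figure should be nothing more than the sum of the daily activity, and how a reasonableness check against the $3 million tolerance differs from forcing the number to agree:

```python
# Purely illustrative; all figures are invented, not FRB data.

TOLERANCE = 3_000_000  # the plus or minus $3 million tolerance discussed above

# Daily receipts from circulation as recorded in the cash inventory
# system (shortened to four business days for brevity).
daily_receipts = [12_400_000, 9_800_000, 15_100_000, 11_250_000]

# If daily reconciliations are performed, the monthly figure should be
# nothing more than the sum of the daily activity.
monthly_receipts = sum(daily_receipts)  # 48_550_000

# Receipts for the same month per the general ledger.
ledger_receipts = 48_600_000

difference = monthly_receipts - ledger_receipts  # -50_000

# A reasonableness check reports whether the difference falls within the
# tolerance; unlike forcing, it never adjusts the reported number.
within_tolerance = abs(difference) <= TOLERANCE
print(monthly_receipts, difference, within_tolerance)
```

The point of the sketch is that the check only surfaces the difference; it does not make the report tie to the ledger by construction.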
As previously stated, the internal reviews cited by the FRB had not been completed and, therefore, were not available for our review. Instead, our report focuses on the serious internal control problems found in the L.A. Bank. Notwithstanding our primary focus, we remain concerned that, until the L.A. Bank can resolve why it cannot reconcile the activity in its cash inventory records with the general ledger, it does not know and cannot be certain of the accuracy of its financial statements or whether money has been lost. We are sending copies of this report to the Chairman of the Board of Governors of the Federal Reserve System; the Secretary of the Treasury; the Chairman of the House Committee on Banking and Financial Services; the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs; and the Director of the Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-9510 if you or your staff have any questions. Appendix I contains comments we received from the Federal Reserve on a draft of this report and our response to those comments. Major contributors to this report are listed in appendix II. The following are comments on the Board of Governors of the Federal Reserve System's letter dated September 12, 1996. 1. As discussed throughout this report and as described in attachment 2, page 9, second paragraph, of the Federal Reserve's response to this report, the basis for the information reported in the monthly currency activity reports and the L.A. Bank's financial accounting records is the same—its cash inventory system called the Cash Automation System. Because this information comes from the same source, it should agree; however, we found that the L.A. Bank could not make the information agree without forcing numbers. 2.
Because the FRB could not provide the detailed general ledger transactions that had been recorded for the currency in its financial accounting records, we were unable to determine whether the financial statements and the related general ledger were accurate. However, the FRB’s inability to reconcile activity in its cash inventory system with the general ledger without forcing numbers and the serious internal control problems that we identified in our review raise questions about whether the cash inventory records, the L.A. Bank’s new procedures for summarizing the information from its cash inventory records, the general ledger, or all three are incorrect. Beyond the concerns we raised about the integrity of its accounting and internal controls, other potential impacts on the FRB of the errors we identified were beyond the scope of our review. In particular, without access to the general ledger transactions and without performing an independent review of physical safeguarding and computer security controls, we would be unable to determine whether money was lost or unreasonably exposed to loss due to the problems we found. Further, the currency activity reports and the underlying systems that they are generated from constitute the Federal Reserve’s only detailed records of currency transactions throughout the Federal Reserve System. Therefore, to the extent that the information on monthly currency movement in and out of Federal Reserve Banks is provided to Federal Reserve management, the Congress, and other external users of this information, it would be based on data from the currency activity reports. The impact of errors in this reporting on external users of this information was also beyond the scope of our review. 3. While periodically performing physical inventory counts of the L.A. 
Bank's vault is a good internal control, such a count only shows that what was physically in the vault that day equaled what was in its cash inventory and general ledger records the day of the count—in this instance, September 6, 1996. This does not prove that the coin and currency that moved through the Bank at all other times was properly safeguarded and accounted for. Without the ability to consistently summarize currency activity and without effective internal controls, the physical inventory of the vault, alone, is insufficient to provide assurances that the FRB is accurately accounting for the billions of dollars that flow through the Bank each week. 4. In the "Agency Comments and Our Evaluation" section of this report, we noted that our review identified serious internal control problems and pointed to the inability of the L.A. Bank to summarize currency activity on a monthly basis. It is our view that these issues do, in fact, raise serious concerns about the integrity of its accounting and internal controls. Because two other district banks and their branches use the same cash inventory system and some of the problems in currency reporting are linked to the limitations in the design of this system, we suggest that problems may also have occurred in the San Francisco District Bank and its other branches and the other two districts using the system. It is worth noting that the gravity of the concerns raised at the L.A. Bank appears to make it prudent to also investigate whether there are problems in these other banks. When we requested their general ledger transactions, we were told by L.A. Bank officials that to provide it in hard copy would be impractical, and they strongly discouraged producing the information, because it would amount to reams of material. When we then requested the records in electronic format, FRB officials stated that it would be burdensome and very difficult and that they did not have the staff to provide it in time for our review.
It is for these reasons that we limited the scope of our review. FRB staff also had difficulty retrieving information we requested to support the reconciliations for 2 of the 6 days that we reviewed. L.A. Bank officials have still not located the report containing the ending balance of the amount of currency in the vault as reported in its cash inventory system for one of the days selected in our review, identified earlier in this report, and certain other information we requested. 5. With respect to the two internal reviews cited by the FRB in its comment letter, the review reports had not been finalized at the time of our review and, according to an FRB official, would not be released in time for this report. As a result, we cannot comment on the scope, findings, conclusions, or quality of the work performed by the FRB's internal examiners and internal auditors related to these reports. Also, the FRB incorrectly asserted that these reviews were done in accordance with generally accepted auditing standards (GAAS). The work done does not meet the independence standards of GAAS applicable to external auditors. Thus, while the financial examiners' and internal auditors' work may be considered independent for purposes of reporting to management, it cannot be relied upon by external auditors. Under professional audit standards, we would have to review the internal financial examiners' and internal auditors' work in order to comment on their findings, scope of work, or audit quality. Finally, as noted in attachment 5, third paragraph, of the Federal Reserve's response to this report, the work performed by the external auditor was primarily based on representations made by Federal Reserve officials and observations. The external auditor obtained the majority of evidential matter used in its review through inquiry and observation. It did not initiate the formal process of verifying the various related statements and representations during its limited review.
In addition, as summarized by Federal Reserve officials in attachment 5, the external auditors informed the Board that “they identified no factors that would indicate the potential for inaccuracies or misstatements of the Branch’s cash position as reported in the general ledger or in the balance sheet.” They did not say that “there was no evidence to suggest that statistical errors affected the official records of the Bank’s currency holdings.” Similar to the Board’s financial examiner review, the external auditor’s report was not provided as part of the FRB’s response nor was it provided to us during our review. Thus, we cannot specifically comment on the report’s scope, findings, conclusions, or contextual presentation. 6. See discussion in comment 5 above. The issue of independence applies to both the San Francisco Reserve Bank’s internal auditors and its financial examiners. 7. See our responses made in the “Agency Comments and Our Evaluation” section of this report. 8. We did not conduct a comprehensive review of procedures and controls that are in place to ensure the accuracy of its accounting records and associated financial statements. In particular, we did not review the L.A. Bank’s computer security controls for preventing unauthorized access to its general ledger and cash inventory system or its physical access controls for ensuring that the money it manages is protected from theft and misappropriation. We agree with FRB officials that a number of controls are in place. However, we believe that the errors and internal control problems identified in this report raise serious questions about the effectiveness of the controls in place and the need for additional controls to ensure the accuracy of its accounting and to provide assurance that the money flowing through the Bank is safeguarded. 9. We agree that staff made a number of errors in preparing the currency activity reports. 
We believe that the actions taken to improve supervisory review and reduce the amount of data that must be collected from manual logs will likely reduce errors in reporting. However, we do not agree that all of the errors were due to procedural errors made by L.A. Bank staff. In fact, the most troublesome aspect of how the reports were prepared was that Bank management directed staff to force the number for receipts from circulation to ensure that the currency activity report agreed with the daily balance sheet for the last day of the month. This is troublesome because it showed that the Bank had difficulty summarizing receipts from circulation independently and it also obscured other errors in the report. 10. See our responses in the “Agency Comments and Our Evaluation” section of this report and our responses to comments 4, 5, 6, and 8. Further, while Bank management ultimately took actions to improve how the currency activity reports were prepared, it was at their direction that staff forced the number for receipts from circulation. Further, we did not find that they were rigorous in their followup of internal control weaknesses. The particular example cited in the comments is one that raised our concern. In an interview with FRB officials, we were told that a processing team intentionally overrode the physical controls in the cash inventory system and processed a deposit despite the fact that the amount they counted did not agree with the amount in the depository institution’s deposit notification. We were not told that they mistakenly credited an institution, as cited in the comments. In addition, at the time of our interview in August 1996, Bank officials said that they did not know what caused the out-of-balance situation that led the team to override the system in November 1995. It would seem that tracking down the cause of such a mistake and what impact it may have on other transactions would have been a top management priority and it was not. 
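To make the concern about forcing concrete, the following purely illustrative sketch (all figures invented) shows how a receipts number derived as a residual from the balance sheet ties to the general ledger by construction, and therefore absorbs, rather than surfaces, errors in other line items:

```python
# Purely illustrative; all figures are invented, not FRB data.

beginning_balance = 2_100_000_000          # vault balance at start of month
payments_to_circulation = 610_000_000      # paid out during the month
ending_balance_per_ledger = 2_055_000_000  # month-end daily balance sheet

# Independent determination: receipts summarized day by day from the
# cash inventory system, as FRS guidance calls for.
independent_receipts = 560_000_000

# Forced determination: back receipts out of the balance sheet so the
# report is guaranteed to tie to the general ledger.
forced_receipts = (ending_balance_per_ledger
                   - beginning_balance
                   + payments_to_circulation)  # 565_000_000

# The forced number ties by construction...
assert (beginning_balance + forced_receipts
        - payments_to_circulation) == ending_balance_per_ledger

# ...so any error elsewhere in the report is silently absorbed into the
# receipts line instead of surfacing as a reconciling difference.
absorbed_error = forced_receipts - independent_receipts  # 5_000_000
print(forced_receipts, absorbed_error)
```

In this sketch the forced figure understates nothing visibly, yet it differs from the independently determined receipts by $5 million, which is exactly the kind of error the forcing practice obscures.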
Finally, while we note in the report that the mistake was caught at the end of the day through the reconciliation, we also state that the difference should have been identified and corrected immediately when it was found by staff. 11. See our response to comment 3. 12. See our response to comments 4, 5, 6, 8, and 10. 13. We stated in this report that there was no evidence that management reviewed or approved the transactions done by unit proof clerks to delete transactions from the general ledger. Nothing was provided to us to show otherwise, other than L.A. Bank management officials asserting that they did it. We reaffirm this finding. 14. See our response to comments 4 and 5. 15. See our response in the "Agency Comments and Our Evaluation" section of this report and our response to comments 1 and 2. 16. The FRB comment addresses what it states are its new procedures for preparing the monthly currency activity reports. However, our report addresses the procedures used to prepare the monthly currency activity reports for October through December 1995. These amounts were forced and the L.A. Bank did not perform any reasonableness check for the receipts from circulation amount to see if it was within the $3 million tolerance—this is the line item in the report that was and continues to be used to make the report equal the general ledger balance. The practice of checking this amount for reasonableness to verify that it was within the allowed tolerance was begun as part of new procedures that were implemented in 1996.

Jan M. Brock, Senior Auditor
Ted C. Hu, Auditor
Stacey C. Osborn, Auditor
Pursuant to a congressional request, GAO reviewed currency activity reports prepared by the Los Angeles Federal Reserve Bank, focusing on: (1) problems in reporting currency activity for Federal Reserve System (FRS) note receipts, payments, and amount on hand; and (2) corrective actions planned or taken by FRS to resolve those problems. GAO found that the Los Angeles Bank: (1) incorrectly prepared and filed monthly currency activity reports for October, November, and December 1995; (2) did not comply with FRS guidance to independently determine cash receipts, since it adjusted the amount reported on the currency activity reports to ensure that it agreed with the daily balance sheet for the end of the month; (3) has been forcing the amount of receipts reported, which has resulted in significant underreporting of receipts, for some time; (4) has attempted to revise the reports but has not sent them to the FRS Board of Governors; (5) uses a cash inventory system to prepare its monthly reports, but system limitations hinder the Bank's ability to track all detailed activities; and (6) could not timely provide all of the information needed to perform a comprehensive review of internal controls and accounting practices because of information storage and retrieval problems. GAO also found that: (1) a limited review of the Bank's currency accounts identified several internal control weaknesses that could affect general ledger entries and reconciliation attempts; and (2) other Federal Reserve Banks that use the same cash inventory system could also be affected.
Grants to states to develop community health centers were first authorized by the federal government in the mid-1960s. By the early 1970s, about 100 health centers had been established by the Office of Economic Opportunity (OEO). When OEO was phased out in the early 1970s, the centers supported under this authority were transferred to the Public Health Service (PHS). Since 1989, close to $3 billion has been awarded in project grants to health centers. Project grants are authorized under Sections 329 and 330 of the Public Health Service Act and are to be used by health centers to provide primary health care and related services to medically underserved communities. BPHC sets policy and administers the Community and Migrant Health Center program. BPHC is part of the Health Resources and Services Administration (HRSA) under PHS. Ten regional PHS offices assist BPHC with managing the program. The regional offices are primarily responsible for monitoring the use of program funds by grantees. In 1994, the Community and Migrant Health Center program offered comprehensive primary health care services to about 7.1 million people through 1,615 health care delivery sites in medically underserved areas. Health centers are expected to target their services to those with the greatest risk of going without needed medical care. About 44 percent of health center patients are children under 19 years old and 30 percent are women in their childbearing years. About 60 percent of health center patients live in economically depressed areas and nearly 63 percent have incomes below the federal poverty level. A central feature of health centers is their governance structure. Local community boards govern health centers and are expected to tailor health center programs to the community they serve. In addition to comprehensive primary care services and case management, centers are expected to offer enabling services. 
These services are determined from assessments of community needs and are intended to help individuals overcome barriers that could prevent them from getting needed services. Health centers are supported by various funding streams. Community health center project grants and Medicaid provide the two largest components of health center revenues, respectively, 35 and 34 percent in 1994. Health centers may also receive other federal, state, and local grants to support their activities. While health centers are required to offer services to all individuals regardless of their ability to pay, centers must seek reimbursement from those who can pay as well as from third-party payers such as Medicaid, Medicare, and private insurance. Patient fees are set using a sliding fee schedule that is tied to federal poverty levels. Patients with incomes below a certain percentage of the federal poverty level receive free care or may pay some portion—a discounted fee—while those in the highest income levels pay fees that cover the full service charge. The difference between service charges and the sliding fees collected is a measure of the amount of low-income care subsidized by the center. Two major developments in recent years have affected the financial status and, therefore, the viability of health centers. The first is the authorization of a cost-based reimbursement system for health centers and the second is centers' participation in prepaid managed care. In the late 1980s, the Congress recognized that neither Medicare nor Medicaid paid the full cost for services provided to program beneficiaries at community health centers. This was due to low reimbursement rates and to the fact that some enabling services provided by health centers were not considered reimbursable benefits by Medicaid. As a result, health centers had fewer financial resources to subsidize care for patients who could not pay and for conducting other program activities.
In recognition of this problem, the Congress—as part of the Omnibus Budget Reconciliation Act of 1989 (OBRA)—created a new Medicaid and Medicare cost-based reimbursement system for health centers. Under this system, both programs were required to reimburse health centers for the reasonable cost of medical and enabling services provided to their beneficiaries. The second major development has been the move by states to managed care delivery systems for their Medicaid programs to address rising costs and access problems. Managed care in Medicaid is not a single health care delivery plan but a continuum of models that share a common approach. At one end of the continuum are prepaid or capitated models that pay health organizations a per capita amount each month to provide or arrange for all covered services. At the other end are primary care case management (PCCM) models, which are similar to traditional fee-for-service arrangements except that providers receive a per capita management fee to coordinate a patient's care in addition to reimbursement for the services they provide. Both systems require that beneficiaries access care through a primary care provider. Between June 1993 and June 1994, the total number of Medicaid beneficiaries in managed care programs across the country increased 57 percent, from almost 5 million to nearly 8 million, with most of the growth occurring in fully capitated managed care programs. Health centers may not be as assured that capitated reimbursement will cover their costs as they are under traditional Medicaid fee-for-service systems. This becomes a concern when health centers lose their cost-based reimbursement under Medicaid prepaid managed care programs. Health plans that contract with centers reimburse them on the basis of a negotiated per capita rate for a set of services. This capitation rate must be sufficient to cover the cost of the contracted services for all Medicaid health plan members enrolled at the health center.
Incorrect assumptions about the cost of individual services or the frequency with which they are used may result in an inadequate capitation rate. If the rate is too low, it can lead to financial losses for the centers. States establishing managed care programs that require beneficiaries to enroll in a Medicaid health plan must obtain one of two types of waivers from the Health Care Financing Administration (HCFA). Section 1115 of the Social Security Act offers authority to waive a broad range of Medicaid requirements. Eight states have approved statewide 1115 waivers, and 12 others have waiver proposals pending with HCFA. A second type of waiver is allowed by section 1915(b) of the Social Security Act. These waivers allow states to carry out competitive programs by waiving specific program requirements, such as a beneficiary’s choice of provider. Currently, 37 states and the District of Columbia have 1915(b) waivers and 4 other states have pending waivers. The loss of cost-based reimbursement is a major concern for health centers entering into prepaid capitated agreements. These health centers are concerned that (1) the per capita monthly rate may not adequately cover the costs of providing services to the most vulnerable populations and (2) the lack of reimbursement by health plans for some medical, enabling, or other health services may hinder their ability to continue to provide them. Changes in the health care delivery environment are impacting community health centers as more and more health centers participate in prepaid managed care arrangements. In our review of 10 health centers, we found that prepaid reimbursement for services provided to Medicaid patients did not diminish the centers’ ability to provide access to care for their patients. In fact, health centers have improved their overall financial positions to some degree while maintaining or expanding medical and enabling services. 
This is due to revenue increases from a variety of sources, such as federal funding other than health center grants. Earnings from prepaid managed care were modest and did not contribute significantly to the support of enabling services and subsidized care. Some center officials, however, credited the predictability of monthly capitation payments as assisting them in financial planning. Using another measure to determine financial vulnerability—cash balances—all 10 centers had limited cash balances. For centers with more than 15 percent of their total revenue from prepaid managed care, low cash balances could be a problem if they encounter significant unexpected expenses resulting from inadequate capitation rates or assumption of risk for nonprimary care services. In response to the changing health care environment, the number of health centers accepting capitated payments for their Medicaid patients grew from 92 health centers, with 280,000 prepaid patients in 1991, to 115 centers with nearly 435,000 prepaid patients in 1993. Health centers often feel pressure to enter into managed care arrangements when states implement such programs on a mandatory or voluntary basis statewide. Five of the 10 health centers we visited operate in areas where Medicaid beneficiaries are mandated to participate in prepaid managed care plans under Medicaid waivers. Increasingly, health centers also choose to participate in areas with voluntary programs. Whether mandatory or not, health center participation is driven by the growing importance of the Medicaid program to health center revenues. In 1993, Medicaid revenues accounted for 17 percent to over 50 percent of health center revenues at the centers we visited. In addition, between 1989 and 1993, 6 of the 10 health centers experienced an increase in the ratio of Medicaid revenues to total revenues. 
At the same time, 9 health centers experienced a decrease in the share of total revenues that federal community health center project grants represented. (See fig. 1.) Except for Sunshine Health Center and Lynn Community Health Center, which received, respectively, 22 and 17 percent of their revenues in 1993 from other federal grants, contracts, or both, the remaining health centers received less than 7 percent of their revenues directly from other federal grants. Some of these health centers have also increased the percentage of their revenues from other income sources, such as state and local grants or other federal grants. The degree to which health centers were involved in prepaid managed care varied considerably among the 10 health centers. In 1993, prepaid managed care accounted for as little as 3 percent and as much as 52 percent of the total health center revenues (see fig. 2). Differences also existed in the percentage that prepaid managed care revenues represented of total Medicaid revenues, ranging from about 12 to 100 percent among the 10 centers. Typically, health centers participate in prepaid managed care through health plans serving Medicaid beneficiaries. 
The health centers contract with one or more health plans to provide a subset of health plan services. Reimbursement for primary care services at the 10 health centers we reviewed was paid as a monthly capitated rate. The capitation rates for primary care services ranged from $12 per member per month at one health center to $38 per member per month at another. Rates varied in large part because of the different services covered under health plan contracts. For example, a center receiving a higher rate may provide additional services, such as X rays and immunizations. If a center with a lower rate provides these services to plan enrollees, it could receive additional reimbursement on a fee-for-service basis. Some centers also told us that they had received a higher rate because they had negotiated for one with the health plan. In addition to agreeing to provide primary care services, four health centers have assumed financial responsibility for referrals, hospitalization, or both in return for a higher capitation rate. In such arrangements, the managed care plan withholds a portion of the health center’s primary care capitation payment to cover referral or hospitalization costs that are higher than expected. In some cases, if the funds withheld are insufficient to cover the losses, the amount withheld in the future from health center capitation payments can be increased. Despite the concern that capitation would make it difficult for health centers to maintain their service levels, we found that the 10 centers continue to offer many services targeted to the needs of their communities and that they have maintained the intensity and frequency of the services provided. In addition to medical care, many of the health centers offer transportation and translation services as well as health education, acquired immunodeficiency syndrome (AIDS) case management, and early intervention services for children of substance abusers. 
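The withhold arrangement described above can be sketched in a few lines of Python. All figures and parameter names here are hypothetical illustrations, not actual contract terms, which varied from plan to plan:

```python
def monthly_capitation_payment(members, rate_per_member, withhold_pct):
    """Monthly primary care payment under a capitated contract in which the
    health plan withholds part of the payment against referral and
    hospitalization costs. All parameters are hypothetical illustrations."""
    gross = members * rate_per_member      # full capitation amount
    withheld = gross * withhold_pct        # set aside by the plan in a risk pool
    net = gross - withheld                 # what the center actually receives
    return net, withheld

# A center with 2,000 plan enrollees at a $20-per-member-per-month rate
# and a 10-percent withhold receives $36,000, with $4,000 held in the pool.
net, withheld = monthly_capitation_payment(2000, 20.00, 0.10)
```

If referral or hospital costs exceed the amounts withheld, the plan can raise the withhold percentage in future months, as described above.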
These enabling services are very important in reducing the barriers to health care as well as helping to address problems that can lead to the need for further medical care. In addition, these services are available to all health center patients including those whose benefit package may not cover the cost of these services. (See fig. 3 for a list of the enabling services provided at each health center.) Indicators of a health center’s ability to increase access to the community it serves include growth in the number of patients served and in the amount of funds spent on subsidizing low-income care. All the health centers increased access to medical care. The number of medical patients served by the health centers increased from 131,000 to almost 169,000 from 1989 to 1993, with individual center increases ranging from 4 to 164 percent (see fig. 4). In addition, the number of patient visits or encounters increased from 596,063 to 828,848 between 1989 and 1993 at the 10 health centers. Between 1989 and 1993, 7 of the 10 health centers increased their spending on subsidized low-income care; that is, the amount of spending for free care and the remaining portion of care that uninsured low-income patients are unable to cover (see fig. 5). We examined the growth of spending on enabling services in each health center, another indicator of a health center’s ability to increase access to care. We found that all 10 of the health centers had increased spending on these services between 1989 and 1993 (see fig. 6). Further, health center officials told us that enabling services were expanded or enhanced in response to growing community needs. In addition, officials at all 10 centers reported that the intensity or frequency of services typically provided at the center had not been reduced with prepaid managed care. 
While the amount of spending on enabling services and subsidized low-income care generally increased among all health centers, these amounts varied considerably from center to center as did the distribution of spending between enabling services and subsidized care of low-income patients. In most cases the sum of spending on enabling services and subsidized care exceeded revenues received from the Community and Migrant Health Center program grant (see fig. 7). With more spending on enabling services, 9 of the 10 health centers increased the number of full-time-equivalent staff involved in providing services other than medical or dental. These included health education, social services, and case management. Staff providing these services included drivers for transportation services, outreach workers, dietary technicians, and home health aides (see fig. 8). Center officials told us that community needs largely influenced patterns of spending on enabling services and subsidized low-income care. For example, the health centers that we visited in densely populated areas spent more money on enabling services, which include social case workers, than the other centers. The health centers in less populated areas tended to subsidize low-income care to a greater extent. Officials also reported that changing local community conditions—such as an increase in drug abuse or AIDS—could affect the combination of enabling services and subsidized care. While maintaining or expanding their medical and enabling services, all the health centers that we studied reported improved financial positions, as indicated by increases in their year-end fund balances; that is, the excess of a center’s assets over its liabilities. One contributing factor is an increase in total revenues. Among the 10 health centers, increases in total revenues ranged from 35 percent to 142 percent between 1989 and 1993. Three of the centers saw revenue increases of over 100 percent during this period. 
Improvement in fund balances results when increases in revenues from a variety of sources are greater than a center’s expenses. Five centers had increases in grants from other federal and state sources. For example, one health center received $556,000 from a Ryan White AIDS grant in 1993. All health centers, however, had increases in their Medicaid revenue between 1989 and 1993. Increases ranged from 12 percent at one center to over 1,000 percent at another. Medicaid prepaid managed care income also contributed modestly to fund balance increases. Prepaid managed care earnings were modest at best and played a small role in supporting enabling services and subsidized care. In 1993, three centers reported losses of up to $124,000 from prepaid managed care. Other funds offset these losses. During the same year, six centers reported excess prepaid managed care revenues of up to $100,000 after paying the cost of care for medical services and administrative expenses. One center reported no excess revenues from prepaid managed care. Officials at nine of the health centers told us that returns from managed care had not contributed significantly to center support of enabling services and subsidized care. At the tenth center, however, the director told us that growth in managed care revenues had allowed the center to increase its spending on subsidized care. Between 1989 and 1993, the center’s health center grant funding remained level, while the amount of spending on subsidized care grew from nearly $1.6 million to $2.5 million; revenues from prepaid managed care contributed to this spending. At the same time, the director noted that the federal health center grant was indispensable to the center’s maintaining a steady level of funding for enabling services and subsidized care. Officials from three health centers told us that the predictability of monthly capitation reimbursements allowed them to better manage center finances. 
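The excess revenues and losses cited above follow a simple accounting identity: capitation revenue less the cost of medical care and administrative expenses. A minimal sketch, with hypothetical figures:

```python
def prepaid_result(capitation_revenue, cost_of_care, admin_expenses):
    """Net result of a prepaid managed care contract: a positive value is
    excess revenue, a negative value is a loss (all figures hypothetical)."""
    return capitation_revenue - cost_of_care - admin_expenses

# A center taking in $900,000 in capitation payments and spending
# $750,000 on medical care plus $80,000 on administration nets $70,000.
result = prepaid_result(900_000, 750_000, 80_000)
```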
Although all the health centers have increased their year-end fund balances, some may be vulnerable to financial difficulties. While all 10 health centers had year-end fund balance increases, none of the centers had cash on hand to cover more than 60 days of operating expenses. Cash on hand ranged from fewer than 1 day of operating expenses at 2 centers to 31 days’ worth at another. Three centers had available cash to cover fewer than 10 days of operating expenses. Cash reserves are important because they represent liquid assets that can be used to pay for contractual obligations and unexpected expenses. Funds for unexpected expenses are especially critical for health centers with more than 15 percent of total revenues from prepaid managed care arrangements and those that have accepted financial responsibility for services other than primary care. For example, when centers take on risk for medical care and hospitalization but more patients than expected require costly treatment or extended hospitalization, losses could be substantial. We found that seven centers received more than 15 percent of their total revenue from prepaid managed care. The four centers that have assumed financial responsibility for specialty referrals, hospitalization, or both all had cash reserves of 31 or fewer days of operating expenses, thereby making them vulnerable to financial difficulties. Centers can also be financially vulnerable when capitation rates do not fully cover the cost of the care they provide. In such cases, centers are faced with either depleting their reserves or cutting back services. Several health center directors told us that their capitated reimbursements are adequate to cover the costs of medical services, and some believed that their capitation rate roughly equaled what they would receive from cost-based reimbursement. In most cases, however, center directors could not provide us with data to substantiate their position. 
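The days-of-cash-on-hand measure used above divides a center's cash balance by its average daily operating expenses. A sketch of the calculation (figures hypothetical):

```python
def days_cash_on_hand(cash_balance, annual_operating_expenses):
    """Number of days of operating expenses that cash balances could
    support, the liquidity measure used in this review."""
    return cash_balance / (annual_operating_expenses / 365)

# A center holding $250,000 in cash against $3 million in annual
# operating expenses could cover roughly 30 days of expenses.
days = days_cash_on_hand(250_000, 3_000_000)
```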
While the health centers we visited are now providing medical and enabling services to their communities, some initially faced several problems that are likely to confront other health centers as states expand Medicaid managed care. First, health centers must determine whether not participating in managed care arrangements will affect the number of patients served or revenues needed for financial viability. Centers that do participate may face financial problems if reimbursement is inadequate and they accept too much financial risk or lack managed care skills. Directors of most of the health centers we visited felt compelled to enter into agreements with Medicaid managed care plans to maintain their Medicaid patient population and revenues. The Medicaid population is an important component of the medically underserved population that health centers are intended to serve. Health centers that do not have agreements with Medicaid health plans can lose some or all of their Medicaid patients and revenues, jeopardizing their continued operation. Because Medicaid revenue is a large and growing part of most health centers’ funding, losing this funding could be catastrophic. In 1994, a health center in Washington state experienced severe financial difficulties when its relationship with the only local Medicaid health plan was discontinued. The structure of the health plan, which limits membership to individual physicians, made it impossible for the health center to contract directly with the plan. Rather, one physician employed by the center contracted with the plan. When this physician resigned from the center, its relationship with the plan ended. The center’s other physicians were not acceptable to the health plan because of concerns about the physicians’ admitting privileges at the local hospital and their ability to guarantee 24-hour coverage or because the physicians were not willing to contract with the plan. 
Because all Medicaid beneficiaries in the health center’s service area were enrolled in this health plan, the center lost 1,000 Medicaid patients when they were assigned to other health plan providers. As a result, the center abruptly lost one-third of its patients and 17 percent of its revenue over a 7-month period. The center’s director told us that without this revenue the center was not viable and eventually would have to close. The center reestablished its relationship with this health plan when the physician returned, and Medicaid patients are being reassigned. Also in 1994, health centers in another state, Tennessee, faced the loss of Medicaid revenues if they did not participate in the TennCare program. As a result, all the health centers in Tennessee participate in the TennCare program despite their loss of cost-based reimbursement. Health centers had no choice but to contract with the TennCare health plans, according to the director of the Tennessee Primary Care Association, an association of community health centers in Tennessee. Health centers felt compelled to participate because the Medicaid population is an important part of the health centers’ target population. In addition, without the Medicaid revenue, health centers would not be able to continue to offer the range of services they typically provide. Some center officials believed that centers would have closed without this revenue. While the 10 health centers we studied expanded their support for enabling services between 1989 and 1993, the early experience of 3 of these centers with managed care was problematic. Each reported initial depletion of financial resources, and in one case a cutback in services occurred as well as a reorganization due to bankruptcy. Early center problems stemmed from (1) inadequate capitation rates paid to health centers; (2) assignment of more financial risk to health centers than they were capable of managing; and (3) a lack of managed care knowledge, expertise, and systems. 
Low primary care capitation rates and assignment of financial risk for referral services contributed to financial difficulties at two Philadelphia health centers in 1987 and 1988, according to health center and BPHC officials. Because the capitation rate did not fully cover the centers’ operating costs, the centers were forced to deplete their cash balances to continue providing services. Both centers reported that they could not negotiate higher rates or avoid accepting too much financial risk in part because the Medicaid beneficiaries were all assigned to one health maintenance organization. This left the health centers in a poor position to negotiate a higher capitation rate or different risk arrangements. Since that time, competing health plans have been added to the Medicaid managed care program. In addition, the health centers are more knowledgeable about managed care arrangements. They no longer accept risk for services that they do not provide and have negotiated more acceptable rates. After one of the Philadelphia centers gained experience in tracking managed care operations, it developed data in 1991 showing that the utilization patterns of its health plan enrollees justified a higher capitation rate. An Arizona health center also suffered financial difficulties once it entered into Arizona’s Medicaid managed care program, established in 1982. According to the center’s current director, capitation rates were inadequate to cover the costs of serving patients in Arizona’s Medically Needy/Medically Indigent eligibility category. In the early 1980s, the center had accepted financial risk for all medical services, including referrals and hospitalizations for its enrollees. Further, the center did not have adequate information systems to manage the risk it had assumed or adequate capital to absorb losses. Within 4 years the center became insolvent and reorganized under chapter 11 of the Federal Bankruptcy Code. 
It was forced to cut back on its medical and enabling services as it reorganized through bankruptcy in 1986 after experiencing large managed care losses. The health center has completed its restructuring and is now a provider for several health plans. In addition, the health center no longer accepts full financial risk for referrals or hospitalizations. The explosive growth in Medicaid managed care leaves many community health centers with little choice about participating in these new arrangements. However, health centers entering prepaid arrangements are faced with a series of new activities, each of which they must manage well to succeed. First, they must negotiate a contract that pays an adequate capitation rate and does not expose them to undue risk or otherwise hinder them. They must also perform the medical management functions of a prepaid system. In addition, health centers must monitor their financial positions under each managed care agreement, including any liability for referral and hospital services. They must also develop and maintain the information systems needed to support the above clinical and financial management activities. BPHC has strongly encouraged health centers to consider participating in managed care arrangements, while cautioning them of the dangers of accepting risk for services provided by others. Further, BPHC is funding a number of activities to help health centers become providers that can effectively operate in a managed care system. Recognizing that health centers require both specific and general knowledge of managed care, BPHC cooperates with the National Association of Community Health Centers to provide training and technical assistance to grantees. Several training sessions are available to BPHC grantees. Subjects include managed care basics, negotiating a managed care contract, medical management, and rate setting. In 1994, 48 sessions in 35 states were provided, reaching over 1,500 individuals. 
Technical assistance consists of intensive one-on-one consultations between managed care experts and health center officials. During 1994, 65 health centers requested and received one-on-one technical consultations. BPHC has also developed various publications for health centers to use as self-assessment tools. These publications offer guidance on aspects of managed care such as preparing for prepaid health services, negotiating with managed care plans, and assessing the market area and internal operations. Realizing that health centers lack experience in negotiating contracts with health plans, BPHC offers a review service for contracts between centers and health plans. These contracts are typically reviewed by outside private-sector managed care specialists who provide written advice on specific sections that could be revised in the health centers’ favor. In 1994, BPHC reviewed 45 contracts for approximately 30 health centers. In addition to activities targeted toward individual health centers, BPHC also assists centers in planning and initiating participation in managed care arrangements through the Integrated Service Network (ISN) grant program, established in 1994. These one-time awards are to be used by health centers for planning and developing an integrated delivery system with other providers that will ensure access for the medically underserved. Approximately $6 million was awarded to 29 health centers in 1994. One of the health centers we visited in Florida is using an ISN award to develop a network of community health centers that can negotiate with managed care plans. In Washington state, a health center received an ISN award to help establish a statewide Medicaid managed care plan. As states move to prepaid managed care to control costs and improve access for their Medicaid populations, the number of participating health centers continues to grow. 
Medicaid prepaid managed care is not incompatible with health centers’ mission of providing access to health care for medically underserved populations. However, health centers face substantial risks and challenges as they move into these arrangements. Such arrangements require new knowledge, skills, and information systems. Centers lacking this expertise face an uncertain future, and those in a vulnerable financial position are at even greater risk. Today’s debate over possible changes in federal and state health programs—including Medicaid and other health grant programs, both important funding streams for health centers—combined with the limited cash available at all 10 centers, heightens concern over the financial vulnerability of centers participating in prepaid managed care. If this funding source continues to grow as a percentage of total health center revenues, centers face the challenge of building larger cash reserves without compromising medical and enabling services for the vulnerable populations that they serve. HRSA and BPHC officials reviewed a draft of this report and considered it a balanced presentation of the challenges facing community health centers involved in Medicaid prepaid managed care arrangements. We also incorporated their technical comments as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and other congressional committees. Copies will be made available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-7119; Rose Marie Martinez, Assistant Director, at (202) 512-7103; or Paul Alcocer at (312) 220-7615. Other contributors to this report include Jean Chase, Nancy Donovan, and Karen Penler. Founded in 1979, Mountain Park Health Center (MPHC) was formerly known as Memorial Family Health Center and was part of Phoenix Memorial Hospital. In 1987, MPHC became a community-organized primary care center. 
The center operates in urban South Phoenix, described as the “most multicultural community in Arizona.” Seventy-five percent of the center’s patients are Hispanic and 18 percent are African American. AIDS and infant mortality are among the health problems in South Phoenix, where the infant mortality rate for African Americans is 17.3 per 1,000 live births. Seventy-eight percent of the center’s patients are at or below the poverty level. Sixty-eight percent have Medicaid coverage and 14 percent are uninsured. Clinica Adelante’s rural service area consists of a main site in Surprise, Arizona, and two other sites: one in Queen Creek and another in Gila Bend. Eighty-eight percent of Clinica Adelante’s population is Hispanic. Thirty-nine percent of the center’s patients are migrant and seasonal farmworkers. Major health problems in the population covered by the center include inadequate prenatal care, postpartum visits, and newborn checks in the perinatal population; infectious diseases, inadequate nutrition, and dental decay in the pediatric population; and diabetes, hypertension, and cardiovascular disease in the adult population. Twenty-nine percent of the center’s patients have Medicaid coverage and 67 percent of them have no insurance coverage at all. Eighty-five percent are at or below the poverty level. Established 25 years ago, the El Rio health center consists of a main clinic and seven satellite clinics that provide medical and other services to the medically underserved in Tucson. With the majority of patients residing on the south and west sides of Tucson, the significant geographical barriers to health care access are isolation and the remoteness of these locations as well as poor public transportation. The locations of other health care facilities can be at a considerable distance from where most of the patients reside. In addition, language and cultural differences characterize the patients of the El Rio center. 
Almost one in seven households in the center’s service area routinely uses a language other than English in the home. Other factors limiting access to services are proximity to the U.S. border with Mexico, a large undocumented population, and a local and transient homeless population. The El Rio service area has a higher proportion of Hispanic residents than the total population: 55 percent versus 23 percent. Twenty-two percent of center patients are white and 14 percent are American Indian. Seventy-eight percent are at 100 percent or below the poverty level. Forty-one percent of center patients have Medicaid coverage and 38 percent are uninsured. Since 1964, Sunshine Health Center, Inc., has provided comprehensive primary medical and dental services to the migrant and urban poor residing in Broward County, Florida. The Sunshine Health Center serves a patient population of migrant and seasonal farm workers; immigrants from various countries including Haiti, Jamaica, Puerto Rico, and Nicaragua; and African Americans and whites, most of whom are the poor and the working poor. Thirty-two percent of the center’s patients are white, 30 percent are African American, and 20 percent are Hispanic. Located in a county that leads the United States in the increase in AIDS patients, the center serves a population with high rates of infant mortality and morbidity, sexually transmitted diseases, and chronic disorders such as hypertension and diabetes. Ninety-three percent of the patients of the center are at or below the poverty level. Thirty-eight percent are Medicaid patients and 58 percent have no insurance. From its 1967 start in a trailer, the Economic Opportunity Family Health Center (EOFHC) has evolved into its main center, six satellite centers, and affiliated school outreach programs serving the north and northwest areas of Dade County. 
Dade County has a large and rapidly growing AIDS population, significant substance abuse problems, a large migratory farmworker population, and minority populations with extremely high incidence of tuberculosis, sexually transmitted diseases, and infectious diseases. The population served by EOFHC is 70 percent African American and 20 percent Hispanic. Sixty-six percent of center revenues come from the federal government. Spectrum began operations in 1967 to provide family planning and general health services to women. Located in the West Park section of Philadelphia, Pennsylvania, the center serves an area characterized by high infant mortality, low birthweight, teenage pregnancy, and the spread of sexually transmitted diseases including HIV infection. Ninety-nine percent of Spectrum’s patients are African American and 90 percent of the center’s patients are at or below the poverty level. Seventy-one percent of center patients have Medicaid coverage and 27 percent have no insurance. Greater Philadelphia Health Action, Inc. (GPHA) provides health care to Philadelphia’s medically underserved population. GPHA operates five primary health care centers, a drug and alcohol counseling and treatment program, a child care program, and two comprehensive school-based clinics. Philadelphia’s health care problems include an infant mortality rate of 14.2 deaths per 1,000 live births; an 11.7-percent low-birth-weight rate; a high teen birth rate of 49 births per 1,000 females (up from 46 per 1,000 in 1988); increasing rates of substance abuse, especially among women; and increasing rates of HIV/AIDS. The vast majority of patients are African American (73 percent) and have incomes at or below 100 percent of the federal poverty level (88.5 percent). Seventy-three percent have Medicaid coverage and 22 percent are uninsured. The Lynn Community Health Center (LCHC) was organized in 1971 as a small storefront mental health center. 
It has grown into a comprehensive care facility that is the largest provider of outpatient primary care in Lynn, a city characterized by the center’s executive director as the most medically underserved area in Massachusetts. LCHC’s programs focus on people with the greatest barriers to care: the poor, minorities, new immigrants, non-English speaking people, teens, and the frail elderly. Sixty percent of the population served by the center do not consider English to be their first language. At present, Spanish and Russian are the most common languages spoken by the center’s patients. Over 30 percent of LCHC’s staff is bilingual or multilingual and can provide translation services in Spanish, Khmer, Vietnamese, Laotian, and Russian. Forty-five percent of center patients are white, 35 percent are Hispanic, and 11 percent are African American. Sixty-three percent are at or below the poverty level. Fifty-six percent have Medicaid coverage and 31 percent have no insurance. This center was founded in 1972 by a group of mothers living in Worcester’s largest housing project—the Great Brook Valley and Curtis Apartments. These women founded the center because they and their children lacked access to primary care. The center has grown from providing well-child care services to the residents of public housing projects to a comprehensive health center serving the surrounding neighborhood. Special populations requiring services include the perinatal population (in Worcester, rates in two areas—infant mortality and low-birth-weight infants—have been above the state average for the past decade) and the Spanish-speaking elderly population who are monolingual. In addition, the HIV/AIDS epidemic is growing in Worcester, particularly among the minority populations and among the estimated 4,000 injection drug users in the city. In addition, adolescents are exposed to high levels of stress, violence, and depression. The Hispanic community represents 76 percent of center patients. 
Ninety-five percent of those using the center are at or below the poverty level. Fifty-five percent are covered by Medicaid and 30 percent have no insurance. Roxbury Comprehensive Community Health Center (RoxComp), established in 1969 by a mother concerned about the lack of medical services in the Roxbury community, is the largest community health center serving the Roxbury and North Dorchester areas. Health status indicators for these communities are worse than the national average. For example, the infant mortality rate is twice the national average of 10.1 per 1,000 live births. The area served by the center also exceeds the national average in deaths from heart disease, cancer, stroke, pneumonia, influenza, cirrhosis, homicide, suicide, and injuries. Approximately 20 percent of reported AIDS cases in Boston come from this area. Substance abuse among patients 19 years old and younger and among pregnant women is a problem in the area. Residents served by the center are poor, with 91 percent at or below the poverty level. Eighty-eight percent of center patients are African American. Sixty-two percent have Medicaid coverage and 26 percent have no insurance. To examine how Medicaid prepaid managed care affected community health centers’ ability to continue their mission of providing community-based health care to underserved populations, we first selected a nonrandom judgmental sample of states with a variety of Medicaid managed care situations. The states included Arizona, Florida, Massachusetts, and Pennsylvania, whose prepaid managed care programs included (1) mandatory and voluntary enrollment of beneficiaries, (2) statewide and more geographically limited programs, and (3) capitated Medicaid programs implemented with and without waivers (see table II.1). 
Table II.1: Characteristics of Four State Programs

In each state, we then visited selected health centers that had prepaid managed care plans operating in their areas for at least 3 years and gathered at least 5 years’ worth of audited financial statements. Program data for the same period were obtained from health center responses to the Bureau of Primary Health Care’s Common Reporting Requirements.

To determine whether health centers were encountering financial difficulties while engaged in prepaid managed care operations, we compiled data on their financial positions. Specifically, we reviewed data on year-end fund balances, which represent the excess of center assets over their liabilities. In addition, we calculated the number of days of operating expenses that cash balances could support. We analyzed program data in several different ways. To determine whether health centers were maintaining access for underserved and vulnerable populations, we compiled data on the number of patients served and the number of patient encounters—a proxy measure for patient visits. To determine whether health centers were continuing to provide enabling services to their communities, we compiled data on spending for other health and community services, including transportation and translation services. In addition, we reviewed the number of full-time-equivalent staff hired to provide these services. To determine whether health centers were continuing to provide care to indigent and low-income patients, we compiled data on the amount of subsidized care. To determine whether health centers’ sources of funds were changing under prepaid managed care, we compared these sources to total receipt of funds. We also conducted work in two states that have more recently begun capitated Medicaid managed care programs—Tennessee and Washington. 
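The liquidity measure described above, the number of days of operating expenses that a center's cash balance could support, can be sketched as follows. The figures are hypothetical, not drawn from any center's financial statements.

```python
# A minimal sketch of the "days cash on hand" liquidity measure:
# how many days of operating expenses current cash could cover.
def days_cash_on_hand(cash_balance, annual_operating_expenses):
    """Days of operating expenses that cash on hand could support."""
    daily_expenses = annual_operating_expenses / 365
    return cash_balance / daily_expenses

# Hypothetical example: $450,000 in cash against $3.65 million in
# annual operating expenses.
print(round(days_cash_on_hand(450_000, 3_650_000)))  # → 45
```

A center whose days-cash figure is shrinking year over year is drawing down reserves, which is one way the financial-position data described above can signal difficulty even before fund balances turn negative.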
Washington is making specific accommodations for health centers as it implements its Healthy Options program and is helping the centers establish their own Medicaid health plan. In contrast, Tennessee has so far not made programmatic changes to accommodate health centers, such as requiring their inclusion as providers. At all the health centers we visited, we toured the facilities and interviewed administrators. We also interviewed officials of health plans operating in the area, some that contracted with health centers and some that did not; state community health center associations; and state Medicaid officials. We also interviewed BPHC, HRSA, and National Association of Community Health Center officials. Because we selected our sites judgmentally, our results do not necessarily represent all health centers’ experience with prepaid managed care but illustrate the kinds of issues faced by health centers in these systems. Our work was performed between January 1994 and March 1995 in accordance with generally accepted government auditing standards.
GAO reviewed the effects of managed health care on community health centers, focusing on: (1) whether centers participating in prepaid managed care have been able to provide medical services without jeopardizing their financial position; (2) lessons learned from centers' experiences in prepaid managed care; and (3) whether the Bureau of Primary Health Care (BPHC) prepares community health centers to operate under prepaid managed care systems. GAO found that by 1993: (1) almost 500,000 community health center patients were covered by prepaid managed care arrangements; (2) the 10 centers surveyed were able to continue to provide full services to their vulnerable clients in part due to other revenue sources; (3) all 10 centers increased their patient load and spending for a variety of services, while 7 centers also increased their spending for uncompensated care; (4) all 10 centers improved their financial condition due to increased revenues from a variety of sources; and (5) 3 centers had losses of up to $124,000, while 6 centers had excess revenues of up to $100,000 from prepaid managed care. GAO also found that: (1) the centers may be financially vulnerable if they depend on Medicaid prepaid managed care for a sizeable portion of their revenues, have inadequate capitation rates, and have financial responsibility for other than primary care services or rely on other federal and state funding sources; (2) lessons learned from centers' experiences with prepaid managed care include the likely loss of patients if the centers fail to participate, low capitation rates, assumption of too much financial risk, and the lack of managed care skills; and (3) to encourage centers' participation in prepaid managed care, BPHC has implemented an initiative to fund centers' efforts to develop delivery networks with other health providers for managed care operations.
Because DOD is one of the largest and most complex organizations in the world, overhauling its business operations represents a huge management challenge. In fiscal year 2004, DOD reported that its operations involved $1.2 trillion in assets, $1.7 trillion in liabilities, over 3.3 million military and civilian personnel, and over $605 billion in net cost of operations. For fiscal year 2005, the department received an annual appropriation of about $417 billion and was appropriated about $76 billion for the global war on terrorism. Execution of DOD’s operations spans a wide range of defense organizations, including the military services and their respective major commands and functional activities, numerous large defense agencies and field activities, and various combatant and joint operational commands that are responsible for military operations for specific geographic regions or theaters of operation. To support DOD’s operations, the department performs an assortment of interrelated and interdependent business processes, including logistics, procurement, health care, and financial management. Transformation of DOD’s business systems and operations is critical to providing Congress and DOD management with accurate and timely information for use in the decision-making process. This effort is an essential part of the Secretary of Defense’s broad initiative to “transform the way the department works and what it works on.” The Secretary of Defense has estimated that improving business operations of the department could save 5 percent of DOD’s annual budget, which, based on fiscal year 2005 appropriations, represents a savings of about $25 billion.

For several years, we have reported that DOD faces a range of financial management and related business process challenges that are complex, long-standing, pervasive, and deeply rooted in virtually all business operations throughout the department. 
As the Comptroller General testified in April 2005, DOD’s financial management deficiencies, taken together, continue to represent a major impediment to achieving an unqualified opinion on the U.S. government’s consolidated financial statements. To date, none of the military services has passed the test of an independent financial audit because of pervasive weaknesses in internal controls and processes and fundamentally flawed business systems. In identifying improved financial performance as one of its five governmentwide initiatives, the President’s Management Agenda recognized that without sound internal controls and accurate and timely financial and performance information, it is not possible to accomplish the President’s agenda and secure the best performance and highest measure of accountability for the American people. Long-standing weaknesses in DOD’s financial management and related business processes and systems have (1) resulted in a lack of reliable information needed to make sound decisions and report on the status of DOD activities, including accountability of assets, through financial and other reports to Congress and DOD decision makers; (2) hindered its operational efficiency; (3) adversely affected mission performance; and (4) left the department vulnerable to fraud, waste, and abuse, as the following examples illustrate.

The current inefficient, paper-intensive, error-prone travel reimbursement process has resulted in inaccurate, delayed, and denied travel payments for mobilized Army Guard soldiers. We found a broad range of reimbursement problems, including disputed amounts for meals (estimated to be as high as about $6,000 for each of 76 soldiers in one case study) that remained unpaid by the end of our review. Until DOD improves the antiquated process that requires Army Guard soldiers to accumulate, retain, and submit numerous paper documents, reimbursement problems and inefficiencies will likely continue. 
Of approximately 930,000 travel vouchers received between fiscal years 2002 and 2004, the Defense Finance and Accounting Service (DFAS) Contingency Travel Operations Office rejected and returned about 139,000 vouchers to soldiers for additional paper documentation or to correct other processing deficiencies. This repeated churning of vouchers frustrated soldiers and added to the volume of claims to be processed. Injured and ill reserve component soldiers—who are entitled to extend their active duty service to receive medical treatment—have been inappropriately removed from active duty status in the automated systems that control pay and access to medical care. The current stovepiped, nonintegrated systems are labor-intensive and require extensive error-prone manual entry and reentry. Inadequate controls resulted in some soldiers experiencing significant gaps in their pay and medical benefits, causing hardships for the soldiers and their families. In addition, because these soldiers no longer had valid active duty orders, they did not have access to the commissary and post exchange—which allows soldiers and their families to purchase groceries and other goods at a discount. In one case we reviewed, during a 12-month period, while attempting to obtain care for injuries sustained from a helicopter crash in Afghanistan, one Special Forces soldier fell out of active duty status four times. During the times he was not recorded in the system as being on active duty, he was not paid and he and his family experienced delays in receiving medical treatment. In all, he missed payments for 10 pay periods—totaling $11,924. Ninety-four percent of mobilized Army National Guard and Reserve soldiers we investigated during two audits had pay problems. These problems distracted soldiers from their missions, imposed financial hardships on their families, and may have a negative impact on retention. 
The processes and automated systems relied on to provide active duty payments to mobilized Army Guard and Reserve soldiers are so error-prone, cumbersome, and complex that neither DOD nor, more importantly, the soldiers themselves could be reasonably assured of timely and accurate payments. Some of the pay problems soldiers experienced lingered unresolved for considerable lengths of time, some for over a year.

DOD continues to lack visibility and control over the supplies and spare parts it owns. Therefore, it cannot monitor the responsiveness and effectiveness of the supply system to identify and eliminate choke points. Currently, DOD does not have the ability to provide timely or accurate information on the location, movement, status, or identity of its supplies. Although total asset visibility has been a departmentwide goal for over 30 years, DOD estimates that it will not achieve this visibility until the year 2010. DOD may not meet this goal by 2010, however, unless it overcomes three significant impediments: (1) the lack of a comprehensive plan for achieving visibility, (2) the absence of integration among its many inventory management information systems, and (3) long-standing data accuracy and reliability problems within existing inventory management systems. A key to successful implementation of a comprehensive logistics strategy will be addressing these initiatives as part of a comprehensive, integrated business transformation.

The Defense Logistics Agency (DLA) and each of the military services experienced significant shortages of critical spare parts, even though more than half of DOD’s reported inventory—about $35 billion—exceeded current operating requirements. In many cases, these shortages contributed directly to equipment downtime, maintenance problems, and the services’ failure to meet their supply availability goals. 
DOD, DLA, and the military services each lack strategic approaches and detailed plans that could help mitigate these critical spare parts shortages and guide their many initiatives aimed at improving inventory management.

The Navy did not know how much it spent on telecommunications and did not have detailed cost and inventory data needed to evaluate spending patterns and to leverage its buying power. At the four case study sites we audited, management oversight of telecommunication purchases did not provide reasonable assurance that requirements were met in the most cost-effective manner. For example, cell phone usage at three sites was not monitored to determine whether plan minutes met users’ needs, resulting in overpayment for cell phone services. In addition, the Navy lacks specific policies and processes addressing the administration and management of calling cards. On one card alone, in a 3-month period, the Navy paid over $17,000. Not until the vendor’s fraud unit raised questions about more than $11,000 in charges in a 6-day period was the card suspended.

Over the years, DOD recorded billions of dollars of disbursements and collections in suspense accounts because the proper appropriation accounts could not be identified and charged. Because documentation needed to resolve these payment recording problems could not be found after so many years, DOD requested and received authority to write off certain aged suspense transactions. While DOD reported that it wrote off an absolute value of $35 billion or a net value of $629 million using the legislative authority, neither of these amounts accurately represents the true value of all the individual transactions that DOD had not correctly recorded in its financial records. Many of DOD’s accounting systems and processes routinely offset individual disbursements, collections, adjustments, and correction entries against each other and, over time, amounts might even have been netted more than once. 
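The gap between the $35 billion absolute value and the $629 million net value of the write-offs reflects this netting of offsetting entries. A minimal sketch, using hypothetical figures, shows why the two measures diverge when disbursements and collections are offset against each other.

```python
# Hypothetical suspense-account transactions: positive entries are
# disbursements, negative entries are collections. Offsetting pairs
# largely cancel in the net total but not in the absolute total.
transactions = [150_000, -148_000, 75_000, -74_500, 2_100]

net_value = sum(transactions)                       # offsets cancel
absolute_value = sum(abs(t) for t in transactions)  # full transaction volume

print(f"Net value:      {net_value:,}")
print(f"Absolute value: {absolute_value:,}")
```

Here the net residual ($4,600) is a tiny fraction of the $449,600 of underlying activity, which is why neither figure, by itself, identifies which individual transactions (and which appropriations) were affected.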
This netting and summarizing misstated the total value of the write-offs and made it impossible for DOD to identify what appropriations may have been under- or overcharged or to determine whether individual transactions were valid. At December 31, 2004, DOD reports showed that, even after the write-offs, more than $1.3 billion (absolute value) remained in suspense accounts for longer than 60 days; however, DOD has acknowledged that its suspense reports are incomplete and inaccurate. In addition, DOD is still not performing effective reconciliations of its disbursement and collection activity. Similar to checkbook reconciliations, DOD needs to compare its records of monthly activity to Treasury’s records and promptly research and correct any differences.

In September 2004, we reported that DOD had begun implementing a financial improvement initiative that included the goal of obtaining an unqualified audit opinion on its fiscal year 2007 consolidated financial statements but that the initiative lacked a clearly defined, integrated, well-documented, and realistic plan for improving DOD’s financial management and thus achieving that goal. We also reported that DOD lacked effective oversight and accountability mechanisms to ensure that the mid-range financial improvement plans being developed by the military services and defense agencies in support of the initiative were adequately planned, implemented, and sustainable. Our report expressed concern that DOD’s emphasis on obtaining a clean audit opinion for fiscal year 2007 could divert limited resources away from ongoing efforts to develop and implement the long-term systems and process changes needed to improve financial information and to efficiently and effectively manage DOD’s business operations. In the Ronald W. 
Reagan National Defense Authorization Act for Fiscal Year 2005, Congress placed a limitation on the use of operations and maintenance funds for continued preparation or implementation of DOD’s mid-range financial improvement plan. Use of such funds for the mid-range plan is prohibited until the Secretary of Defense submits to the congressional defense committees a report containing the following: (1) a determination that DOD’s business enterprise architecture (BEA) and the transition plan for implementing the BEA have been developed, (2) an explanation of the manner in which fiscal year 2005 operations and maintenance funds will be used by DOD components to prepare or implement the mid-range financial improvement plan, and (3) an estimate of future year costs for each of the military services and defense agencies to prepare and implement the mid-range financial improvement plan. As of the end of May 2005, DOD has not yet provided the defense committees with the required report.

Until DOD has complete, reliable information on the costs and number of business systems operating within the department, its ability to effectively control the money it spends on these systems will be limited. DOD’s fiscal year 2005 budget request for its business systems was $13.3 billion, which, on its face, is about $6 billion, or 29 percent, less than its fiscal year 2004 budget request. However, we found that this decrease can be attributed to DOD’s reclassification of some business systems to national security systems, not to a reduction in spending on its systems. While some of the reclassifications appeared reasonable, our analysis showed that others were questionable or inconsistent, which hinders DOD’s ability to develop a definitive business systems inventory. At the same time the amount of requested business system funding declined, the reported number of business systems increased by about 1,900—from 2,274 in April 2003 to 4,150 in February 2005. 
Furthermore, given that DOD does not know how many business systems it has, it is not surprising that the department continues to lack effective management oversight and control over business systems investments. Since February 2003, the domains have been given the responsibility to oversee the department’s business systems investments, yet the billions of dollars spent each year continue to be spread among the military services and defense agencies, enabling the numerous DOD components to continue to develop stovepiped, parochial solutions to the department’s long-standing financial management and business operation challenges. Additionally, based upon data reported to us by the military services and DOD components, obligations totaling at least $243 million were made for systems modernizations in fiscal year 2004 that were not referred to the DOD Comptroller for the required review, as specified in the fiscal year 2003 defense authorization act.

For fiscal year 2005, DOD requested approximately $28.7 billion in IT funding to support a wide range of military operations as well as DOD business systems operations. Of the $28.7 billion, our analysis showed that about $13.3 billion was for business applications and related infrastructure; the remaining $15.4 billion was classified as being for national security systems, of which our analysis showed that about $7.5 billion was for infrastructure and related costs. Business applications include activities that support the business functions of the department, such as personnel, health, travel, acquisition, finance and accounting, and logistics. Of the $13.3 billion for business systems, about $8.4 billion was for infrastructure and related costs. Viewed by purpose rather than by type, $10.7 billion of the $13.3 billion was for the operation and maintenance of the existing systems and $2.6 billion was for the modernization of existing systems, the development of new systems, or both. 
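The reported subtotals above can be checked for internal consistency. This is a sketch using the rounded billion-dollar figures cited in this testimony; note that the two splits of the $13.3 billion business total are cross-cutting views of the same money (infrastructure versus applications on one hand, operation and maintenance versus modernization on the other), so only same-view subtotals should sum to the total.

```python
# Reported fiscal year 2005 IT budget figures, in billions of dollars.
total_request = 28.7       # total IT request
business_total = 13.3      # business applications and related infrastructure
national_security = 15.4   # classified as national security systems

oandm = 10.7               # operation and maintenance of existing systems
modernization = 2.6        # modernization and/or new development

# Same-view subtotals should reproduce the totals (within rounding).
assert abs(business_total + national_security - total_request) < 0.05
assert abs(oandm + modernization - business_total) < 0.05
print("reported subtotals are internally consistent")
```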
Table 1 shows the distribution, by DOD component, of the reported $13.3 billion between current services and modernization funding.

Incorrect system classification hinders the department’s efforts to improve its control and accountability over its business systems investments. Our comparison of the fiscal years 2004 and 2005 budget requests disclosed that DOD reclassified 56 systems in the fiscal year 2005 budget request from business systems to national security systems, which are not subject to the same level of investment control. The net effect of the reclassifications was a decrease of approximately $6 billion in the fiscal year 2005 budget request for business systems and related infrastructure. The reported amount declined from about $19 billion in fiscal year 2004 to over $13 billion in fiscal year 2005. In some cases, the reclassification appeared reasonable. For example, reclassifying the Defense Information System Network initiative as a national security system appeared reasonable since it provides a secure telecommunication network—voice, data, and video—to the President, the Secretary of Defense, the Joint Chiefs of Staff, and military personnel in the field. However, our analysis of the 56 systems also identified instances for which reclassification was questionable. For example, Base Level Communication Infrastructure—initiative number 254—for several DOD entities was shown as a national security system in the fiscal year 2005 budget request. Our review of the fiscal year 2005 budget found that within the Air Force, there were numerous other initiatives entitled Base Level Communication Infrastructure that were classified as business systems, not national security systems. The nomenclature describing these different initiatives was the same, making it difficult to ascertain why certain initiatives were classified as national security systems while others, with the same name, were classified as business systems. 
In another example, this is the first year in which the Navy enterprise resource planning (ERP) effort was listed in the budget and incorrectly classified as a national security system. Its forerunners, four pilot ERP projects, have been classified as business systems since their inception. DOD officials were not able to provide a valid explanation as to why the program was classified as a national security program. For the fiscal year 2006 budget request, the Navy has requested that the DOD CIO reclassify the program from a national security system to a business system. Improper classification diminishes Congress’s ability to effectively monitor and oversee the billions of dollars spent annually to maintain, operate, and modernize the department’s business systems environment. The department’s reported number of business systems continues to rise, and DOD does not yet have reasonable assurance that the currently reported number of business systems is complete. As of February 2005, DOD reported that its business systems inventory consisted of 4,150 systems, which is an increase of approximately 1,900 reported business systems since April 2003. Table 2 presents a comparison of the April 2003 and February 2005 reported business systems inventories by domain. The largest increase is due to the logistics domain increasing its reported inventory of business systems from 565 in April 2003 to the current 2,005. We reported in May 2004 that the logistics domain had validated about 1,900 business systems but had not yet entered most of them into the BMMP systems inventory. Logistics domain officials informed us that they completed that process and this increase was the result. Table 3 shows the distribution of the 4,150 business systems among the components and domains. The table shows the stovepiped, duplicative nature of DOD’s business systems. 
For example, there are 713 human resources systems across all components whose reported funding for fiscal year 2005 includes approximately $223 million for modernization and over $656 million for operation and maintenance. According to DOD officials, the Defense Integrated Military Human Resources System (DIMHRS) is intended to totally or partially replace 113 of these systems. We were informed that the remaining 600 human resources systems are to be reviewed in the context of DOD’s BEA, as it is developed. In discussing the increase in the number of reported systems, some of the domains stated that funding for many of the systems is not included in the IT budget request. They said that some of these systems were likely developed at the local level and financed by the operation and maintenance funds received at that location and therefore were not captured and reported as part of the department’s annual IT budget request. Financing business systems in this manner rather than within the IT budget results in Congress and DOD management not being aware of the total amount being spent to operate, maintain, and modernize the department’s business systems.

We found that DOD is not in compliance with the fiscal year 2003 defense authorization act, which requires that all financial system improvements with obligations exceeding $1 million be reviewed by the DOD Comptroller. Based upon the reported obligational data provided to us by the military services and the defense agencies for fiscal year 2004, we identified 30 modernizations with obligations totaling about $243 million that were not submitted for the required review. Because DOD lacks a systematic means to identify the systems that were subject to the requirements of the fiscal year 2003 defense authorization act, there is no certainty that the information provided to us accurately identified all systems improvements with obligations greater than $1 million during the fiscal year. 
BMMP officials stated that the domains were responsible for working with the components to make sure that business systems with obligations for modernizations greater than $1 million were submitted for review as required. In essence, compliance was achieved via the “honor system,” which relied on systems owners coming forward and requesting approval. However, the approach did not work. During fiscal year 2004, the number of systems reviewed was small when compared to the potential number of systems that appeared to meet the obligation threshold identified in the fiscal year 2004 budget request. We analyzed the DOD IT budget request for fiscal year 2004 and identified over 200 systems in the budget that could involve modernizations with obligations of funds that exceed the $1 million threshold. However, BMMP officials confirmed that only 46 systems were reviewed, of which 38 were approved as of September 30, 2004. The remaining 8 systems were either withdrawn by the component/domain or were returned to the component/domain because the system package submitted for review lacked some of the required supporting documentation, such as the review by the Office of Program Analysis and Evaluation, if necessary. In an attempt to substantiate that financial system improvements with over $1 million in obligations had in fact been reviewed by the DOD Comptroller, as provided for in the fiscal year 2003 act, we requested that DOD entities provide us with a list of obligations (by system) greater than $1 million for modernizations for fiscal year 2004. We compared the reported obligational data to the system approval data reported to us by BMMP officials. Based upon this comparison and as shown in table 4, DOD-provided data showed that 30 business systems with obligations totaling about $243 million in fiscal year 2004 for modernizations were not reviewed by the DOD Comptroller. 
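The comparison described above amounts to a set difference: systems whose modernization obligations exceeded the $1 million threshold but that do not appear in the Comptroller's approval records. A sketch of that check follows; all system names and dollar figures are invented for illustration, not taken from table 4.

```python
# Flag systems whose modernization obligations exceeded the statutory
# $1 million review threshold without a Comptroller review on record.
THRESHOLD = 1_000_000

obligations = {            # system -> FY2004 modernization obligations ($)
    "System A": 3_000_000,
    "System B": 34_000_000,
    "System C": 800_000,   # below threshold; review not required
    "System D": 5_000_000,
}
reviewed = {"System D"}    # systems submitted for Comptroller review

unreviewed = {name: amount for name, amount in obligations.items()
              if amount > THRESHOLD and name not in reviewed}

print(sorted(unreviewed))                # → ['System A', 'System B']
print(f"${sum(unreviewed.values()):,}")  # → $37,000,000
```

As the testimony notes, a check like this is only as good as the obligation data fed into it, which is why the lack of a systematic means to identify covered systems leaves the $243 million figure a floor rather than a complete total.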
Examples of DOD business systems modernizations with obligations in excess of $1 million included in table 4 that were not submitted to the DOD Comptroller include the following. DFAS obligated about $3 million in fiscal year 2004 for the DFAS Corporate Database/DFAS Corporate Warehouse (DCD/DCW). In fiscal year 2003, DFAS obligated approximately $19 million for DCD/DCW without submitting it to the DOD Comptroller for review. Additionally, we reported in May 2004 that DFAS had yet to complete an economic analysis justifying that continued investment in DCD/DCW would result in tangible improvements in the department’s operations. The department has acknowledged that DCD/DCW will not result in tangible savings to DOD; continued investment is instead being justified by the intangible savings of man-hour reductions at DFAS. The Army obligated over $34 million for its Logistics Modernization Program (LMP) in fiscal year 2004. In fiscal year 2003, the Army obligated over $52 million without the required review being performed by the DOD Comptroller. We have previously reported that LMP experienced significant problems once it became operational at the first deployment site. Cumulatively, since passage of the fiscal year 2003 defense authorization act in December 2002 through the end of fiscal year 2004, based upon information reported to us, the military services and defense components obligated about $651 million for business systems modernizations without the required review by the DOD Comptroller. While this amount is significant, it is not complete or accurate because it does not include any fiscal year 2005 obligations that occurred prior to the enactment of the fiscal year 2005 defense authorization act on October 28, 2004.

The statutory requirements enacted as part of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 are aimed at improving the department’s business systems management practices. 
The act directs DOD to put in place a definite management structure that is responsible for the control and accountability over business systems investments by establishing a hierarchy of investment review boards from across the department and directs that the boards use a standard set of investment review and decision-making criteria to ensure compliance and consistency with the BEA. DOD has taken several steps to address provisions of the fiscal year 2005 defense authorization act. On March 19, 2005, the Deputy Secretary of Defense delegated the authority for the review, approval, and oversight of the planning, design, acquisition, development, operation, maintenance, and modernization of defense business systems to the designated approval authority for each business area. Additionally, on March 24, 2005, the Deputy Secretary of Defense directed the transfer of program management, oversight, and support responsibilities regarding DOD business transformation efforts from the Office of the Under Secretary of Defense (Comptroller) to the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. According to the directive, this transfer of functions and responsibilities will allow the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics to establish the level of activity necessary to support and coordinate activities of the newly established Defense Business Systems Management Committee (DBSMC). As required by the act, DBSMC, with representation including the Deputy Secretary of Defense, the designated approval authorities, secretaries of the military services, and heads of the defense agencies, is the highest ranking governance body responsible for overseeing DOD business systems modernization efforts. 
I would like to reiterate two suggestions for legislative consideration that I discussed in my July 2004 testimony, which I believe could further improve the likelihood of successful business transformation at DOD. Most of the key elements necessary for successful transformation could be achieved under the current legislative framework; however, addressing sustained and focused leadership for DOD business transformation and funding control will require additional legislation. These suggestions include the appropriation of business system funding to the approval authorities responsible and accountable for business systems investments under provisions enacted by the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, and the creation of a chief management officer (CMO). DOD’s current business systems investment process, in which system funding is controlled by DOD components, has contributed to the evolution of an overly complex and error-prone information technology environment containing duplicative, nonintegrated, and stovepiped systems. We have made numerous recommendations to DOD to improve the management oversight and control of its business systems investments. However, as previously discussed, a provision of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 established specific management oversight and accountability with the “owners” of the various core business mission areas. This legislation defines the scope of the various business areas (e.g., acquisition, finance, and logistics) and establishes functional approval authority and responsibility for management of the portfolio of business systems with the relevant under secretary of defense for the departmental core business mission areas and the Assistant Secretary of Defense for Networks and Information Integration (information technology infrastructure). 
For example, the Under Secretary of Defense for Acquisition, Technology and Logistics is now responsible and accountable for any defense business system intended to support acquisition activities, logistics activities, or installations and environment activities for DOD. This legislation also requires that the responsible approval authorities establish a hierarchy of investment review boards, the highest level being DBSMC, with DOD-wide representation, including the military services and defense agencies. The boards are responsible for reviewing and approving investments to develop, operate, maintain, and modernize business systems for their business-area portfolio, including ensuring that investments are consistent with DOD’s BEA. Although this recently enacted legislation clearly defines the roles and responsibilities of business systems investment approval authorities, control over the budgeting for and execution of funding for business systems investment activities remains at the DOD component level. As a result, DOD continues to have little or no assurance that its business systems investment money is being spent in an economical, efficient, and effective manner. Given that DOD spends billions on business systems and related infrastructure each year, we believe it is critical that those responsible for business systems improvements control the allocation and execution of funds for DOD business systems. However, implementation may require review of the various statutory authorities for the military services and other DOD components. Control over business systems investment funds would improve the capacity of DOD’s designated approval authorities to fulfill their responsibilities and gain transparency over DOD investments, and minimize the parochial approach to systems development that exists today. 
In addition, to improve coordination and integration activities, we suggest that all approval authorities coordinate their business systems modernization efforts with a CMO who would chair the DBSMC. Cognizant business area approval authorities would also be required to report to Congress through a CMO and the Secretary of Defense on applicable business systems that are not compliant with review requirements and to include a summary justification for noncompliance. As DOD embarks on large-scale business transformation efforts, we believe that the complexity and long-term nature of these efforts require the development of an executive position capable of providing strong and sustained change management leadership across the department—and over a number of years and various administrations. One way to ensure such leadership would be to create by legislation a full-time executive-level II position for a CMO, who would serve as the Deputy Secretary of Defense for Management. This position would elevate, integrate, and institutionalize the high-level attention essential for ensuring that a strategic business transformation plan—as well as the business policies, procedures, systems, and processes that are necessary for successfully implementing and sustaining overall business transformation efforts within DOD—is implemented and sustained. An executive-level II position for a CMO would provide this individual with the necessary institutional clout to overcome service parochialism and entrenched organizational silos, which in our opinion need to be streamlined below the service secretaries and other levels. The CMO would function as a change agent, while other DOD officials would still be responsible for managing their daily business operations. 
The position would divide and institutionalize the current functions of the Deputy Secretary of Defense into a Deputy Secretary who, as the alter ego of the Secretary, would focus on policy-related issues such as military transformation, and a Deputy Secretary of Defense for Management, the CMO, who would be responsible and accountable for the overall business transformation effort and would serve full-time as the strategic integrator of DOD’s business transformation efforts by, for example, developing and implementing a strategic and integrated plan for business transformation efforts. The CMO would not conduct the day-to-day management functions of the department; therefore, creating this position would not add an additional hierarchical layer to the department. Day-to-day management functions of the department would continue to be the responsibility of the undersecretaries of defense, the service secretaries, and others. Just as the CMO would need to focus full-time on business transformation, we believe that the day-to-day management functions are so demanding that it is difficult for these officials to maintain the oversight, focus, and momentum needed to implement and sustain needed reforms of DOD’s overall business operations. This is particularly evident given the demands that the Iraq and Afghanistan postwar reconstruction activities and the continuing war on terrorism have placed on current leaders. Likewise, the breadth and complexity of the problems and their overall level within the department preclude the under secretaries, such as the DOD Comptroller, from asserting the necessary authority over selected players and business areas while continuing to fulfill their other responsibilities. If created, we believe that the new CMO position could be filled by an individual appointed by the President and confirmed by the Senate, for a set term of 7 years. 
As prior GAO work examining the experiences of major change management initiatives in large private and public sector organizations has shown, it can often take at least 5 to 7 years until such initiatives are fully implemented and the related cultures are transformed in a sustainable way. Articulating the roles and responsibilities of the position in statute would also help to create unambiguous expectations and underscore Congress’s desire to follow a professional, nonpartisan, sustainable, and institutional approach to the position. In that regard, an individual appointed to the CMO position should have a proven track record as a business process change agent in large, complex, and diverse organizations—experience necessary to spearhead business process transformation across DOD. Furthermore, to improve coordination and integration activities, we suggest that all business systems modernization approval authorities designated in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 coordinate their efforts with the CMO, who would chair the DBSMC that DOD recently established to comply with the act. We also suggest that cognizant business area approval authorities be required to report to Congress through the CMO and the Secretary of Defense on applicable business systems that are not compliant with review requirements and to include a summary justification for noncompliance. In addition, the CMO would enter into an annual performance agreement with the Secretary that sets forth measurable individual goals linked to overall organizational goals in connection with the department’s business transformation efforts. Measurable progress toward achieving agreed-upon goals should be a basis for determining the level of compensation earned, including any related bonus. In addition, the CMO’s achievements and compensation should be reported to Congress each year. 
As previously noted, on April 14, 2005, a bill was introduced in the Senate that requires the establishment of a CMO who would be appointed by the President and confirmed by the Senate, for a set term of 7 years. DOD lacks the efficient and effective financial management and related business operations, including processes and systems, to support the war fighter, DOD management, and Congress. With a large and growing fiscal imbalance facing our nation, achieving tens of billions of dollars of annual savings through successful DOD transformation is increasingly important. Recent legislation pertaining to defense business systems, enterprise architecture, accountability, and modernization, if properly implemented, should improve oversight and control over DOD’s significant system investment activities. However, DOD’s transformation efforts to date have not adequately addressed key underlying causes of past reform failures. Reforming DOD’s business operations is a monumental challenge and many well-intentioned efforts have failed over the last several decades. Lessons learned from these previous reform attempts include the need for sustained and focused leadership at the highest level. This leadership could be provided through the establishment of a CMO. Absent this leadership, authority, and control of funding, the current transformation efforts are likely to fail. I commend the Subcommittee for holding this hearing and I encourage you to use this vehicle, on an annual basis, as a catalyst for long overdue business transformation at DOD. Mr. Chairman, this concludes my statement. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-9095 or kutzg@gao.gov. 
The following individuals contributed to the various reports and testimonies that were the basis for the testimony: Beatrice Alff, Renee Brown, Donna Byers, Molly Boyle, Mary Ellen Chervenic, Francine DelVecchio, Francis Dymond, Geoff Frank, Gina Flacco, Diane Handley, Cynthia Jackson, Evelyn Logue, John Martin, Elizabeth Mead, Dave Moser, Mai Nguyen, Sharon Pickup, David Plocher, John Ryan, and Darby Smith. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In July 2004, GAO testified before Congress on the impact and causes of financial and related business weaknesses on the Department of Defense's (DOD) operations and the status of DOD reform efforts. The report released today highlights that DOD still does not have management controls to ensure that its business systems investments are directed towards integrated corporate system solutions. GAO's reports continue to show that fundamental problems with DOD's financial management and related business operations result in substantial waste and inefficiency, adversely impact mission performance, and result in a lack of adequate accountability across all major business areas. Over the years, DOD leaders attempted to address these weaknesses and transform the department. For years, GAO has reported that DOD is challenged in its efforts to effect fundamental financial and business management reform and GAO's ongoing work continues to raise serious questions about DOD's chances of success. Congress asked GAO to provide information on the (1) pervasive long-standing financial and business management weaknesses that affect DOD's efficiency, (2) cost of and control over the department's business systems investments, and (3) legislative actions needed to enhance the success of DOD's business transformation efforts. Overhauling the financial management and business operations of one of the largest and most complex organizations in the world represents a daunting challenge. Eight DOD program areas, representing key business functions, are on GAO's high-risk list, and the department shares responsibility for six other governmentwide high-risk areas, meaning that DOD is fully or partially responsible for 14 of the 25 high-risk areas in the federal government. 
DOD's substantial financial and business management weaknesses adversely affect not only its ability to produce auditable financial information, but also to provide accurate, complete, and timely information for management and Congress to use in making informed decisions. Further, the lack of adequate accountability across all of DOD's major business areas results in billions of dollars in annual wasted resources in a time of increasing fiscal constraint and has a negative impact on mission performance. The department has recently taken several steps to address provisions of the fiscal year 2005 defense authorization act which are aimed at improving DOD's business systems management practices. For example, DOD has established the Defense Business Systems Management Committee to oversee its business systems modernization efforts. However, DOD's overall transformation efforts have not adequately addressed the key causes of past reform failures. Lessons learned from these previous reform attempts include the need for sustained leadership at the highest level and a strategic and integrated plan. The seriousness of DOD's weaknesses underscores the importance of no longer condoning the "status quo." To improve the likelihood that DOD's transformation efforts will succeed, GAO proposes that business systems funding be appropriated to the approval authorities responsible for business systems investments. Additionally, GAO suggests that a senior management position be established to provide sustained leadership for DOD's overall business transformation. Absent this unified responsibility, authority, accountability, and control of funding, DOD's transformation efforts are likely to fail.
The Mineral Leasing Act of 1920 charges Interior with overseeing oil and gas leasing on federal lands and private lands where the federal government has retained mineral rights covering about 700 million onshore acres. Offshore, the Outer Continental Shelf Lands Act, as amended, gives Interior the responsibility for leasing and managing approximately 1.76 billion acres. BLM and BOEMRE are responsible for issuing permits for oil and gas drilling; establishing guidelines for measuring oil and gas production; conducting production inspections; and generally providing oversight for ensuring that oil and gas companies comply with applicable laws, regulations, and department policies. This oversight includes the authority to ensure that firms produce oil and gas in a manner that minimizes any waste of these resources. Together, BLM and BOEMRE are responsible for oversight of oil and gas operations on more than 28,000 producible leases. Interior’s MRM program, which is managed under BOEMRE, is charged with ensuring that the federal government receives royalties from the operators that produce oil and gas from both onshore and offshore federal leases. MRM is responsible for collecting royalties on all of the oil and gas produced, with some allowances for gas lost during production. Companies pay royalties to MRM based on a percentage of the cash value of the oil and gas produced and sold. Currently, royalty rates for onshore leases are generally 12.5 percent, while rates for offshore leases range from 12.5 percent to 18.75 percent. The production of oil and gas on these federal leases involves several stages, including the initial drilling of the well; clearing out liquid and mud from the wellbore; production of oil and gas from the well; separation of oil, gas, and other liquids; transfer of oil and gas to storage tanks; and distribution to central processing facilities. 
Throughout this process, operators typically vent or flare some natural gas, often intermittently in response to maintenance needs or equipment failures. This intermittent venting may take place when operators purge water or hydrocarbon liquids that collect in well bores (liquid unloading) to maintain proper well function or when they expel liquids and mud with pressurized natural gas after drilling during the well completion process. BLM and BOEMRE permit operators of wells to release routine amounts of gas during the course of production without notifying them or incurring royalties on this gas. In addition, production equipment often emits gas to maintain proper internal pressure, or in some cases, the release of pressurized gas itself is the power source for the equipment, particularly in remote areas that are not linked to an electrical grid. This “operational” venting may include the continuous releases of gas from pneumatic devices––valves that control gas flows, levels, temperatures, and pressures in the equipment and rely on pressurized gas for operation––as well as leaks, or “fugitive” emissions. It also includes natural gas that vaporizes from oil or condensate storage tanks or during the normal operation of natural gas dehydration equipment. Until recently, the industry considered these operational losses to be small, but recent infrared camera technology has shed new light on these sources of vented gas, particularly from condensate storage tanks. According to oil and gas industry representatives, the cameras helped reveal that losses from storage tanks and fugitive emissions were much higher than they originally thought. In addition, recent calculations from EPA suggest that emissions from completions and liquid unloading make larger contributions to lost gas than previously thought. 
Operators can use a number of techniques to estimate emissions based on gas and oil characteristics and well operating conditions, such as temperature and pressure, without taking direct measurements of escaping gas. While venting and flaring of natural gas is often a necessary part of production, the lost gas has both economic and environmental implications. On federal oil and gas leases, natural gas that is vented or flared during production, instead of captured for sale, represents a loss of royalty revenue for the federal government. Venting and flaring natural gas also adds to greenhouse gases in the atmosphere. In general, flaring emits carbon dioxide (CO2), while venting releases methane, both of which the scientific community agrees are contributing to global warming. Methane is considered particularly harmful in this respect, as it is roughly 25 times more potent by weight than CO2. Other hydrocarbons and compounds in vented and flared gas can also harm air quality by increasing ground-level ozone levels and contributing to regional haze. Volatile organic compounds, present in vented gas, are contributors to elevated ozone and haze, and some of these compounds are known carcinogens, according to EPA analysis. In some areas in the western United States, the oil and gas industry is a major source of volatile organic compounds. According to EPA, in many western states, including in many rural areas where there is substantial oil and gas production and limited population, there have been increases in ozone levels, often exceeding federal air quality limits. Interior is required to conduct environmental impact assessments in advance of oil and gas leasing and generally works with state environmental and air quality agencies to ensure that oil and gas producers will comply with environmental laws such as the Clean Air Act or Clean Water Act and the related implementing regulations. 
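The methane potency figure above implies a simple CO2-equivalent comparison between venting and flaring a given volume of gas. The sketch below is illustrative only: the global warming potential of 25 (by weight) is from the text, while the per-Mcf mass factors for methane content and combustion CO2 are rough assumptions, not figures from this report.

```python
# Illustrative CO2-equivalent comparison of venting vs. flaring.
# GWP of 25 for methane (by weight) is from the text; the mass
# factors below are assumed values for the sketch only.

GWP_METHANE = 25               # methane ~25x as potent as CO2 by weight
CH4_KG_PER_MCF = 19.2          # assumed: kg of methane per Mcf of gas vented
CO2_KG_PER_MCF_FLARED = 54.6   # assumed: kg of CO2 per Mcf of gas flared

def co2e_tonnes(mcf_gas: float, vented: bool) -> float:
    """CO2-equivalent (metric tons) from venting or flaring gas."""
    if vented:
        return mcf_gas * CH4_KG_PER_MCF * GWP_METHANE / 1000.0
    return mcf_gas * CO2_KG_PER_MCF_FLARED / 1000.0

volume_mcf = 1000.0  # 1 MMcf of gas
print(round(co2e_tonnes(volume_mcf, vented=True), 1))   # venting: ~480 t CO2e
print(round(co2e_tonnes(volume_mcf, vented=False), 1))  # flaring: ~55 t CO2e
```

Under these assumed factors, venting the same volume of gas produces several times the CO2-equivalent impact of flaring it, which is why flaring is generally preferred where capture is infeasible.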
However, the state agencies may be charged with maintaining the standards established by the federal government in law and regulation, and often have primary responsibility in this regard. While much of the natural gas that is vented and flared is considered to be unavoidably lost, certain technologies and practices can be applied throughout the production process to capture some of this gas, according to the oil and gas industry and EPA. The technologies’ technical and economic feasibility varies and sometimes depends on the characteristics of the production site. For example, some technologies require a substantial amount of electricity, which may be less feasible for remote production sites that are not on the electrical grid. However, certain technologies are generally considered technically and economically feasible at particular production stages, including the following: Drilling: Using “reduced emission” completion equipment when cleaning out a well before production, which separates mud and debris to capture gas or condensate that might otherwise be vented or flared. Production: Installing a plunger lift system to facilitate liquid unloading. Plunger-lift systems drop a plunger to the bottom of the well, and when the built-up gas pressure pushes the plunger to the surface, liquids come with it. Most of the accompanying gas goes into the gas line rather than being vented. Computerized timers adjust when the plunger is dropped according to the rate at which liquid collects in the well, further decreasing venting. Storage: Installing vapor recovery units that capture gas vapor from oil or condensate storage tanks and send it into the pipeline. Dehydration: Optimizing the circulation rate of the glycol and adding a flash tank separator that reduces the amount of gas that is vented into the atmosphere. 
Pneumatic devices: Replacing pneumatic devices at all stages of production that release, or “bleed,” gas at a high rate (high-bleed pneumatics) with devices that bleed gas at a lower rate (low-bleed pneumatics). In 2004, we reported that information on the extent to which venting and flaring occurs was limited. Although BLM and BOEMRE require operators to report data on venting and flaring on a monthly basis, our 2004 report found that these data did not distinguish between gas that is vented and gas that is flared, making it difficult to accurately identify the extent to which each occurs. In implementing our recommendations for offshore operators, BOEMRE now requires operators to report venting and flaring separately and to install meters to measure this gas on larger platforms. The Energy Information Administration (EIA) also collects data from oil and gas producing states on venting and flaring, but our 2004 work found that EIA did not consider these state-reported data to be consistent and, according to discussions with EIA officials, these data have not improved. Available estimates of vented and flared natural gas on federal leases vary considerably, and we found that estimates based on data from MRM’s OGOR data system likely underestimate these volumes because they include fewer sources of emissions than other estimates, including EPA’s and WRAP’s. For onshore federal leases, operators reported to OGOR that about 0.13 percent of the natural gas produced was vented and flared, while EPA estimates showed the volume to be about 4.2 percent, and estimates based on WRAP data showed it to be as high as 5 percent. 
Similarly, for offshore federal leases, operators reported to OGOR that 0.5 percent of the natural gas produced was vented and flared, while data in BOEMRE’s GOADS system––a database that focuses on the impacts of offshore oil and gas exploration, development, and production on air quality in the Gulf of Mexico region––showed that volume to be about 1.4 percent, and estimates from EPA showed it to be about 2.3 percent. Onshore leases. Onshore leases showed the largest variation between OGOR data and others’ estimates of natural gas venting and flaring. Operators reported to MRM’s OGOR system that about 0.13 percent of the natural gas produced on onshore federal leases was vented or flared each year between 2006 and 2008. BLM uses guidance from 1980, which sets limits on the amount of natural gas that may be vented and flared on onshore leases, requires operators to report vented and flared gas to OGOR, and in some cases requires them to seek permission before releasing gas. Although the guidance states that onshore operators must report all volumes of lost gas to OGOR, it does not enumerate the sources that should be reported or specify how they should be estimated. Staff from BLM told us that the reported volumes were from intermittent events like completions, liquid unloading, or necessary releases after equipment failures; however, operators did not report operational sources such as venting from oil storage tanks, pneumatic valves, or glycol dehydrators. In general, BLM staff said that they thought that vented and flared gas did not represent a significant loss of gas on federal leases. In addition, we found a lack of consistency across BLM field offices regarding their understanding of which intermittent volumes of lost gas should be reported to OGOR. 
For example, staff from some of the offices said that they thought that intermittent vented and flared gas was not to be reported if operators had advance permission or where volumes were under BLM’s permissible limits, while others said that they thought that operators still needed to report this gas. Our discussions with operators reflected this lack of consistency from BLM field office staff. Operators we spoke with said that they generally did not report operational sources, and in some cases did not report intermittent sources as long as they were under BLM’s permissible limits for venting and flaring. In contrast, EPA’s estimate of venting and flaring was approximately 4.2 percent of gas production on onshore federal leases for the same period and consistently included both intermittent and operational sources. EPA estimated these emissions using data on average nationwide oil and gas production equipment and their associated emissions (see table 1). As noted earlier, venting from operational sources had not previously been seen as a significant contributor to lost gas. With these additional sources, EPA’s estimates are around 30 times higher than the volumes operators reported to OGOR. According to EPA’s estimates, the amount of natural gas vented and flared on onshore leases totaled around 126 billion cubic feet (Bcf) of gas in 2008. This amount is roughly equivalent to the natural gas needed to heat about 1.7 million homes during a year, according to our calculations. See figure 2 for a comparison between EPA’s estimated gas emissions and the volumes reported to OGOR as a percentage of gas production on federal onshore leases. Similarly, analysis of WRAP data for five production basins in the mountain west in 2006 indicated as much as 5 percent of the total natural gas produced on federal leases was vented and flared. 
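The scale of the gap between reported and estimated onshore volumes can be checked with back-of-the-envelope arithmetic. In the sketch below, the reported shares and the 126 Bcf EPA estimate come from the figures above; the per-home annual heating consumption is an assumed value, chosen to be roughly consistent with typical residential averages.

```python
# Back-of-the-envelope check of the onshore venting/flaring figures.
# Shares of production and the 126 Bcf estimate are from the text;
# the per-home heating figure is an assumption for illustration.

epa_estimate_bcf = 126.0            # EPA estimate, onshore federal leases, 2008
epa_share_of_production = 0.042     # ~4.2% of production (EPA estimate)
ogor_share_of_production = 0.0013   # ~0.13% of production (reported to OGOR)

# Ratio between EPA's estimated share and the OGOR-reported share
ratio = epa_share_of_production / ogor_share_of_production
print(round(ratio))  # ~32, i.e., "around 30 times higher"

# Home-heating equivalence
mcf_per_home_per_year = 74.0        # assumed annual heating use per home, Mcf
homes = epa_estimate_bcf * 1e6 / mcf_per_home_per_year  # Bcf -> Mcf
print(round(homes / 1e6, 1))        # ~1.7 million homes
```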
WRAP based its estimates, in part, on a survey of the types of equipment operators were using, and provided a detailed list of sources to be reported. WRAP’s data included similar sources as EPA’s data, as well as estimates of emissions from fugitive sources like leaking seals and valves. Although estimates based on WRAP data varied from basin to basin—between 0.3 and 5 percent—they were consistently much higher than the volumes operators reported to OGOR. The average vented and flared gas as a percentage of production was 2.2 percent across the five basins. See table 2 for a list of the key sources in one of the five basins. Figure 3 compares estimates based on WRAP data with the volumes operators reported to OGOR for 2006. For the Uinta basin, the WRAP estimate was about 20 times higher than the volumes reported to OGOR, and for two other basins (Denver-Julesburg and N. San Juan) no volumes of vented and flared gas were reported to OGOR. Offshore leases. Offshore leases showed less variation between OGOR data and others’ estimates of natural gas venting and flaring than onshore leases, but the volumes that operators reported to MRM’s OGOR were still much lower than the volumes they reported to BOEMRE’s GOADS system and estimates from EPA. Operators reported to OGOR that between 0.3 and 0.5 percent of the natural gas produced on offshore leases was vented and flared each year from 2006 to 2008; however, they reported to GOADS that they vented and flared about 1.4 percent—about 32 Bcf––of the natural gas produced on federal leases in the Gulf of Mexico in 2008. Although regulations require offshore operators to report all sources of lost gas to OGOR, BOEMRE officials said that this did not include fugitive emissions. 
Furthermore, these officials said that operators likely reported volumes from some operational sources as “lease-use” gas instead of including them in the venting and flaring data, thus contributing to the differences between OGOR and GOADS. GOADS data included sources similar to those included in EPA’s and WRAP’s data for onshore production, including the same operational sources. Further, guidance to operators for reporting to GOADS explicitly outlines the sources to be reported and how they should be estimated, while guidance for OGOR does not. Table 3 outlines the emission sources for volumes operators reported to the GOADS system for 2008. In addition, EPA’s offshore estimates showed that around 2.3 percent of gas produced on offshore federal leases––as much as 50 Bcf––was vented and flared every year from 2006 to 2008. According to our analysis of EPA’s work, additional venting from natural gas compressors, used to maintain proper pressure in production equipment, accounted for the majority of the difference between the offshore EPA and GOADS volumes. On several occasions BOEMRE has made comparisons between data on vented and flared volumes in the OGOR and GOADS systems, according to BOEMRE officials. In 2004, BOEMRE compared data from the 2000 GOADS study with data from OGOR for a subset of offshore leases and found reported vented and flared volumes were not always in agreement, attributing this difference to different operator interpretations of GOADS and OGOR reporting requirements. BOEMRE officials said they revised reporting procedures for the 2005 GOADS study. More recently, BOEMRE made similar comparisons between data from the 2008 GOADS study and OGOR data for a subset of leases and found they were in closer agreement. BOEMRE officials told us they will continue to make such comparisons to try to ensure the accuracy of the data in each system. 
In reporting volumes of vented and flared gas to both systems, operators can choose from a broad array of software packages, models, and equations to estimate emissions, and these techniques can yield widely varied results. For example, one study found that various estimation techniques to determine emissions from oil storage tanks either consistently underestimated or overestimated vented volumes. OGOR reporting instructions for both onshore and offshore operators, as noted, do not specify how operators should estimate these volumes. As part of our review, we analyzed 2008 OGOR and GOADS data for the Gulf of Mexico and found that the OGOR data likely underestimated the volumes of vented and flared natural gas on federal offshore leases. To do this analysis, we compared 2008 data from GOADS’s vent and flare source categories with OGOR data for the same categories—looking at these source categories allowed us to directly compare the two data systems. In doing this analysis, we accounted for OGOR’s exclusion of fugitive emissions and the reporting of sources, like pneumatic valves, as lease-use gas. Our analysis found that the volumes operators reported to OGOR–– about 12 Bcf––were much lower than the volumes operators reported to GOADS—about 18 Bcf. Neither we nor MRM and BOEMRE officials could account for or explain these differences in the two data systems. BOEMRE officials said that they are still working to improve reporting to OGOR and GOADS and expect these two data systems to converge in the future. To improve reported data, BOEMRE recently released a final rule, in response to the recommendations in our 2004 report, that requires operators on larger offshore platforms to route vented and flared gas from a variety of sources through a meter to allow for more accurate measurement, among other things. BOEMRE officials said that these meters would help to improve the accuracy of data reported to both OGOR and GOADS. 
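The kind of per-source reconciliation described above can be sketched in a few lines. Only the rounded 12 Bcf (OGOR) and 18 Bcf (GOADS) totals come from the analysis above; the per-source breakdown in the sketch is hypothetical, for illustration of the comparison approach rather than actual GOADS data.

```python
# Hypothetical reconciliation between two reporting systems, in the
# spirit of the 2008 OGOR/GOADS comparison described in the text.
# Only the 12 vs. 18 Bcf totals are from the text; the per-source
# breakdown is illustrative.

goads_by_source = {            # Bcf, illustrative breakdown
    "flaring": 10.0,
    "cold venting": 5.0,
    "pneumatic devices": 3.0,
}
ogor_total_bcf = 12.0          # reported to OGOR for the same categories

goads_total_bcf = sum(goads_by_source.values())
shortfall_bcf = goads_total_bcf - ogor_total_bcf
print(goads_total_bcf)   # 18.0
print(shortfall_bcf)     # 6.0 Bcf not accounted for in OGOR reporting
```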
However, BOEMRE officials said they have had to address questions from some operators who were not sure which sources of vented gas should be routed through the newly required meters. In this regard, these officials said it may be useful to enumerate the required emission sources for reporting to OGOR in future guidance to offshore operators. They also noted that BOEMRE is planning a workshop in October 2011 to stress to operators the need for accurate reporting on their submissions to both the GOADS and OGOR systems. In a similar way, EPA has taken action to improve the reporting of emissions from the oil and natural gas industry. EPA recently proposed a greenhouse gas reporting rule that would require oil and gas producers emitting over 25,000 metric tons of carbon dioxide equivalent to submit detailed data on vented and flared gas volumes to allow EPA to better understand the contribution of venting and flaring to national greenhouse gas emissions. For onshore leases, the proposed EPA rule provides details on the specific sources of vented and flared gas to be measured and proposes standardized methods for estimating volumes of greenhouse gas emissions where direct measurements are not possible. For offshore leases, operators would use the GOADS system to report venting and flaring. Data collection would begin in 2011 if the rule is finalized in 2010.

Data from EPA, supported by information obtained from technology vendors and our analysis of WRAP data, suggest that about 40 percent of natural gas estimated to be vented and flared on federal onshore leases could be economically captured with currently available control technologies, although some barriers to their increased use exist. Such captures could increase federal royalty payments and reduce greenhouse gas emissions. Available technologies could reduce venting and flaring at many stages of the production process. However, there are some barriers to implementing these technologies. 
EPA analysis and our analysis of WRAP data identified opportunities for expanded use of technologies to reduce venting and flaring. Specifically, EPA’s 2008 analysis, the most recent data available, indicates that the increased use of available technologies, including technologies that capture emissions from sources such as well completions, liquid unloading, or venting from pneumatic devices, could have captured about 40 percent––around 50 Bcf––of the natural gas EPA estimated was lost from onshore federal leases nationwide. EPA’s analysis also found significant opportunities to add “smart” automation to existing plunger lifts, which tunes plunger lifts to maximum efficiency and, in turn, minimizes the amount of gas lost to venting. EPA estimated that using this technology where economically feasible could have resulted in the capture of more than 7 Bcf of vented and flared natural gas on federal leases in 2008––around 6 percent of the total volume estimated by EPA to be vented and flared on onshore federal leases. Similarly, EPA estimated that additional wells on onshore federal leases could have incorporated reduced emission completion technologies in 2008, which could have captured an additional 14.7 Bcf of vented and flared natural gas. Table 4 outlines EPA’s estimates of potential reductions in venting and flaring on onshore federal leases.

Reductions in natural gas lost to venting and flaring from federal leases would increase the volume of natural gas produced and sold, thereby potentially increasing federal royalty payments. If, for instance, a total of 126 Bcf of natural gas was lost to venting and flaring on onshore federal leases in 2008, as EPA has estimated, that loss would equal approximately $58 million in federal royalty payments. 
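The royalty arithmetic can be reconstructed with a short sketch. Both inputs are our assumptions: the 12.5 percent rate is the standard onshore federal royalty rate, and the roughly $3.68-per-Mcf gas price is simply the value implied by the $58 million figure, not a price the report states.

```python
# Rough reconstruction of the forgone-royalty estimate in the text.
# Assumptions (not stated in the report): standard 12.5% onshore federal
# royalty rate and an illustrative 2008 gas price of ~$3.68 per Mcf.
LOST_GAS_BCF = 126      # EPA estimate of gas vented/flared onshore, 2008
PRICE_PER_MCF = 3.68    # illustrative price, dollars per thousand cubic feet
ROYALTY_RATE = 0.125    # standard onshore federal royalty rate

mcf_lost = LOST_GAS_BCF * 1_000_000             # 1 Bcf = 1,000,000 Mcf
royalty_forgone = mcf_lost * PRICE_PER_MCF * ROYALTY_RATE
print(f"Forgone royalties: ${royalty_forgone / 1e6:.0f} million")
# prints "Forgone royalties: $58 million"
```

Capturing 40 percent of the lost gas under the same assumptions yields roughly $23 million, matching the figure discussed next in the text.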
If, as EPA estimates, 40 percent of this lost gas could have been economically captured and sold, federal royalty payments could increase by approximately $23 million annually, which represents about 1.8 percent of annual federal royalty payments on natural gas. Reducing natural gas lost to venting and flaring from federal leases could also reduce greenhouse gas emissions to the atmosphere, according to our calculations. Because methane is about 25 times more potent a greenhouse gas than carbon dioxide over a 100-year period, and almost 72 times more potent over a 20-year period, according to the Intergovernmental Panel on Climate Change, reducing direct venting of natural gas to the atmosphere has a significantly greater positive effect, in terms of global warming potential, than does reducing flaring. Again using EPA’s estimates, if a total of 98 Bcf of natural gas was vented and 28 Bcf was flared annually, those releases would account for about 41 million metric tons of carbon dioxide equivalent released to the atmosphere, which would be roughly equivalent to the emissions of almost 8 million passenger vehicles or about 10 average-sized coal-fired power plants. Capturing 40 percent of this volume would result in emissions reductions of about 50 Bcf, which is equivalent to the emissions of 3.1 million passenger vehicles or about 4 average-sized coal-fired power plants, according to our analysis. Some EPA officials also told us they believed that federal efforts to reduce venting and flaring could have a spillover effect––that is, they could lead operators to use these technologies on state and private leases as well. Data from EPA and WRAP included vented and flared gas from nonfederal leases, and these data showed similar percentages of gas being lost, suggesting that the potential greenhouse gas reductions from the expanded use of these technologies could go well beyond those from federal oil and gas production. 
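The carbon dioxide equivalent arithmetic can be approximated with a back-of-the-envelope sketch. The conversion factors here (78.8 percent methane content, roughly 19,230 metric tons of methane per Bcf at standard conditions, and flared methane fully combusted to CO2) are our assumptions; the roughly 41 million metric ton figure in the text rests on EPA's more detailed emission factors, which also count CO2 from non-methane hydrocarbons, so this simplified version lands somewhat lower.

```python
# Simplified CO2-equivalent estimate for 2008 onshore venting and flaring.
VENTED_BCF = 98
FLARED_BCF = 28
METHANE_FRACTION = 0.788    # assumed methane content of the gas
T_CH4_PER_BCF = 19_230      # metric tons of methane per Bcf (standard conditions)
GWP_CH4_100YR = 25          # IPCC 100-year global warming potential of methane

# Vented gas: methane escapes directly, so its mass is multiplied by the GWP.
vented_ch4_t = VENTED_BCF * METHANE_FRACTION * T_CH4_PER_BCF
vented_co2e_t = vented_ch4_t * GWP_CH4_100YR

# Flared gas: methane is burned, so only the resulting CO2 mass counts
# (44/16 is the molar-mass ratio of CO2 to CH4).
flared_ch4_t = FLARED_BCF * METHANE_FRACTION * T_CH4_PER_BCF
flared_co2_t = flared_ch4_t * (44 / 16)

total_co2e_t = vented_co2e_t + flared_co2_t
print(f"Total: ~{total_co2e_t / 1e6:.0f} million metric tons CO2e")
# prints "Total: ~38 million metric tons CO2e" (vs. the report's ~41 million)
```

The sketch also makes the venting-versus-flaring point concrete: the vented portion dominates the total by more than an order of magnitude.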
We did not find complete quantitative data on reduction opportunities offshore from Interior, EPA, or others that could be used to fully identify the potential to reduce emissions offshore. However, EPA officials told us that opportunities for reducing emissions from venting and flaring from offshore production platforms likely exist. For instance, EPA found that various production components, including valves and compressor seals, contribute significant volumes of fugitive emissions, but that these emissions could be mitigated through equipment repair or retrofitting. One estimate, based on EPA analysis of 15 offshore platforms in 2008, suggests that most of the gas lost through compressor seals could be recovered economically—saving about 70 percent of the overall gas EPA estimated to be lost on those platforms. However, EPA’s analysis warns that some mitigation strategies may be less cost-effective in the offshore environment because capital costs and installation costs tend to be higher.

Interior is responsible for ensuring that operators minimize natural gas venting and flaring on federal onshore and offshore leases; however, while both BLM and BOEMRE have taken steps to minimize venting and flaring on federal leases, their oversight of such leases has several limitations. Although EPA does not have a direct regulatory role with respect to managing federal oil and gas leases, its Natural Gas STAR program has helped to reduce vented gas on federal leases, according to EPA and industry participants. As part of their oversight responsibilities, Interior’s BLM and BOEMRE are charged with minimizing the waste of federal resources, and, to that end, both agencies have issued regulations and guidance that limit venting and flaring of gas during routine procedures such as liquid unloading and well completions. 
However, their oversight has several limitations, namely (1) the regulations and guidance do not address new capture technologies or all sources of lost gas; (2) the agencies do not assess options for reducing venting and flaring in advance of oil and gas production for purposes other than addressing air quality; and (3) the agencies have not developed or do not use information regarding available technologies that could reduce venting and flaring.

Onshore leases. BLM’s guidance limits venting and flaring from routine procedures and requires operators to request permission to vent and flare gas above these limits. If operators request permission to exceed these limits, BLM is to assess the economic and technical viability of capturing additional gas and require its capture when warranted. Although BLM guidance sets limits on venting and flaring of natural gas and allows flexibility to exceed them in certain cases, it does not address newer technologies or all sources of lost gas. Specifically, BLM guidance is 30 years old and therefore does not address venting and flaring reduction technologies that have advanced since it was issued. For example, since the guidance was written, technologies have been developed to economically reduce emissions from well completions and liquid unloading—namely, the use of reduced emission completion and automated plunger lift technologies, respectively. These two sources of emissions were important contributors to the vented and flared volumes discussed earlier. Despite this, BLM’s current guidance does not cover the use of such technologies where it is economic to do so. In general, BLM officials said that they thought the industry would use venting and flaring reduction technologies if they made economic sense. 
Similarly, new lower-emission devices could also reduce venting and flaring from other sources of emissions that are not covered by BLM’s guidance, such as pneumatic valves or gas dehydrators––two sources that contribute to significant lost gas. In discussions with BLM staff about their guidance, staff acknowledged that existing guidance was outdated given current technologies and said that they were planning to update it by the second quarter of 2012.

Offshore leases. Like BLM, BOEMRE has regulations that limit the allowable volumes of vented and flared gas from offshore leases to minimize losses of gas from routine operations. Operators can also apply for permission to exceed these limits and, like BLM, BOEMRE would evaluate the economic and technical viability of capturing additional gas. Further, BOEMRE inspects offshore platform facilities each year and, as part of these inspections, reviews on-site daily natural gas venting records. BOEMRE officials told us that the agency requires operators to keep these venting records and that it uses them to, among other things, identify any economically viable opportunities for an operator to install control equipment. Overall, BOEMRE officials said that operators were required to install venting and flaring reduction equipment where economic, even if they would make as little as $1 in net profit from the captured gas. According to agency officials, due to the type of production and operations offshore, reduction opportunities mostly consist of installing vapor recovery units, and these officials said that they generally believe that companies have installed such equipment where it is economic to do so. Although BOEMRE conducts regular inspections, the daily venting records do not include all sources of vented gas. 
For example, emission estimates from sources of gas such as pneumatic valves and glycol dehydrators are not included, and therefore inspectors are not able to make assessments of the potential to reduce emissions from these sources. Both of these sources were contributors to lost gas offshore in the 2008 GOADS study, suggesting potential reduction opportunities. BOEMRE officials said that the agency considers these sources lease-use gas and, as a result, believed that they could not legally consider the economic and technical viability of this gas and require its capture when warranted. However, based on our review of BOEMRE regulations and authorizing legislation, it appears that BOEMRE has the authority to require operators to minimize the loss of this gas, including requiring its capture where appropriate. BOEMRE officials agreed with our assessment.

Onshore leases. While BLM regulations authorize and direct BLM officials to offer technical advice and issue orders for specific lease operations to minimize waste, BLM does not explicitly assess options to minimize waste from vented and flared gas before production. For example, we identified two phases in advance of production where BLM could assess venting and flaring reduction options—during the environmental review phase and when the operator applies to drill a new well. However, the agency does not explicitly assess these options, or discuss them with operators, during either phase. For example, during the environmental review phase, BLM works with states to assess emissions from oil and gas production, and that air quality assessment may include venting and flaring reduction requirements. According to BLM officials, since states generally have primary responsibility to implement and enforce air quality standards, the standards drive these requirements, and states focus only on the role venting and flaring plays in air pollution, rather than the minimization of waste. 
Therefore, in production basins where air quality standards are being met, or where only minimal use of technology is required to meet them, BLM would not assess venting and flaring reduction technologies to the full extent that they could economically reduce vented and flared gas. One official noted that some BLM officials felt constrained in their ability to consider the use of venting and flaring reduction technologies because of this focus on air quality. Similarly, during the phase when operators apply to drill new wells, BLM assesses detailed technical and environmental aspects of the project, but BLM officials told us their assessment does not include a review of options to reduce venting and flaring.

Offshore leases. Similar to BLM, BOEMRE assesses venting and flaring reduction options in advance of production to determine whether vented and flared gas from offshore platforms would harm coastal air quality, but again, the focus is on meeting air quality standards rather than assessing whether gas can be economically captured. Therefore, when BOEMRE does not anticipate harm to coastal air quality, as is often the case according to officials, the agency does not further consider venting and flaring reduction options at this phase. Further, while the application operators submit in advance of drilling must include a description of the technologies and recovery practices that the operator will use during production, venting and flaring reduction options are not included in that submission.

Onshore leases. We found that BLM does not maintain a database regarding the extent to which available venting and flaring reduction technologies are used on federal oil and gas leases. As such, it could be difficult for BLM to identify opportunities to reduce venting and flaring or estimate the potential to increase the capture of gas that is currently vented or flared. 
For example, while BLM guidance provides that natural gas vapors from storage tanks must be captured if BLM determines recovery is warranted, BLM does not collect data on the use of control technologies, and available OGOR data do not contain the volumes of lost gas from storage tanks. Thus, BLM may be overlooking circumstances where recovery could be warranted. In addition, according to BLM officials we spoke with, although infrared cameras can be used to identify sources of lost gas, BLM has not used them during inspections of production facilities. Although relatively expensive, infrared cameras allow users to rapidly scan and detect vented gas or leaks across wide production areas. BLM officials cited budgetary constraints and the challenges of developing a policy and protocols as reasons why the agency has not used the cameras regularly.

Offshore leases. Although the GOADS data system contains some information on the types of equipment operators use, BOEMRE has not analyzed this information to identify emission-reduction opportunities, according to officials. GOADS contains information about the use of equipment such as vapor recovery systems. These data have not been used by BOEMRE to identify venting and flaring reduction opportunities because the agency has not considered using these data for purposes other than addressing air quality, according to a BOEMRE official. Nonetheless, based on our review of the GOADS data system, by not analyzing such data, BOEMRE is not able to identify emission-reduction opportunities. As a case in point, we found that emissions from pneumatic valves in the 2008 GOADS study made noticeable contributions to overall lost gas, which might suggest the potential to expand the use of low-bleed pneumatics in some cases. BOEMRE officials also noted that, unlike BLM, its inspectors had used infrared cameras to look for obvious sources of vented and flared gas in a few sample locations close to shore. 
In this regard, they said expanded use of infrared cameras could be useful to help enforce their new rule that requires the use of meters for vented and flared gas. Specifically, they said that the cameras could identify sources of gas that operators may have not routed through the meter as required. They also noted that expanded use of the cameras could help to identify and potentially reduce fugitive gas emissions that currently go undetected. Although Interior has the primary role in federal oil and gas leasing, EPA’s Natural Gas STAR program has encouraged some operators to adopt technologies and practices that have helped to reduce methane emissions from the venting of natural gas, according to EPA and industry participants. Through this program, industry partners evaluate their emissions and consider ways to reduce them, although the reductions are voluntary. The program also maintains an online library of technologies and practices to reduce emissions that quantify the costs and benefits of each emission-reduction option. Natural Gas STAR also sponsors conferences to facilitate information exchange between operators regarding emissions reductions technologies. Partner companies report annually about their efforts to reduce emissions along with the volumes of the emission reductions. According to the Natural Gas STAR Web site, domestic oil and gas industry partners reported more than 114 Bcf of methane emission reductions in 2008, which amounts to about 0.4 percent of the total natural gas produced that year. However, one industry representative said that, while large and midsize operators were aware of the Natural Gas STAR program, smaller operators were not aware and, even if some smaller operators were aware of the program, they may not have the environmental staff to implement the technologies and practices. 
Despite the potential usefulness of information from the Natural Gas STAR program to oil and gas producers on federal leases, some of the BLM officials that we spoke with were unfamiliar with Natural Gas STAR.

Fulfilling its responsibility to ensure that the country’s oil and natural gas assets are developed reasonably and result in fair compensation for the American people requires Interior to have accurate and complete information on all aspects of oil and natural gas leases. Interior has collected some information on vented and flared gas through MRM’s OGOR system, but without a full understanding of these losses Interior cannot fully account for the disposition of taxpayer resources or identify opportunities to prevent undue waste. MRM’s OGOR data system, the primary source of data that BLM uses to measure overall vented and flared gas onshore, does not provide information on all sources of lost gas. Therefore, OGOR data present an incomplete picture of venting and flaring onshore, leading BLM officials to believe that vented and flared gas volumes do not represent a significant loss of gas on federal leases. Similarly, data in BOEMRE’s GOADS data system differ considerably from data in OGOR and have not been reconciled—raising questions about the accuracy of offshore data sources. Regarding Interior’s oversight of operators venting and flaring gas, because current guidance and regulations from BLM and BOEMRE do not require the minimization of all sources of vented and flared gas––although legislation exists authorizing them to require that waste on federal leases be minimized––operators may be venting and flaring more gas than should otherwise be allowed. In fact, we found that operators are not using available technologies in all cases to economically reduce vented and flared gas. 
BLM guidance has not kept pace with the development of economically viable capture technologies for a number of sources of lost gas, and BOEMRE has been reluctant to consider the economic and technical viability of minimizing the waste of “lease-use” gas because officials had believed they were legally constrained from doing so. In addition to the limitations of these regulations, BLM and BOEMRE have not used their authority in two situations where they could potentially further reduce venting and flaring. First, neither agency has used its authority to minimize waste beyond relevant air quality standards by assessing the use of venting and flaring reduction technologies before production. Second, because BLM lacks data about the use of venting and flaring technologies for onshore leases and BOEMRE does not analyze its existing information for offshore leases in its GOADS data system, these agencies are not fully aware of potential opportunities to use available technologies. Further, neither agency takes full advantage of newer infrared camera technology that can help to identify sources of lost gas— as BOEMRE officials have acknowledged, this technology could help reveal additional sources of lost gas. Ultimately, a sharper focus by BOEMRE and BLM on the nature and extent of venting and flaring on federal leases could have multiple benefits. Specifically, increased implementation of available venting and flaring reduction technologies, to the extent possible, could increase sales volumes and revenues for operators, increase royalty payments to the federal government, and decrease emissions of greenhouse gases. In addition, our analysis of WRAP and EPA data showed as much or more vented and flared gas on nonfederal leases, and we share the observation with EPA officials that a spillover effect may occur, whereby oil and gas producers, seeing successes on their federal leases, take similar steps on state and private leases. 
To ensure that Interior has a complete picture of venting and flaring on federal leases and takes steps to reduce this lost gas where economic to do so, we are making five recommendations to the Secretary of the Interior. To ensure that Interior’s data are complete and accurate, we recommend that the Secretary of the Interior direct BLM and BOEMRE to take the following action: Take additional steps to ensure that each agency has a complete and accurate picture of vented and flared gas, for both onshore and offshore leases, by (1) BLM developing more complete data on lost gas by taking into consideration additional large onshore sources and ways to estimate them not currently addressed in regulations—sources that EPA’s newly proposed greenhouse gas reporting rule addresses—and (2) BOEMRE reconciling differences in reported offshore venting and flaring volumes in OGOR and GOADS data systems and making adjustments to ensure the accuracy of these systems. To help reduce venting and flaring of gas by addressing limitations in their regulations, we recommend that the Secretary of the Interior direct BLM and BOEMRE to take the following four actions: BLM should revise its guidance to operators to make it clear that technologies should be used where they can economically capture sources of vented and flared gas, including gas from liquid unloading, well completions, pneumatic valves, and glycol dehydrators. 
BOEMRE should consider extending its requirement that gas be captured where economical to “lease-use” sources of gas; BLM and BOEMRE should assess the potential use of venting and flaring reduction technologies to minimize the waste of natural gas in advance of production where applicable, and not solely for purposes of air quality; BLM and BOEMRE should consider the expanded use of infrared cameras, where economical, to improve reporting of emission sources and to identify opportunities to minimize lost gas; and BLM should collect information on the extent that larger operators use venting and flaring reduction technology and periodically review this information to identify potential opportunities for oil and gas operators to reduce their emissions, and BOEMRE should use existing information in its GOADS data system for this same purpose, to the extent possible. We provided a copy of our draft report to Interior and EPA for review and comment. Interior provided written comments that concurred with four of the five recommendations and partly concurred with the remaining recommendation. Its comments are reproduced in appendix II and key areas are discussed below. EPA did not provide formal comments on the report, but the agency’s Office of Air and Radiation provided written comments to GAO staff, which we summarize and discuss below. Interior and EPA also provided other clarifying or technical comments, which we incorporated as appropriate. Interior’s comments reflected the views of BLM and BOEMRE. BLM concurred with all five recommendations and noted that it plans to incorporate recommended actions into its new Onshore Order in order to improve the completeness and accuracy of its data and help address limitations in its current regulations. BOEMRE concurred with four of the recommendations and partly concurred with our second recommendation that they consider enforcing the economical capture of “lease-use” gas. 
It stated that we misapprehended the scope of the regulations governing “lease-use” sources of gas in that BOEMRE does not have current regulations to require the capture of “lease-use” gas. In response to this comment, we reworded our recommendation to clarify that BOEMRE should consider extending its existing requirements for the economical capture of gas to “lease-use” gas. In a related point, BOEMRE also noted that we were unable to quantify the potential volumes of additional gas that could be captured by holding operators to this same economic standard for “lease-use” gas. While current data have limitations, BOEMRE’s GOADS data suggest potential opportunities to capture additional gas from lease-use sources, namely glycol dehydrators and pneumatic devices. As such, we support BOEMRE’s efforts to further evaluate this issue and take action through new guidance or regulations, as it believes appropriate.

EPA’s Office of Air and Radiation commented on three areas of the report: First, EPA emphasized the significant air quality impacts from the volatile organic compounds (VOC) associated with vented gas and provided us with estimates of the potential volumes of these emissions. While we recognize that the impacts of VOC emissions on air quality are important, these impacts were largely beyond the scope of our work. Nonetheless, we incorporated an estimate of these VOC emissions into supporting notes to table 1 that reflected EPA’s estimates of vented and flared gas. We also added additional information to the background regarding VOC emissions. Second, EPA suggested that we recommend to BLM and BOEMRE that they require the use of the best available venting and flaring control measures during leasing or drilling permitting. 
We continue to believe that BLM and BOEMRE should require the use of these technologies where economical; we recognize that requiring such controls when the economics of capturing gas are unfavorable would go beyond what current EPA greenhouse gas regulations require. Third, EPA provided us with its revised emission estimates for vented and flared gas based on updated analysis for its proposed rule on the reporting of greenhouse gases by industry. It also provided us with revised estimates for the use of additional control technologies to reduce the emissions of vented and flared gas. In both cases, we incorporated these revised estimates in our report where applicable. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Interior, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Our objectives were to (1) examine available estimates of vented and flared natural gas on federal leases; (2) estimate the potential to capture additional vented and flared natural gas with available technologies and the associated potential increases in royalty payments and reductions in greenhouse gas emissions; and (3) assess the federal role in reducing venting and flaring of natural gas. 
To examine available estimates of vented and flared natural gas on federal leases, we collected data from the Department of the Interior’s (Interior) Bureau of Land Management (BLM), Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE), including BOEMRE’s Minerals Revenue Management (MRM) program; the Environmental Protection Agency (EPA); and the Western Regional Air Partnership (WRAP). We also interviewed staff from these agencies and oil and gas producers operating on federal leases regarding venting and flaring data collection, analysis, and reporting. We obtained data from four key sources: MRM’s Oil and Gas Operations Report (OGOR) database, BOEMRE’s Gulfwide Offshore Activity Data System (GOADS), EPA’s Natural Gas STAR Program, and WRAP’s analysis of air emissions for a number of western states. We assessed the quality of the data from each of these sources and determined that these data were sufficiently reliable for the purposes of our report. MRM provided OGOR data on vented and flared volumes and production for both onshore and offshore federal leases for calendar years 2006 to 2008. MRM uses the OGOR data, in part, to ensure accurate federal royalty payments. The OGOR data are operator-reported, and reported venting and flaring volumes are a mix of empirical measurements and estimates from operators. MRM was unable to provide complete estimates of vented and flared gas on all federal leases because a portion of federal leases are managed as part of lease agreements—collections of leases that draw from the same oil or gas reservoir, which may include federal and nonfederal leases. MRM was unable to determine the share of reported vented and flared gas from the federal portion of those lease agreements; it reported venting and flaring from (1) lease agreements that included only federal leases and (2) all lease agreements, which included some nonfederal leases. 
In this report, we discuss the vented and flared volumes from the agreements that contain only federal leases. As a result, we report vented and flared gas volumes from the OGOR data as a percentage of total production on these leases, rather than as absolute volumes, in order to compare the OGOR estimates to estimates from other data sources. A second source of venting and flaring data was BOEMRE’s 2008 GOADS data, which contained estimates of gas lost to venting and flaring on federal leases in the Gulf of Mexico—which accounted for 98 percent of federal offshore gas production in 2008. BOEMRE collects GOADS data every 3 years and uses these data to estimate the impacts of offshore oil and gas exploration, development, and production on onshore air quality in the Gulf of Mexico region. BOEMRE also uses GOADS as part of an impact analysis required by the National Environmental Policy Act. GOADS data capture specific information on a variety of sources of air pollutants and greenhouse gases resulting from offshore oil production. BOEMRE provided us with actual volumes of natural gas released from the vented and flared source categories. For the other sources, we used the emissions that were reported in GOADS in tons of methane per year, and we converted these to volumes of methane and then to natural gas, assuming a 78.8 percent methane content for natural gas. In the GOADS study, fugitive emissions are estimated by looking at the number of valves and other components on a given production platform and then assuming an average leak rate. BOEMRE’s data contractor performs a series of quality checks on the data after collection. 
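The unit conversion described above for the GOADS data (operator-reported tons of methane per year, converted to a volume of methane and then to a volume of natural gas at 78.8 percent methane content) can be sketched as follows. The methane density of roughly 19.23 grams per standard cubic foot is our assumption based on standard conditions; the report does not state the exact conversion factor.

```python
# Sketch of the GOADS conversion: metric tons of methane per year
# -> volume of methane -> volume of whole natural gas.
G_CH4_PER_SCF = 19.23       # assumed grams of methane per standard cubic foot
METHANE_FRACTION = 0.788    # assumed methane content of natural gas

def tons_ch4_to_bcf_gas(tons_ch4: float) -> float:
    """Convert metric tons of methane to Bcf of natural gas."""
    scf_ch4 = tons_ch4 * 1_000_000 / G_CH4_PER_SCF   # tons -> grams -> scf of CH4
    scf_gas = scf_ch4 / METHANE_FRACTION             # scale up to whole-gas volume
    return scf_gas / 1e9                             # scf -> Bcf

# Example: 10,000 tons of reported methane is roughly 0.66 Bcf of natural gas.
print(f"{tons_ch4_to_bcf_gas(10_000):.2f} Bcf")  # prints "0.66 Bcf"
```

Dividing by the methane fraction in the second step reflects that the reported mass covers only the methane component, so the corresponding whole-gas volume is larger.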
A third source of data on vented and flared volumes was a nationwide analysis performed by officials from EPA’s Natural Gas STAR program, a national, voluntary program that encourages oil and gas companies, through outreach and education, to adopt cost-effective technologies and practices that improve operational efficiencies and reduce methane emissions. EPA’s nationwide venting and flaring volumes were based on publicly available empirical data on national oil and gas production for 2006, 2007, and 2008, combined with knowledge of current industry practices, including usage rates and effectiveness of venting and flaring reduction technologies. For example, EPA used data on the number of well completions per year and data on the average venting per completion to estimate a yearly nationwide total from that source, with similar approaches used for estimating total venting and flaring from other key sources. EPA adjusted its estimates to account for the industry’s efforts to control some venting and flaring emissions. EPA’s analysis was limited in some ways, however. For instance, lacking empirical data on actual nationwide rates of use of certain control technologies, EPA based its analysis on anecdotal information in some cases. To compare these data with the OGOR data, we scaled EPA’s national estimates to federal leases based on the proportion of natural gas production on federal leases over total U.S. natural gas production, using data from MRM and the Department of Energy’s Energy Information Administration (EIA). EPA officials also developed estimates of offshore venting and flaring based on BOEMRE’s 2005 GOADS data. EPA officials adjusted volumes reported to GOADS based on publicly available information on current industry practices, including usage rates and effectiveness of venting and flaring reduction technologies. EPA’s initial estimates of venting and flaring were for the methane component of natural gas.
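The scaling step described above is a simple proportional apportionment; the function below sketches it with made-up illustrative numbers, not the report's actual production figures.

```python
def scale_to_federal(national_volume_bcf: float,
                     federal_production_bcf: float,
                     total_us_production_bcf: float) -> float:
    """Apportion a national venting/flaring estimate to federal leases in
    proportion to the federal share of total U.S. natural gas production."""
    federal_share = federal_production_bcf / total_us_production_bcf
    return national_volume_bcf * federal_share

# Illustrative numbers only: a 250 Bcf national estimate, with federal
# leases producing 3,000 of 20,000 Bcf nationally, implies ~37.5 Bcf
# attributable to federal leases.
federal_estimate = scale_to_federal(250.0, 3_000.0, 20_000.0)
```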
These volumes were converted to reflect overall natural gas emissions by assuming, for most sources, an average 78.8 percent methane content for the gas. A fourth source was WRAP’s analysis of several western oil and gas basins, performed by its contractor, Environ, which was based on empirical data from operators in these basins, including drilling and production volume data, as well as data from a survey of operators. This survey asked operators to report actual vented and flared volumes, as well as to provide information on other aspects of their operations, including the emission control technologies they had in place. Similar to the EPA venting and flaring analysis, however, Environ did not have complete data from all operators in each basin and thus estimated some information based on survey data from a subset of operators. In addition, the original WRAP data did not distinguish between federal and nonfederal oil and gas operations, so we provided federal well numbers to Environ so that they could identify the federal lease component of vented and flared gas. To estimate the magnitude of potential increases in royalty payments and reductions in greenhouse gas emissions resulting from capturing additional vented and flared gas with available technologies, we asked EPA to provide estimates of the onshore expansion potential of a number of key technologies and the associated venting and flaring volume reductions. For simplicity, EPA developed these estimates by focusing on the expansion potential of a subset of technologies considered to provide the largest emission reductions. These estimates may be conservative, however, because they did not incorporate reductions from a number of other potential venting and flaring opportunities catalogued by the Natural Gas STAR program.
These estimates were not based entirely on comprehensive usage data collected from the oil and gas industry, but were based, in part, on publicly available evidence collected through EPA’s years of experience with the oil and gas industry. In addition, circumstances are constantly changing, and more technological innovations are potentially being used as time goes on, so there is some uncertainty in how much lost gas can be captured. We also compared venting and flaring volumes and the types of emission-reduction technologies used in each of the basins from the WRAP data, allowing us to draw conclusions about the impact of different levels of technology on venting and flaring volumes. We did not identify similar data on reduction opportunities offshore. We also interviewed officials from BLM, BOEMRE, EPA, and state agencies, as well as representatives from private industry, including technology vendors and an environmental consultant, regarding the expanded use of available technologies to capture additional vented and flared gas. We conducted background research on venting and flaring reduction technologies, including publicly available EPA Natural Gas STAR case studies. Finally, we obtained royalty information from MRM to calculate the royalty implications of the onshore venting and flaring reductions, and used conversion factors from EPA to calculate the greenhouse gas impacts of the vented and flared natural gas. To assess the federal role in reducing vented and flared gas, we conducted interviews with officials from Interior, EPA, the Department of Energy, state agencies, and members of the oil and gas industry. We also reviewed agency guidance and documentation, other studies related to federal management and oversight of the oil and gas industry, as well as prior GAO work that described limitations in the systems Interior has in place to track oil and gas production on federal leases.
We conducted interviews with officials in six BLM field offices (Farmington and Carlsbad in New Mexico; Vernal, Utah; Glenwood Springs, Colorado; Pinedale, Wyoming; and Bakersfield, California) and staff from BLM headquarters. We also interviewed BOEMRE staff in Denver, Colorado, and New Orleans, Louisiana. We conducted this performance audit from July 2009 to October 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individuals named above, Daniel Haas (Assistant Director), Michael Kendix, Michael Krafve, Robert Marek, Alison O’Neill, David Reed, Rebecca Sandulli, and Barbara Timmerman made important contributions to this report.
The Department of the Interior (Interior) leases public lands for oil and natural gas development, which generated about $9 billion in royalties in 2009. Some gas produced on these leases cannot be easily captured and is released (vented) directly to the atmosphere or is burned (flared). This vented and flared gas represents potential lost royalties for Interior and contributes to greenhouse gas emissions. GAO was asked to (1) examine available estimates of the vented and flared natural gas on federal leases, (2) estimate the potential to capture additional gas with available technologies and associated potential increases in royalty payments and decreases in greenhouse gas emissions, and (3) assess the federal role in reducing venting and flaring. In addressing these objectives, GAO analyzed data from Interior, the Environmental Protection Agency (EPA), and others and interviewed agency and industry officials. Estimates of vented and flared natural gas for federal leases vary considerably, and GAO found that data collected by Interior to track venting and flaring on federal leases likely underestimate venting and flaring because they do not account for all sources of lost gas. For onshore federal leases, operators reported to Interior that about 0.13 percent of produced gas was vented or flared. Estimates from EPA and the Western Regional Air Partnership (WRAP) showed volumes up to 30 times higher. Similarly, for offshore federal leases, operators reported that 0.5 percent of the natural gas produced was vented and flared, while data from an Interior offshore air quality study showed that volume to be about 1.4 percent, and estimates from EPA showed it to be about 2.3 percent. GAO found that the volumes operators reported to Interior do not fully account for some ongoing losses such as the emissions from gas dehydration equipment or from thousands of valves--key sources in the EPA, WRAP, and Interior offshore air quality studies.
Data from EPA, supported by information obtained from technology vendors and GAO analysis, suggest that around 40 percent of natural gas estimated to be vented and flared on onshore federal leases could be economically captured with currently available control technologies. According to GAO analysis, such reductions could increase federal royalty payments by about $23 million annually and reduce greenhouse gas emissions by an amount equivalent to about 16.5 million metric tons of CO2--the annual emissions equivalent of 3.1 million cars. Venting and flaring reductions are also possible offshore, but data were not available for GAO to develop a complete estimate. As part of its oversight responsibilities, Interior is charged with minimizing vented and flared gas on federal leases. To minimize lost gas, Interior has issued regulations and guidance that limit venting and flaring during routine procedures. However, Interior's oversight efforts to minimize these losses have several limitations, including that its regulations and guidance do not address some significant sources of lost gas, despite available control technologies to potentially reduce them. Although EPA does not have a role in managing federal leases, it has voluntarily collaborated with the oil and gas industry through its Natural Gas STAR program, which encourages oil and gas producers to use gas saving technology, and through which operators reported venting reductions totaling about 0.4 percent of natural gas production in 2008. To reduce lost gas, increase royalties, and reduce greenhouse gas emissions, GAO recommends that Interior improve its venting and flaring data and address limitations in its regulations and guidance. Interior generally concurred with these recommendations.
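The royalty and emissions figures above rest on straightforward arithmetic, sketched below. The 12.5 percent royalty rate, $4 per Mcf gas price, and 5.3 metric tons of CO2e per car per year are illustrative assumptions chosen so the orders of magnitude line up; they are not figures taken from the report.

```python
# Back-of-the-envelope sketch of the two quantities discussed above:
# royalty gains from captured gas, and a cars-equivalent for CO2e reductions.
# All three constants below are assumed illustrative values.

ROYALTY_RATE = 0.125          # assumed federal royalty rate
GAS_PRICE_PER_MCF = 4.00      # assumed gas price, dollars per Mcf
CO2E_PER_CAR_TONS = 5.3       # assumed metric tons CO2e per car per year

def royalty_from_captured_gas(mcf_captured: float) -> float:
    """Royalty revenue if captured gas is sold at the assumed price."""
    return mcf_captured * GAS_PRICE_PER_MCF * ROYALTY_RATE

def cars_equivalent(co2e_metric_tons: float) -> float:
    """Express a CO2e reduction as a number of cars removed for a year."""
    return co2e_metric_tons / CO2E_PER_CAR_TONS
```

Under these assumptions, capturing roughly 46 million Mcf yields about $23 million in royalties, and 16.5 million metric tons of CO2e corresponds to roughly 3.1 million cars.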
Since the 1960s, geostationary satellites have been used by the United States to provide meteorological data for weather observation, research, and forecasting. NOAA’s National Environmental Satellite, Data, and Information Service is responsible for managing the civilian operational geostationary satellite system, called GOES. Geostationary satellites can maintain a constant view of the earth from a high orbit of about 22,300 miles in space. NOAA operates GOES as a two-satellite system that is primarily focused on the United States (see fig. 1). These satellites provide timely environmental data about the earth’s atmosphere, surface, cloud cover, and the space environment to meteorologists and their audiences. They also observe the development of hazardous weather, such as hurricanes and severe thunderstorms, and track their movement and intensity to reduce or avoid major losses of property and life. The ability of the satellites to provide broad, continuously updated coverage of atmospheric conditions over land and oceans is important to NOAA’s weather forecasting operations. To provide continuous satellite coverage, NOAA acquires several satellites at a time as part of a series and launches new satellites every few years (see table 1). NOAA’s goal is to have two operational satellites and one backup satellite in orbit at all times. While the GOES-R program has made progress in completing its early design and is nearing the end of the design phase for its flight and ground system components, it completed key design milestones later than planned. Recent technical problems with the instruments and spacecraft, as well as a significant modification to the ground project’s development plan, have delayed the completion of key reviews and led to increased complexity for GOES-R’s development. Moreover, several instrument, spacecraft, and ground system problems identified during design have not yet been resolved.
In addition to the delays, the technical and programmatic challenges experienced by GOES-R’s flight and ground projects have led to increased costs for its development contracts. Despite these problems, program officials report that planned launch dates and cost estimates for the first two satellites have not changed, and that approximately $1.2 billion is currently in reserve to manage future delays and cost growth. However, the program has used approximately 30 percent of its reserves over the last 3 years and significant portions of development remain for major components—including the spacecraft and Core Ground System. In addition, the program did not change its reserves when it restored two satellites, adding approximately $3.2 billion to the program’s baseline. As a result, the program will be challenged in completing its remaining development, particularly the final design and testing of the spacecraft and ground system, within its cost and schedule targets. Two key types of development milestones, which are identified in the GOES-R December 2007 management control plan, are the preliminary design review (PDR) and a more detailed critical design review (CDR). The PDR is an initial design milestone that assesses the readiness of the program to proceed with detailed design activities, and the CDR is intended to demonstrate that the design is complete and appropriate to support proceeding to full-scale software development, as well as fabrication, assembly, integration, and testing. The program planned to complete the flight project’s PDR in April 2010 and CDR in April 2011. It planned to complete the ground project’s PDR in July 2010 and CDR in July 2011. These project-level reviews are to be completed before the program’s comprehensive PDR and CDR can be completed. The program has demonstrated progress toward completing its design.
Specifically, the program and its flight and ground projects have successfully completed their PDRs, demonstrating that they are ready to proceed with detailed design activities. The program and its projects are also currently progressing toward the final design for the entire GOES-R system, which is expected to be completed at the program’s CDR planned for August 2012. However, the program’s PDR milestones were completed later than their 2007 planned dates. Consequently, its CDR milestones were similarly delayed. Table 4 highlights delays in key program milestones as of April 2012. Going forward, assembly and testing of all flight instruments for the first satellite in the series is to be completed by August 2013. Both the flight and ground projects’ development components are expected to be complete by September 2015. Despite recent delays in program milestones, NOAA still expects to meet an October 2015 launch date for the first satellite in the series by utilizing planned schedule reserves. The Flight Project Office has made progress by completing the critical design reviews for each of its five main instruments, which is significant because the instrument designs will be applied to each satellite in the GOES-R series. The Geostationary Lightning Mapper was the most recent instrument to complete its critical design review, which occurred in August 2011—16 months after its planned completion date. Instrument fabrication, assembly, and testing activities are under way. The final instrument scheduled for completion—the Geostationary Lightning Mapper—is expected to be delivered for integration with the spacecraft by August 2013. Although instrument design milestones are complete and assembly and testing activities are under way, each of the instruments and the spacecraft has recently encountered technical challenges. The Flight Project Office has taken steps to resolve several technical problems through additional engineering support and redesign efforts.
However, there are still important technical challenges to be addressed, including signal blurring problems for several of the Advanced Baseline Imager’s infrared channels and Geostationary Lightning Mapper emissions that are exceeding specifications. The project office is monitoring these problems and has plans in place to address them. The current development efforts and recent challenges for components of the flight project are described in table 5. The Ground Project Office has made progress in completing the ground system’s preliminary design. However, in doing so it experienced problems in defining ground system software requirements and identified problems with its dependencies on flight project schedules. Specifically, the project office discovered in early 2011 that software design requirements had not progressed enough to conduct the ground system’s preliminary design review. In addition, the ground system’s development schedule included software delivery dates from flight project instruments that were not properly integrated—the dates had not yet been defined or could not be met. To address these problems, the Ground Project Office made significant revisions to the Core Ground System’s baseline development plan and schedule. In order to avoid potential slippages to GOES-R’s launch date, project officials decided to switch from plans to deliver software capabilities at major software releases to an approach where software capabilities could be delivered incrementally (prior to major releases) as the project received data inputs from the flight project. According to NOAA and program officials, the revised development approach is to provide flexibility in the ground system’s development schedule and reduce risk associated with the original waterfall schedule. However, the revised plan is expected to cost $85 million more than the original plan through the Core Ground System’s completion.
This cost includes increased contractor and government staff, new oversight tools, and more verification and testing activities associated with an increased number of software deliveries. Program officials acknowledged that the revised plan intends to expend additional resources to reduce schedule risk and potential impacts to GOES-R’s launch date, but that it also introduces new cost and schedule risks associated with incremental development, such as more software development and verification activities that require additional government oversight and continuous monitoring. In addition, the program has cancelled previously exercised options to the ground project that were once considered part of its original baseline. In early 2011, the program determined that it could no longer fund Core Ground System contract options—which were estimated to cost approximately $50 million. Program officials stated that they cancelled the contract options due to approaching development commitments, including execution of the ground system’s revised plan and schedule, and funding reductions from fiscal year 2011. According to program officials, the work to be performed under the cancelled contract options could be addressed by NOAA after GOES-R satellites are launched; however, there are currently no plans in place to do so. Table 6 describes the development efforts and recent challenges for ground project components. The GOES-R program’s estimated costs include, among other things, actual and estimated contract costs associated with design, development, integration, and testing activities for the instruments, spacecraft, and ground system as well as procurement of the satellites’ launch vehicles. The estimated costs also include government costs such as NOAA and NASA program management support and contingency reserves to be applied towards critical risks and issues as they arise.
As of January 2010, the program reported estimates of $3.3 billion for the flight project (including reserves of $598 million), $1.7 billion for the ground project (including reserves of $431 million), $2.0 billion for other program costs (including reserves of $617 million), and $748 million for operations and maintenance. Although NOAA has not changed its program cost estimates for the development of GOES-R and GOES-S, contract costs for the instruments, spacecraft, and ground system are rising. Specifically, contractor estimated costs for flight and ground project components grew by $757 million, or 32 percent, between January 2010 and January 2012. Table 7 identifies growth in estimated contract costs for major program components. Not only have development contracts experienced rising costs since January 2010, but they have also experienced larger cost increases more recently. For example, between January 2011 and January 2012, contractors’ estimated costs increased by $184 million for the Core Ground System, compared with $88 million for the period between January 2010 and January 2011. Also, between January 2011 and January 2012, contractors’ estimated costs for the spacecraft and the Advanced Baseline Imager increased by $119 million and $91 million compared with $32 million and $57 million, respectively, between January 2010 and January 2011. Figure 4 depicts recent growth in estimated costs for these development components. The recent growth in contract costs is due in part to the additional labor and engineering support needed to address technical and programmatic problems experienced by flight and ground project components, including the technical complexity associated with development of the Advanced Baseline Imager and the spacecraft, and additional costs associated with the Core Ground System’s revised development plan. 
NOAA stated that some of the cost growth is attributed to scope changes, including instrument options that were exercised in 2010 and 2011. Based on our analysis, approximately $60 million of the $757 million growth in contractors’ estimated costs at completion from January 2010 through November 2011 was due to scope changes associated with new instrument flight model development. Given the recent increases in contract costs, the program plans to determine how to cover these increased costs by reducing resources applied to other areas of program development and support, delaying scheduled work, or absorbing additional life cycle costs. A contingency reserve (also called management reserve) is important because it provides program managers ready access to funding in order to resolve problems as they occur and may be necessary to cover increased costs resulting from unexpected design complexity, incomplete requirements, or other uncertainties. NOAA requires the program office and flight and ground projects to maintain a reserve of funds until their development is completed. Specifically, the flight project is to maintain 20 percent of planned remaining development costs as reserve, the ground project is to maintain 30 percent of planned remaining development costs as reserve, and the program office is to maintain 10 percent of planned remaining development costs as reserve. The program has allocated a proportion of its budget as reserves to mitigate risks and manage problems as they surface during development. As a result of changes in budget reserve allocations and reserve commitments, the program’s reserves have declined in recent years. Between January 2009 and January 2012, the program reported that its reserves fell from 42 percent of remaining development costs to 29 percent. 
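NOAA's reserve floors described above (20 percent for the flight project, 30 percent for the ground project, and 10 percent for the program office, each measured against planned remaining development costs) can be expressed as a simple check. The function and the sample dollar amounts are illustrative; only the percentage floors come from the report.

```python
# Reserve-adequacy check using the report's NOAA reserve floors, stated as
# a fraction of planned remaining development costs. Dollar figures in the
# example are made up for illustration.

RESERVE_FLOORS = {"flight": 0.20, "ground": 0.30, "program_office": 0.10}

def reserve_status(component: str, reserve: float, remaining_dev_cost: float):
    """Return (ratio, meets_floor) for a component's reserve posture."""
    ratio = reserve / remaining_dev_cost
    return ratio, ratio >= RESERVE_FLOORS[component]

# e.g., a ground project holding $300M of reserves against $1,000M of
# remaining development work sits at 30 percent -- exactly at its floor.
ratio, meets_floor = reserve_status("ground", 300.0, 1000.0)
```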
Over the same period, the program reports that after accounting for changes in reserve budgets and reserve commitments, reserves fell from $1.7 billion to $1.2 billion, an approximately 30 percent reduction in its uncommitted reserves. This is important to note since about two-thirds of the development remains for the program’s two most expensive components—the spacecraft and the Core Ground System. Recent utilization of the program’s reserves included addressing unanticipated problems with instrument and spacecraft design, covering higher-than-expected labor needed to accomplish program and project-related milestones, and acquiring additional resources required to execute the revised development plan and schedule for the Core Ground System. In this regard, the program’s independent review board recently raised questions about the sufficiency of the program’s near-term remaining reserves, and program officials decided to establish contractual funding caps for the spacecraft, three instruments—the Advanced Baseline Imager, the Solar Ultraviolet Imager, and the Geostationary Lightning Mapper—and the Core Ground System for fiscal year 2012, delaying work into future years. Figure 5 depicts changes in program reserves over time. At completion of the GOES-R program’s preliminary design review in February 2012, the program reported that it was within its thresholds because it had maintained 20 percent of planned remaining development costs as reserve for the flight project, 32 percent of planned remaining development costs for the ground project, and 10 percent of planned remaining development costs for the program office. The program will soon enter the integration and test phase, when projects typically experience cost and schedule growth and need additional funding—as identified in our prior reviews of NASA acquisitions.
The program may need to draw from its available remaining reserves to address a number of situations, including unanticipated problems during completion of the critical design reviews of the spacecraft and the ground system; unanticipated problems during the integration and test of the instruments and spacecraft, such as required redesign; potential additional labor required for ground system software development; potential delays in the readiness of NOAA’s Satellite Operations Facility for ground system testing, resulting in the use of contractor facilities; and higher than expected costs for launch vehicles. While the program reported that reserves were within accepted levels as of February 2012, the reserves may not be matched to remaining development. Although the program restored two satellites to its budget baseline in February 2011, thereby adding approximately $3.2 billion to its total budget, it did not correspondingly change its program reserves. The program did not report its rationale for maintaining reserves at the two-satellite level or explain how these planned reserves were intended to cover risks associated with the development of all four satellites. As a result, there is limited assurance that the reserves are appropriate for each satellite’s remaining development. Whether the program will continue to stay within its budget depends in part on whether officials have a full understanding of the reserves required for remaining development. Given the program’s recent use of reserves and the significant portions of development remaining for major components, a complete understanding and proper management of reserve levels will be critical to successfully completing all program components. Unless NOAA assesses the reserve allocations across all of the program’s development efforts, it may not be able to ensure that its reserves will cover ongoing challenges as well as unexpected problems for the remaining development of all four satellites in the series.
The success in management of a large-scale program depends in part on having an integrated and reliable schedule that defines, among other things, when work activities and milestone events will occur, how long they will take, and how they are related to one another. Without such a schedule, program milestones may slip. While the GOES-R program has adopted certain scheduling best practices at both the programwide and contractor levels, unresolved weaknesses also exist, some of which have contributed to current program milestone delays and a replanning of the Core Ground System’s schedule. Without a proper understanding of current program status that a reliable schedule provides, managing the risks of the GOES-R program becomes more difficult and may result in potential delays in GOES-R’s launch date. Program schedules not only provide a road map for systematic program execution, but also provide the means by which to gauge progress, identify and address potential problems, and promote accountability. Accordingly, a schedule helps ensure that all stakeholders understand both the dates for major milestones and the activities that drive the schedule. If changes occur within a program, the schedule helps decision makers analyze how those changes affect the program. The reliability of the schedule will determine the credibility of the program’s forecasted dates, which are used for decision making. Our work has identified nine best practices associated with developing and maintaining a reliable schedule. These are (1) capturing all activities, (2) sequencing all activities, (3) assigning resources to all activities, (4) establishing the duration of all activities, (5) integrating schedule activities horizontally and vertically, (6) establishing the critical path for all activities, (7) identifying reasonable “float” between activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations. 
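Two of these practices—establishing the critical path and identifying reasonable float—can be illustrated with a standard forward/backward pass over a small, made-up activity network (this is textbook critical-path method, not GOES-R schedule data):

```python
# Classic critical-path method on a toy network: a forward pass computes
# earliest finishes, a backward pass computes latest starts, and total
# float is the difference between latest and earliest start. Zero-float
# activities form the critical path; delaying any of them delays the finish.

durations = {"A": 3, "B": 2, "C": 4, "D": 2}          # durations in days
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]                          # topological order

early_finish = {}
for a in order:                                       # forward pass
    es = max((early_finish[p] for p in preds[a]), default=0)
    early_finish[a] = es + durations[a]
project_finish = max(early_finish.values())

late_start = {}
for a in reversed(order):                             # backward pass
    succs = [s for s in order if a in preds[s]]
    lf = min((late_start[s] for s in succs), default=project_finish)
    late_start[a] = lf - durations[a]

total_float = {a: late_start[a] - (early_finish[a] - durations[a])
               for a in order}
critical_path = [a for a in order if total_float[a] == 0]
# Here A -> C -> D is critical (9 days); B carries 2 days of float, so it
# can slip by up to 2 days without moving the project finish date.
```

A missing logic link between two activities would inflate their total float in exactly this calculation, which is why the weaknesses discussed below can produce an invalid critical path.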
See table 8 for a description of each of these best practices. The first seven practices are essential elements for creation of an integrated schedule. An integrated schedule that contains all the detailed tasks necessary to ensure program execution is called an integrated master schedule (IMS). The reliability of an integrated schedule depends in part on the reliability of its subordinate schedules. Automated integration of all activities into a single master schedule can prevent inadvertent errors when entering data or transferring them from one file to another. GOES-R has an IMS that is created manually once a month directly from at least nine subordinate contractor schedules. Due to anticipated limitations related to the number of activities that would need to be included, program officials stated that they did not intend to make the IMS a live, integrated file. We believe that the lack of a dynamic IMS is an acceptable schedule limitation for an organization of GOES-R’s size and complexity. Program officials also stated that they are in the process of creating an automated process for updating their IMS from contractor-delivered files sometime in 2012, but this was not available in time for our analysis. To assess the reliability of the programwide IMS, we analyzed four subordinate contractor schedules from July 2011. Our analysis identified several best practices that were met in multiple schedules. For example, similar to the program-level IMS, three of the contractor IMSs included all activities and were supplemented with monthly data received directly from the schedules of their subcontractors, and the fourth—the Core Ground System schedule—included milestone activities for all subcontractors. All four schedules also substantially or fully met the best practice for regularly updating the schedule, including regular reporting of status and keeping the number of activities completed out of sequence to a minimum.
Finally, three of the four schedules substantially met the best practice for establishing accurate activity durations. Our rationale for the selection and analysis of schedules, as well as information on schedule risk analyses conducted with risk simulations, is discussed in app. I. A full set of analysis results is listed in table 9. Selected strengths and weaknesses for each of the four schedules follow the table. Of the four component schedules, the Geostationary Lightning Mapper schedule demonstrated the most comprehensive implementation of best practices. For instance, all activities and durations were appropriately captured in the schedule; logic links were included for over 99 percent of activities; only one small gap was present in the critical path from the date of the schedule through the end of the project; a high percentage of activities had appropriate logic; and the schedule was updated using logic and durations to determine dates. However, more than 10 percent of the resources listed in the schedule were overloaded, meaning that the schedule required more resources than were available. Also, changes in activity durations for one flight model did not result in a corresponding change in the shipment date of that flight model. The Advanced Baseline Imager schedule substantially or fully met three of the best practices. As mentioned above, the Advanced Baseline Imager schedule met the best practices for including all subcontractor records as well as regular updating and reporting on the schedule. In addition, it provided a substantial amount of information regarding the resources in its schedule; the contractor created a series of over 900 named codes that denote detailed information such as resource type, location, and labor rate. However, 18 percent of remaining activities had incomplete or missing logic, and 43 percent had soft date constraints. If the schedule is missing logic links between activities, float estimates will not be accurate.
Incorrect total float values may in turn result in an invalid critical path and an inaccurate assessment of project completion dates. In the case of the Advanced Baseline Imager, many detailed activities had large total float values throughout the schedule. Moreover, the Advanced Baseline Imager schedule did not have a valid critical path for one of its three flight models and did not have a critical path that spans the entire program. The spacecraft schedule substantially met more than half of the best practices. It had a valid critical path through the launch dates for the first two satellites, included and mapped all activities appropriately, included logic links for over 99 percent of its activities, and was appropriately updated. However, more than 10 percent of its activities had constraints, and nearly 10 percent of its activities had gaps between consecutive activities, also known as lags. Lags should be kept to a minimum and should not be used in place of activities because they cannot be easily monitored, cannot be included in a risk assessment, and do not consume resources; where lags stand in for work, they should be replaced with activities that can be tracked. Also, the spacecraft schedule’s critical path included a year-long activity for which a detailed breakdown was not available, even though the activity falls within the current detailed planning period. The Core Ground System schedule contained weaknesses across seven of the nine best practice areas, resulting in scores of partially or minimally met. For instance, a valid critical path could not be traced between the schedule’s latest status date and the launch date for either of the first two satellites; 12 percent of remaining activities had incomplete or missing logic and 13 percent had soft date constraints; and more than 75 percent of activities had more than 100 days of total float. Also, not all subcontractor detail activities were included in the schedule.
Without accounting for all necessary activities, it is uncertain whether activities are scheduled in the correct order, whether missing activities would appear on the critical path, or whether a schedule risk analysis accounts for all risks. Officials for all four contract teams suggested that certain schedule weaknesses were unique to the July 2011 schedules they provided and that the weaknesses would be remedied in December 2011 schedules. Our subsequent analysis did find improvements for each of the four contractors. For example, the Advanced Baseline Imager’s December schedule had approximately 10 percent fewer activities missing predecessors and successors than in July. The Core Ground System schedule also performed better in three of the nine best practices, including the presence of a full set of duration and total float information, and better information on handoffs with external parties. However, many weaknesses from the July schedules remained in the December schedules. For example, officials stated that several schedule risk analyses had been conducted for the Core Ground System schedule, but also reported that the schedule risk analysis conducted by the Core Ground System’s subcontractor was not valid and did not provide accurate or constructive information. Of particular importance is the absence of a valid critical path throughout all the schedules. Establishing a valid program-level critical path depends on the resolution of issues with the respective critical paths for the spacecraft and Core Ground System components. Contract specifications for all four contractors require that these schedules define critical paths for their activities. Without a valid critical path, management cannot determine which delayed tasks will have detrimental effects on the project finish date. Nor can it determine whether total float within the schedule can be used to mitigate delays to critical tasks by reallocating resources from tasks that can slip without affecting the launch date.
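The relationship between total float and the critical path can be made concrete with a small critical-path-method calculation: a forward pass yields early start and finish dates, a backward pass yields late dates, and activities with zero total float form the critical path. The network, activity names, and durations below are a hypothetical illustration, not a GOES-R schedule:

```python
# Minimal critical-path-method (CPM) sketch: the forward pass computes
# early start/finish dates, the backward pass computes late dates, and
# total float is late start minus early start. Zero-float activities
# form the critical path; positive-float activities can slip by that
# amount without delaying the finish date.

def critical_path(durations, predecessors):
    # Topological order via repeated scan (adequate for small networks).
    order, placed = [], set()
    while len(order) < len(durations):
        for a in durations:
            if a not in placed and all(p in placed for p in predecessors[a]):
                order.append(a)
                placed.add(a)

    early_start, early_finish = {}, {}
    for a in order:  # forward pass
        early_start[a] = max((early_finish[p] for p in predecessors[a]), default=0)
        early_finish[a] = early_start[a] + durations[a]

    project_end = max(early_finish.values())
    successors = {a: [b for b in durations if a in predecessors[b]] for a in durations}
    late_start, late_finish = {}, {}
    for a in reversed(order):  # backward pass
        late_finish[a] = min((late_start[s] for s in successors[a]), default=project_end)
        late_start[a] = late_finish[a] - durations[a]

    total_float = {a: late_start[a] - early_start[a] for a in durations}
    return project_end, total_float, [a for a in order if total_float[a] == 0]

# Durations in months; "docs" can slip without moving the finish date.
durations = {"design": 4, "build": 6, "test": 3, "docs": 2}
predecessors = {"design": [], "build": ["design"], "test": ["build"], "docs": ["design"]}
end, floats, path = critical_path(durations, predecessors)
print(end, path, floats["docs"])  # 13 ['design', 'build', 'test'] 7
```

This also shows why missing logic links corrupt the analysis: an activity with no successor gets the project end date as its late finish, inflating its float and potentially dropping it from the critical path even when real downstream work depends on it.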
Unless the weaknesses in these subordinate schedules are addressed, including the generation of valid critical paths in all schedules, the programwide IMS that is derived from them may not be sufficiently reliable. Weaknesses in implementing scheduling best practices undermine the program’s ability to produce credible dates for planned milestones and events, as illustrated by schedule discrepancies that occurred between the ground project and the flight project and the subsequent replanning that was required. The program has already demonstrated a pattern of milestone delays during its development due in part to the scheduling weaknesses we identified. Although the program has initiated two key efforts that could address certain schedule weaknesses, other weaknesses have not yet been resolved. Until the program’s scheduling weaknesses are corrected, it may experience additional delays to its key milestones. The program has revised planned milestone and completion dates for each of the instruments as well as the spacecraft and ground system components by at least 3 months—and up to 2 years—since the program originally estimated dates for key milestones in its December 2007 management control plan. Program officials noted that the December 2007 dates were notional estimates until integrated baseline reviews could be conducted. However, delays occurred both before and after the schedules for the instruments, spacecraft, and ground system components were formalized. In certain cases, more recent changes were due to delays in building and testing satellite components. For example, the Solar Ultraviolet Imager experienced delays in 2011 and 2012 due in part to delays in software development and in procuring flight model parts. Also, a failure in testing power supply boards caused rework delays in 2011 and 2012 on the Geostationary Lightning Mapper instrument.
See figure 6 for a summary of changes in planned completion dates for components of the flight and ground projects. The potential for delays remains as GOES-R’s instruments, spacecraft, and ground system components complete their design and testing phases. According to program officials, the Geostationary Lightning Mapper shipment date remains at risk of a potential slip due to redesign efforts that have delayed the release of the build of the instrument’s electronic board component. The current projected delivery for this instrument is August 2013, leaving only 1 month before it is on the critical path for GOES-R’s launch readiness date. Moreover, weaknesses in implementing schedule best practices make meaningful measurement and oversight of program status and progress, as well as accountability for results, difficult to achieve—which can in turn reduce the timeliness and effectiveness of efforts to understand and mitigate project risks. The program office has taken specific positive actions that address two of the scheduling weaknesses we identified. First, the GOES-R program implemented the Giver-Receiver Intersegment Database, a tool that tracks deliverables between the flight and ground projects. Giver-Receiver Intersegment Database items are formally reviewed by various working groups weekly and monthly before they are incorporated into the GOES-R IMS. This initiative is intended to address a program-recognized need for better horizontal integration (related to best practice 5). Second, the GOES-R program implemented a Joint Cost and Schedule Confidence Level, a set of parametric models designed to identify the probability that a given program’s schedule values will be equal to or less than target values on a specific date. In the Joint Cost and Schedule Confidence Level, simulations are run for the expected duration of activities based on probabilities supplied by officials in the project’s cost and schedule division.
This initiative is intended to address a program-recognized need to conduct a schedule risk analysis (related to best practice 8). However, the possibility of future milestone delays remains. Initial results from the Joint Cost and Schedule Confidence Level from January 2012 indicated that there is a 48 percent confidence level that the program will meet its current launch readiness date of October 2015. Program officials plan to consult with the NOAA Program Management Council to determine the advisability of moving the launch readiness date to a 70 percent confidence level of February 2016. Given that scheduling weaknesses remain unaddressed, even these confidence levels may be unreliable. Establishing accurate confidence estimates depends on reliable data that result from the implementation of a full set of scheduling best practices. Furthermore, delays in GOES-R’s launch date could impact current operational GOES continuity and could produce milestone delays for subsequent satellites in the series. Program documentation indicates that with the current launch readiness date of October 2015, plus an on-orbit testing period, there is a 37 percent chance of a gap in the availability of two operational GOES-series satellites at any one time, assuming a normal lifespan for the satellites currently on-orbit. Any delays in the launch readiness date for GOES-R, which is already at risk due to the increased cost growth and recent heavy use of program reserves discussed previously, would further increase the probability of a gap in satellite continuity. This could result in the need for NOAA to rely on older satellites that are not fully functional. In addition, GOES-R’s schedule reserve is being counted on to complete activities for GOES-S. As a result, delays to certain program schedule targets could impact milestone commitments for GOES-S. 
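A schedule confidence level of the kind the Joint Cost and Schedule Confidence Level produces (for example, the 48 percent figure above) is typically derived by Monte Carlo simulation: sampling each activity's duration from a probability distribution many times and counting how often the project finishes by the target date. The sketch below illustrates the idea with a hypothetical serial chain of activities and triangular distributions; it is not NOAA's model:

```python
# Monte Carlo sketch of a schedule confidence level: sample each
# activity's duration from a (min, most likely, max) triangular
# distribution, sum along a simple serial chain, and report the
# fraction of trials finishing by a target date. All numbers are
# hypothetical; a real model also handles parallel paths, network
# logic, and correlations among risks.
import random

random.seed(42)  # reproducible sketch

activities = [(10, 12, 18), (8, 9, 14), (5, 6, 10)]  # months

def simulate_finish():
    # random.triangular takes (low, high, mode).
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities)

trials = [simulate_finish() for _ in range(20_000)]

def confidence(target_months):
    """Fraction of trials completing at or before the target."""
    return sum(t <= target_months for t in trials) / len(trials)

print(f"P(finish <= 30 months) = {confidence(30):.2f}")
print(f"P(finish <= 34 months) = {confidence(34):.2f}")
```

Moving a launch readiness date later, as NOAA was weighing for February 2016, raises the confidence level simply because more simulated outcomes fall at or before the later target. The results are only as good as the inputs: if the underlying schedule has missing logic or unrealistic durations, the simulated confidence levels inherit those flaws.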
Until the program implements a full set of schedule best practices and applies them to succeeding schedule updates throughout the life of the program, further delays in the program’s launch dates may occur. In particular, without ensuring that all contractor and subcontractor information is included in the IMS and conducting regular schedule risk assessments, program management may not have timely and relevant information at its disposal for decision making. A lack of proper understanding of current program status, caused by schedules that are not fully reliable, undercuts the ability of the program office to manage a high-risk program like GOES-R. Risk management is a continuous process to identify potential problems before they occur. When potential problems are identified early, activities can be planned and invoked as needed across the life of a project to avoid or mitigate their adverse impacts. Effective risk management involves early and aggressive risk identification through the collaboration and involvement of relevant stakeholders. Government and industry risk management guidance divides risk management activities into four key areas—preparing for risk management, identifying and analyzing risks, mitigating risks, and reviewing risks with executive oversight. Table 10 describes recognized best practices in these areas. NOAA has established policies and procedures for effective risk management for GOES-R. For example, the program has documented a strategy for managing risks that includes important elements, such as relevant stakeholders and their responsibilities and the criteria for evaluating, categorizing, and prioritizing risks. The program’s approved risk management plan also includes requirements for risk mitigation—such as required actions, deadlines, and assigned risk owners—as well as requirements that risks’ status and changes are periodically reviewed by appropriate managers, including senior NOAA and NASA managers.
Table 11 compares GOES-R’s risk management policies and procedures with recognized risk management practices for four areas. With such policies and procedures in place, NOAA has established a comprehensive framework to support its identification, mitigation, and oversight of risks across the program, and laid a foundation for consistent implementation. While the program has a well-defined risk management process, this process has not been fully implemented. Table 12 identifies the extent to which the program has implemented recognized risk management practices through its risk process. Of particular note, the program has not conducted and documented adequate or timely evaluations for all potential risks—called candidate risks in the risk management plan—in its risk list. In addition, the program did not always document its risk handling strategies and time frames and did not always provide adequate rationale for its decisions to close risks. The GOES-R Program Office has identified 11 risks that it considers critical (medium- or high-level risks that could significantly impact the program’s development cost or schedule), documented mitigation approaches for each risk, and tracked mitigation progress. For these and its other risks, the program generally has mitigation plans in place that typically include actions to be taken, deadlines, and assigned responsibilities that could help to minimize or control the occurrence of these critical risks. Table 13 describes the program-identified critical risks, as of February 2012. While the program has documented mitigation plans in place for most of its critical risks, the program is not mitigating its most critical risk—program funding stability. Although the program has included this risk on its top risk list and presented it to the NOAA Program Management Council, it has not devised options for required replanning or functionality descopes should the program experience reduced funding.
Program officials stated that the risk is external and beyond their control. However, given that the program has made trade-off decisions regarding available funding, functionality, and the timing of its work, it is reasonable to expect the program to have plans in place that include possible trade-off decisions based on different outcomes, including triggers for when decisions need to be executed. Further, at least one critical risk that is not on the program’s top risk list could jeopardize the program’s launch readiness dates and life cycle costs: GOES-S milestones may be affected by delays that have occurred or may still occur during GOES-R development. As discussed earlier in this report, further delays in the development of the first satellite could result in problems for the second satellite’s scheduled milestones since the program is planning to complete the second satellite’s activities during periods of time set aside as reserve to complete the first satellite’s activities. Program officials stated that a risk has recently been added to the flight project’s risk list to reflect the potential delays to the GOES-S development schedule. Until this risk is added to the program’s risk list with a documented mitigation approach and regular monitoring, NOAA could delay the analysis, planning, and actions that would limit the impacts and occurrence of the risk, and would thus be unprepared to face the significant consequences to the program should this risk be realized. While the program has well-defined policies and procedures, it has not fully implemented its risk process. Fully implementing the recognized risk management practices defined in GOES-R policies and procedures would provide program officials with the assurance that all risks—including those that are new and most critical—are adequately addressed.
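One of the implementation gaps described above, candidate risks left unevaluated for months, lends itself to a simple automated check against the risk register: flag any undispositioned entry older than the review threshold. The register entries, field names, and 90-day threshold below are hypothetical illustrations, not GOES-R data:

```python
# Sketch of a candidate-risk aging check: flag register entries that
# have not been dispositioned within a review threshold. Entries,
# field names, and the threshold value are hypothetical.
from datetime import date

REVIEW_THRESHOLD_DAYS = 90

register = [
    {"id": "C-01", "title": "GOES-S milestone impact",
     "submitted": date(2011, 10, 1), "dispositioned": False},
    {"id": "C-02", "title": "Instrument parts procurement",
     "submitted": date(2012, 1, 15), "dispositioned": True},
]

def overdue_candidates(entries, as_of):
    return [e["id"] for e in entries
            if not e["dispositioned"]
            and (as_of - e["submitted"]).days > REVIEW_THRESHOLD_DAYS]

print(overdue_candidates(register, date(2012, 2, 1)))  # ['C-01']
```

A check of this shape, run at each risk review board meeting, would surface stale candidate risks before they age past the plan's review window rather than after.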
NOAA has made progress toward achieving GOES-R program goals by completing certain preliminary and critical design milestones for its flight and ground projects. However, associated reviews, including the programwide critical design review planned for August 2012, have been delayed by many months. Progress was also accompanied by technical and programmatic challenges, such as initial failures of instrument components and replanning of ground project software releases. Although these two specific challenges are being addressed, others such as instrument signal problems and a 2-year cost growth of more than 30 percent for program development contracts point to troubling patterns that will require ongoing remediation and monitoring. NOAA has allocated budget reserves for such situations according to its established guidelines and reports that project reserve levels are currently within those guidelines. However, the program has used approximately 30 percent of its reserves over the last 3 years and significant portions of development remain for major components—including the spacecraft and Core Ground System. In addition, the program did not change its reserves when it restored two satellites, a change that added approximately $3.2 billion to the program’s baseline. Unless NOAA assesses the reserve allocations across all of the program’s development efforts, it may not be able to ensure that its reserves will cover ongoing challenges as well as unexpected problems for the remaining development of all four satellites in the series. The unreliability of the program’s integrated master schedule and some contractor schedules adds further uncertainty about whether the program will meet its commitments. The issues that exist among these schedules, such as a lack of a full and consistent allocation of resources, incomplete logic, and gaps in the critical path to project completion, are inconsistent with our best practices and the agency’s own.
NOAA has taken steps to improve schedule reliability through more automated schedule integration, a cross-project deliverable-tracking database, and its first schedule risk assessment. However, unless the program addresses the full set of scheduling weaknesses we identified, its schedules may not provide a fully reliable basis for decision making. NOAA has defined the policies and procedures it needs to effectively manage and mitigate these and other program risks. Nevertheless, officials do not have a current and comprehensive view of program risk because these policies and procedures have not been fully implemented. Most significantly, not all known candidate risks have been evaluated; corrective actions have not been consistently tracked; and risks we identified in this review are not being tracked or adequately mitigated. Until program officials diligently execute all of the program’s defined risk management practices and integrate these with improvements in the management of reserves and schedules, the program is at risk of exceeding cost and schedule targets and further slipping launch dates for satellites in the GOES-R series. To improve NOAA’s ability to execute GOES-R’s remaining planned development with appropriate reserves, improve the reliability of its schedules, and address identified program risks, we recommend that the Secretary of Commerce direct the NOAA Administrator to ensure that the following four actions are taken: Assess and report to the NOAA Program Management Council the reserves needed for completing remaining development for each satellite in the series. 
For all satellites in the GOES-R program, including those for which detailed scheduling has yet to begin, address shortfalls in the schedule management practices we identified, including but not limited to incorporating appropriate schedule logic, eliminating unnecessary constraints, creating a realistic allocation of resources, ensuring an unbroken critical path from the current date to the final satellite launch, and ensuring that all subcontractor activities are incorporated in contractors’ integrated master schedules. Execute the program’s risk management policies and procedures to provide more timely and adequate evaluations and reviews of newly identified risks, documented handling strategies for all ongoing and newly identified risks in the risk register, time frames for when risk mitigation and fallback plans are to be executed, adequate rationale for decisions to close risks, and documentation and tracking of action items from risk review board meetings or other meetings with senior NOAA and NASA managers through completion. Given the potential impact to the program, add the risk that GOES-S milestones may be affected by GOES-R development to the program’s critical risk list, and ensure that this risk and the program- identified funding stability risk are adequately monitored and mitigated. We received written comments on a draft of this report from the Secretary of Commerce, who transmitted NOAA’s comments. The department concurred with three of our recommendations and partially concurred with one recommendation. It also provided general comments, which are addressed below, and technical comments, which we have incorporated into our report, as appropriate. A copy of NOAA’s comments is provided in appendix II. NOAA concurred with our first recommendation to assess and report on the reserves needed for completing the remaining development for each satellite in the series to the agency’s Program Management Council. 
It stated that the GOES-R program would continue to provide status reports on contingency reserves to the Goddard Space Flight Center and the Program Management Council, and would work with NOAA to ensure contingency reporting meets its needs. It also stated in its general comments that contingency reserve for GOES-T and GOES-U (amounting to 20 percent of their projected development costs) was included when the $3.2 billion for these satellites was added to the program budget baseline in February 2011. NOAA did not provide any additional data on its fiscal year 2012 contingency budget allocations by satellite to support this statement and, as we discussed in this report, the program did not report a significant change in overall program reserve levels when it revised the baseline from two satellites to four satellites. NOAA concurred with our second recommendation to address shortfalls that we identified in schedule management practices and stated that the program will continue to bring down the number of errors in the schedules and improve the fidelity of the program’s integrated master schedule. NOAA partially concurred with our third recommendation to fully execute the program’s risk management policies and procedures, to include timely review and disposition of candidate risks. NOAA stated that it did not consider the “concerns” listed in its risk database to be risks or candidate risks, and that the risk management board actively determines whether recorded concerns should be elevated to a risk. However, the program has not treated concerns in accordance with its risk management plan, which considers these to be “candidate risks” and requires their timely review and disposition, as evidenced by the many concerns in the database that were more than 3 months old and had not been assessed or dispositioned. 
Unless NOAA follows its risk management plan by promptly evaluating “concerns,” it cannot ensure that it is adequately managing the full set of risks that could impact the program. NOAA concurred with our fourth recommendation to add to the program’s critical risk list that GOES-S milestones may be affected by GOES-R development and to ensure that this risk and the program-identified funding stability risk are adequately monitored and mitigated. NOAA stated that it currently has identified and reported on the GOES-S milestone risk and that the program is working with NOAA to monitor and mitigate the funding stability risk. In a general comment on our discussion of contractors’ estimated cost growth, NOAA stated that certain growth in contract costs was due to scope changes, including instrument options that were exercised in 2010 and 2011, and stated that such changes should be distinguished from cost growth due to efforts to resolve problems. We consider scope changes to be a valid component of program cost growth. Nevertheless, we revised our report to state that approximately $60 million of the $757 million growth in contractors’ estimated costs at completion from January 2010 through November 2011 was due to scope changes associated with new instrument flight model development. If these scope changes are excluded, contractors’ estimated costs for flight and ground project components still grew by approximately $700 million, or approximately 30 percent, between January 2010 and January 2012. NOAA commented that the GOES-R launch readiness date changed from December 2014 to October 2015 due to a protest against the spacecraft contract award. We acknowledged the reasons for delays in GOES-R’s launch readiness date in our draft report submitted to NOAA and in our prior reports on this program, and have added language to this report that relates to the bid protest and NASA’s response in delaying the award. 
NOAA also commented that the revised incremental schedule was less risky than its original waterfall schedule. While we acknowledged the reduced risk in our draft report and have further revised our report to reflect NOAA’s comments on our draft, we also note that the revised development methodology introduced other risks to the program—such as additional contractor staff and software development and verification activities that require government oversight and continuous monitoring. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 1 day from the report date. At that time, we will send copies to interested congressional committees, the Secretary of Commerce, the Administrator of NASA, the Director of the Office of Management and Budget, and other interested parties. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to (1) assess the National Oceanic and Atmospheric Administration’s (NOAA) progress in developing the Geostationary Operational Environmental Satellite-R series (GOES-R) program, (2) evaluate whether the agency has a reliable schedule for executing the program, and (3) determine whether the program is applying best practices in managing and mitigating its risks. To assess progress in developing the GOES-R satellite program, we compared the program’s planned completion dates for key milestones identified in its management control plan and system review plan against actual and current estimated completion dates of milestones. 
We analyzed program monthly status briefings to identify the current status and recent development challenges of flight and ground project components and instruments. We also analyzed contractor- and program-reported data on development costs and reserves. Finally, we conducted several interviews with GOES-R program staff to better understand milestone time frames, to discuss current status and recent development challenges for work currently being performed on GOES-R, and to understand how the program reports costs and reserve totals. To evaluate whether NOAA has a reliable schedule for executing the program, we evaluated contractor schedules to determine the extent to which GOES-R is following our identified best practices in creating and maintaining its schedules. We analyzed four contractor schedules—two of which represent the overall schedules for the flight and ground projects (spacecraft and Core Ground System), the program’s critical flight instrument (Advanced Baseline Imager), and an instrument that had been experiencing implementation issues at the time of our review (Geostationary Lightning Mapper). We also populated workbooks as a part of that analysis to highlight potential areas of strength and weakness in schedule logic, use of resources, task duration, float, and task completion; analyzed programwide initiatives undertaken by GOES-R such as the Joint Cost and Schedule Confidence Level and the Giver-Receiver Intersegment Database; assessed GOES-R’s progress against its own scheduling requirements; and interviewed government and contractor officials regarding their scheduling practices. To determine whether the GOES-R program is applying best practices in managing and mitigating its risks, we analyzed the program’s risk management plan to identify the program’s policies and procedures and compared these to outputs from the program’s risk register and other program risk management documentation, such as risk mitigation status reports.
We also reviewed documents such as monthly briefings to the NOAA Program Management Council and program risk status reports to determine how individual risks are managed and reviewed on a regular basis. We assessed the extent to which the program’s policies and procedures met government and industry recognized risk management practices from the Software Engineering Institute’s Capability Maturity Model® Integration for Acquisition (Version 1.3), the International Organization for Standardization, the Defense Acquisition University, and GAO, and whether the program fully implemented its policies and procedures. We also examined the program’s risk status reports and risk register to identify risks for which the program did not have an adequate mitigation plan or were not being actively monitored by the program. Finally, we interviewed GOES-R officials to gain further insight into the program’s risk management process and the program’s application of its process. We primarily performed our work at NOAA and National Aeronautics and Space Administration offices in the Washington, D.C., metropolitan area. We conducted this performance audit from August 2011 to June 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, individuals making contributions to this report included Colleen Phillips (assistant director), Paula Moore (assistant director), Shaun Byrnes, Nancy Glover, Franklin Jackson, Jason Lee, and Josh Leiling.
The GOES-R series is a set of four satellites intended to replace existing weather satellites that will likely reach the end of their useful lives in about 2015. NOAA estimates the series to cost $10.9 billion through 2036. Because the transition to the series is critical to the nation’s ability to maintain the continuity of data required for weather forecasting, GAO reviewed NOAA’s management of the GOES-R program. Specifically, GAO was asked to (1) assess NOAA’s progress in developing the GOES-R satellite program, (2) evaluate whether the agency has a reliable schedule for executing the program, and (3) determine whether the program is applying best practices in managing and mitigating its risks. GAO analyzed program management, acquisition, and cost data; evaluated contractor and program-wide schedules against best practices; analyzed program documentation including risk management plans and procedures; and interviewed government and contractor staff regarding program progress and challenges. The Geostationary Operational Environmental Satellite-R series (GOES-R) program has made progress by completing its early design milestones and is nearing the end of the design phase for its spacecraft, instrument, and ground system components. While the program continues to make progress, recent technical problems with the instruments and spacecraft, as well as a significant modification to the ground project’s development plan, have delayed the completion of key reviews and led to increased complexity for the development of GOES-R. The technical and programmatic challenges experienced by the flight and ground projects have led to a 19-month delay in completing the program’s preliminary design review. Nevertheless, program officials report that its planned launch date of October 2015 for the first satellite has not changed. 
While the program reports that approximately $1.2 billion is currently in reserve to manage future delays and cost growth, significant portions of development remain for major components. As a result, the program may not be able to ensure that it has adequate resources to cover ongoing challenges as well as unexpected problems for the remaining development of all four satellites. Success in managing a large-scale program depends in part on having a reliable schedule that defines, among other things, when work activities and milestone events will occur, how long they will take, and how they are related to one another. To its credit, the program has adopted key scheduling best practices and has recognized certain scheduling weaknesses. It has also recently instituted initiatives to automate its integrated master schedule, correct integration problems among projects, and assess schedule confidence based on risk. However, unresolved schedule deficiencies remain in its integrated master schedule and the contractor schedules that support it, which have contributed to a replanning of the ground system schedule and to potential delays in satellite launch dates. The program recently determined that the likelihood of the first satellite meeting its planned October 2015 launch date is 48 percent. Based on this planned launch date, the program reports that there is a 37 percent chance of a gap in the availability of two operational GOES-series satellites, which could result in the need for the National Oceanic and Atmospheric Administration (NOAA) to rely on older satellites that are not fully functional. Until its scheduling weaknesses are addressed, it will be more difficult for the program to know whether its planned remaining development is on schedule. NOAA has established policies and procedures that conform with recognized risk management best practices.
For example, the program has documented a strategy for managing risks that includes important elements, such as relevant stakeholders and their responsibilities and the criteria for evaluating, categorizing, and prioritizing risks. However, while the program has a well-defined risk management process, it has not been fully implemented. For example, the program has not provided adequate or timely evaluations of potential risks, has not always provided adequate rationale for decisions to close risks, and has at least two critical risks in need of additional attention. Until all defined risk management practices are diligently executed and critical risks adequately mitigated, the GOES-R program is at risk of exceeding cost and schedule targets, and launch dates could slip. GAO is making recommendations to NOAA to assess and report reserves needed over the life of the program, improve the reliability of its schedules, and address identified program risks. NOAA concurred or partially concurred with GAO’s recommendations.
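The evaluate-categorize-prioritize cycle described above is commonly implemented as a risk register in which each risk receives likelihood and impact scores, and their product determines mitigation priority. The sketch below is a generic illustration of that practice; the entries, scoring scale, and "critical" threshold are hypothetical, not the GOES-R program's actual register or criteria.

```python
# Minimal sketch of risk-register prioritization: each risk is scored on
# likelihood and impact (1-5 scales here), and the product ranks risks so
# the highest-exposure items get mitigation plans first.
# All entries and thresholds below are hypothetical.

risks = [
    {"id": "R-01", "title": "Instrument schedule slip", "likelihood": 4, "impact": 5},
    {"id": "R-02", "title": "Ground system replan",     "likelihood": 3, "impact": 4},
    {"id": "R-03", "title": "Staffing shortfall",       "likelihood": 2, "impact": 2},
]

def score(risk):
    """Exposure score: likelihood times impact."""
    return risk["likelihood"] * risk["impact"]

# Rank risks by exposure and flag the ones exceeding a (hypothetical)
# criticality threshold for additional management attention.
for risk in sorted(risks, key=score, reverse=True):
    level = "critical" if score(risk) >= 15 else "routine"
    print(f'{risk["id"]} {risk["title"]}: score {score(risk)} ({level})')
```

A register like this also makes closure decisions auditable: a risk should leave the list only with a recorded rationale, which is the kind of documentation the report found lacking.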
NAFTA, which was agreed to by Canada, Mexico, and the United States in 1992 and implemented in the United States through legislation in 1993, contained a timetable for the phased removal of trade barriers for goods and services between the three countries. Beginning December 18, 1995, Mexican trucking companies were to have been able to apply for the authority to deliver and backhaul cargo between Mexico and the four U.S. border states. However, on that date the Secretary of Transportation announced an indeterminate delay because of safety and security concerns. NAFTA’s timetable calls for all limits on cross-border access (i.e., truck travel within the three countries) to be phased out by January 2000. Until expanded access is granted, trucks from Mexico continue to be limited to commercial zones along the border (generally, areas between 3 and 20 miles from U.S. border towns’ northern limits, depending on each town’s population). For several decades, the United States has been expanding inspection and enforcement programs nationwide to encourage safer U.S. trucks and truck operation. DOT has, among other things, (1) issued minimum safety standards for trucks and commercial drivers, (2) provided grants to states to develop and implement programs that would lead to the enforcement of these safety standards, and (3) conducted reviews of about one-third of all domestic interstate trucking companies in order to determine overall compliance with safety regulations. Through the Motor Carrier Safety Assistance Program (MCSAP), DOT works in partnership with states to enforce federal truck regulations. As the states adopt federal safety regulations, DOT provides financial assistance for enforcement. Although DOT maintains a presence in all states to promote truck safety and requires that states comply with minimum federal regulations and requirements related to truck safety, it relies on the states to develop their own strategies for enforcement. 
NAFTA also established the Land Transportation Standards Subcommittee to work toward compatible truck safety and operating standards among the countries. While U.S. and Canadian commercial trucking regulations are largely compatible, major differences existed between U.S. and Mexican regulations concerning drivers’ qualifications, the hours of service, drug and alcohol testing, the condition of vehicles (including their tires, brakes, parts, and accessories), accident monitoring, and the transport of hazardous materials. According to DOT, progress has been made in making truck safety and operating standards compatible, and discussions are still ongoing. NAFTA’s three member nations have accepted the truck inspection standards established by the Commercial Vehicle Safety Alliance (CVSA). For the most part, there are two types of inspections conducted according to the trilaterally accepted truck inspection guidelines—“level-1” and “level-2” inspections. The level-1 inspection is the most rigorous—a full inspection of both the driver and vehicle. The driver inspection includes ensuring that the driver has a valid commercial driver’s license, is medically qualified, and has an updated log showing the hours of service. The level-1 vehicle inspection includes a visual inspection of the tires and of the brakes’ air pressure, among other things, and an undercarriage inspection that covers the brakes, frame, and suspension (see fig. 1). The level-2 inspection, also known as a “walk-around inspection,” includes a driver inspection and a visual inspection of the vehicle. It does not include the careful undercarriage inspection. Trucks that fail inspections for serious safety violations are placed out of service—that is, they are halted until the needed repairs are made.
From January 1996 (the first full month of detailed records of inspections) through December 1996 (the most recent month for which data were available as of March 1997), federal and state safety inspectors conducted over 25,000 safety inspections; during that period, about 3 million Mexican trucks crossed into the United States. These inspections resulted in an out-of-service rate of about 45 percent for serious safety violations. The monthly out-of-service rates ranged from 39 percent to 50 percent, with no consistent trend (see fig. 2). The average monthly out-of-service rate of 45 percent compares unfavorably with the 28-percent rate for 1.8 million U.S. trucks inspected on the nation’s roads during fiscal year 1995 (the most recent year for which nationwide data are available). However, because inspectors target for inspection vehicles and drivers that appear to have safety deficiencies, their selections are not random. As a result, the out-of-service rates may not necessarily reflect the general condition of all vehicles. In addition, while about half of the 1.8 million inspections of U.S. trucks were level-1 inspections, only slightly more than one-quarter of the inspections of trucks from Mexico were this type. Level-1 inspections are more stringent than level-2 inspections and result in higher out-of-service rates. Consequently, if more of the inspections of trucks from Mexico had been level-1 inspections, the resulting overall out-of-service rate likely would have been somewhat greater than 45 percent. The out-of-service rates for trucks entering the United States from Mexico have also been substantially greater than those for U.S. trucks operating within individual border states (see fig. 3). California’s data show less disparity, which may be because regular inspections since the late 1980s have made Mexican carriers traveling into California more knowledgeable about U.S. truck safety standards.
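The effect of the inspection mix noted above can be shown with a simple blended-rate calculation. The level-specific out-of-service rates used here are hypothetical, chosen only so that the weighted average matches the reported 45 percent at roughly a one-quarter level-1 share; they are not GAO data.

```python
# Illustrative only: how shifting the mix toward the more stringent
# level-1 inspections raises the overall (blended) out-of-service rate.
# The per-level rates below are hypothetical, not GAO figures.

def blended_oos_rate(level1_share, level1_rate, level2_rate):
    """Weighted-average out-of-service rate for a given inspection mix."""
    return level1_share * level1_rate + (1 - level1_share) * level2_rate

# Hypothetical per-level rates consistent with a ~45% blend when about
# one-quarter of inspections are level-1:
LEVEL1_RATE = 0.55
LEVEL2_RATE = 0.42

mexico_mix = blended_oos_rate(0.27, LEVEL1_RATE, LEVEL2_RATE)    # ~1/4 level-1
us_style_mix = blended_oos_rate(0.50, LEVEL1_RATE, LEVEL2_RATE)  # ~1/2 level-1

print(f"blended rate at a 27% level-1 share: {mexico_mix:.1%}")
print(f"blended rate at a 50% level-1 share: {us_style_mix:.1%}")
```

With these assumed per-level rates, raising the level-1 share from about one-quarter to one-half pushes the blended rate from roughly 45 percent to about 48 percent, consistent with the observation that more level-1 inspections would have yielded a rate somewhat greater than 45 percent.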
Federal and state truck inspectors we contacted in Arizona, California, and Texas told us that trucks from Mexico are upgrading equipment to improve safety. In their opinion, trucks from Mexico are safer now than they were in late 1995. For example, the inspectors told us that they often find fewer violations per truck, and some previous violations (such as instances of drivers sitting on milk crates rather than secured seats) are now seldom seen. They credit the increased inspections at the border (discussed later in this report) with heightening Mexican carriers’ awareness of and willingness to comply with U.S. truck safety requirements. They commented that the inspections have helped bring about improvements with tires, brakes, and other equipment. Also, many Mexican drivers we spoke to were eager to learn about U.S. safety regulations so they could strive to meet them. Many U.S. and Mexican trucking industry and association officials we contacted said that the relatively high out-of-service rates for trucks from Mexico do not mean that Mexican truck operators will drive unsafe trucks into the United States once access to the remaining portions of the border states and to the United States as a whole is granted. They told us that most trucks currently operating and being inspected at border crossings are used exclusively for short-haul operations and tend to be older trucks that are more likely to have equipment problems leading to out-of-service violations. They believe that Mexican truck operators choosing to operate farther into the United States will use higher-quality trucks because doing so is in their interest. For instance, Mexican trucking companies would not want their trucks to break down or to be taken out of service far from their bases of operations, where repairs would be more difficult and costly, the officials explained. While this reasoning seems plausible, we were unable to obtain information that would confirm or refute it. 
Most trucks from Mexico enter the United States at 7 of the 23 crossing points for commercial trucks. To provide some assurance that the 12,000 trucks crossing from Mexico into the United States each day will be safe and operated safely, the three border states in our review and DOT have increased enforcement markedly at the major border locations. Although there are 23 locations where northbound trucks from Mexico may enter the United States, about 90 percent of the trucks enter at 7 major crossings—in California (Otay Mesa and Calexico), Arizona (Nogales), and Texas (El Paso, Laredo, McAllen, and Brownsville) (see fig. 4). Trucks from Mexico enter the United States through the U.S. Customs Service’s ports of entry. Trucks passing through Customs then enter truck inspection facilities where such facilities exist. At locations where separate permanent facilities do not exist, Customs has generally allowed state and federal truck inspectors to carry out their safety inspections on the agency’s property. Permanent facilities allow more rigorous truck inspections to take place, provide scales and measuring devices to screen all trucks for the violations of being overweight or oversize, provide cover to keep inspectors out of the extreme heat prevalent at the border, and signal to the trucking community a permanent commitment to enforcing truck safety standards. In the past year, California opened two permanent truck inspection facilities at its major border crossings, where it aims to inspect and certify the trucks entering the state from Mexico once every 3 months. Texas, with about two-thirds of the truck traffic from Mexico, and Arizona, with about 10 percent of the traffic, have no permanent truck inspection facilities at any of their border locations. Discussions within Texas and Arizona are under way regarding constructing at least one permanent facility in each state.
As of January 1997, the three border states in our review had 93 truck inspectors stationed at border crossing locations (see table 1). In addition, DOT approved new temporary positions for 13 truck safety inspectors and, as of January 1997, had 11 of them working at the border. (The 13 positions are for a 2-year term only.) These federal truck inspectors took over for six DOT safety specialists who had been temporarily reassigned to inspect trucks from Mexico at border locations from December 1995 through August 1996. DOT does not customarily conduct roadside inspections at fixed locations. Most state truck inspectors (83 of 93) have been stationed at the major border crossing locations. A year earlier, the three border states in our review had 39 inspectors assigned to the major border crossing locations (see table 2). In addition, DOT has assigned its inspectors to each state and then, with one exception, assigned them to the busiest locations within each state. There are relatively few federal inspectors, and their appointments are temporary, since, under MCSAP, states have the primary responsibility for developing enforcement strategies. California, with about 24 percent of truck traffic from Mexico, has the most rigorous border state truck inspection program and has been inspecting trucks from Mexico in its commercial zones for several years. In 1996, California opened permanent truck inspection facilities at its two major border locations—Otay Mesa and Calexico (see fig. 5). California constructed these facilities, which cost about $15 million each, with federal and state highway funds that had been earmarked by the state for roadway projects because it considered these facilities to be of a higher priority. California’s decision was made easier because land was available for purchase adjacent to Customs’ ports of entry.
These facilities have been allocated a total of 47 full-time inspectors: Twenty-three are California Highway Patrol officers, and 24 are civilian truck inspectors. The use of civilian inspectors, for whom the pay and training costs are less, has helped boost California’s overall number of inspectors. The state inspectors are assisted by two federal inspectors. The state officials in charge of operations at these facilities told us that one of their objectives is to inspect and certify every truck from Mexico at least once every 90 days. Additionally, all trucks from Mexico are weighed and checked for proper size before traveling on U.S. roads. Currently, California has enough inspectors at its ports of entry that many of them spend their time on roads in border zones checking the safety of U.S. trucks operating in the area. With about 66 percent of all truck traffic from Mexico (more than 2 million truck crossings in fiscal year 1996) and four of the seven major border crossing locations, Texas continues to face the greatest enforcement burden. (Figure 6 shows aspects of the four Texas locations.) Texas’ situation has been more complicated because three of its major locations have had two or three bridges each, where trucks cross the Rio Grande into the United States. However, in mid-1996 Customs consolidated the truck traffic in McAllen, Texas, by closing one of the two bridges to northbound trucks. Such consolidation might be possible for other major locations in Texas. As of January 1997, Texas had no permanent truck inspection facilities at any of its 11 border locations. In Laredo, for example, inspectors work in an uncovered parking area in extreme heat and humidity for much of the year.
State and federal officials have announced plans to retrofit some existing buildings to establish a truck inspection facility at Texas’ fourth busiest truck crossing location just outside of McAllen, although federal and state officials have not set a completion date for this project. According to state transportation officials, state truck enforcement officials, and transportation authorities in academia, four primary reasons have kept Texas from building truck inspection facilities at border locations: Key state agencies see NAFTA as a national issue and are reluctant to use state funds to enforce its provisions; most of the major border crossings are in urban areas (Laredo, El Paso, and Brownsville), where little space is available to accommodate truck inspection facilities that would be adjacent to border entry points; the state agency responsible for inspecting trucks, the Department of Public Safety, has traditionally worked (and prefers to work) in a roving fashion, conducting roadside truck inspections rather than working out of one location; and many Texas border cities have developed close economic and social relationships with their Mexican sister cities directly across the border and resist increased inspections if they perceive that a major crackdown on trucks could undermine such relationships. As of December 1995, Texas had 22 officers and troopers (inspectors) covering its 11 border locations, but about a year later, as of January 1997, Texas had increased this staffing by nearly 70 percent to 37. Traditionally, these inspectors spent only about 25 percent of their time actually inspecting trucks, but, according to state officials, in 1996 that percentage grew substantially. Eight of the 13 federal truck inspector positions have been allocated to Texas’ major border locations.
Also, state truck inspectors in Texas have trained small cadres of local police officers in Brownsville, Laredo, and El Paso to check trucks and drivers periodically for safety. For example, according to an El Paso official, 29 city police officers were trained to perform truck inspections in November 1995, and, as of December 1996, those officers were performing inspections on U.S. and Mexican trucks 1 day out of every 2 weeks, on average. Arizona receives about 10 percent of the total truck traffic from Mexico—about 314,000 crossings in fiscal year 1996. Of the state’s six ports of entry, Nogales received the majority (about 72 percent) of these trucks. As of January 1997, Arizona had no permanent truck inspection facilities, but state officials were discussing whether to build one near the Nogales port of entry (see fig. 7). As of September 1996, two state inspectors were permanently stationed at the border—one in Nogales and one in San Luis. Recently passed state legislation, however, increased this number to nine in November 1996—seven near Nogales and two in San Luis. However, according to a state enforcement official, in early 1997 Customs withdrew its permission for state enforcement personnel to conduct their enforcement activities on the Nogales Customs lot. He told us that state inspectors no longer conduct inspections in the Customs lot and are now performing their enforcement activities away from the border. In addition, as of September 1996, there were two federal truck inspectors assigned to Nogales and one assigned to San Luis. A DOT official told us that the federal inspectors are still working out of the Nogales Customs lot and that DOT is trying to reach a formal agreement with Customs to allow both federal and state truck safety inspections at this location. DOT has developed a strategy to help implement NAFTA. This strategy entails measures to be taken in the border states and within Mexico to improve compliance with U.S. 
truck safety regulations, such as providing funding for state enforcement activities and educational campaigns on U.S. safety regulations directed at Mexican drivers and trucking companies. Opportunities exist for increasing the strategy’s effectiveness. These opportunities would involve (1) helping the border states establish results-oriented enforcement strategies for trucks entering the United States from Mexico and (2) working with other federal and state agencies so that the seven major border locations have at least minimum truck safety inspection facilities. These actions, if undertaken, would also help DOT better understand the degree to which U.S. safety regulations are being complied with as a prelude to opening all of the United States to commercial trucks from Mexico. According to DOT officials, the Department’s goals are to foster a “compliance mind set” among Mexican truck operators and to see a continuous improvement in adhering to U.S. truck safety standards. To meet these goals, DOT has a three-pronged strategy that consists of (1) cooperative federal and state enforcement of U.S. safety and operating standards, (2) the dissemination of information to ensure that Mexican truck operators have what they need to know to operate in the United States, and (3) the development of compatible safety and operating standards in all three NAFTA countries. Several of the specific initiatives under this strategy are:

- developing a “safety assessment process” that the Mexican government can use to determine the extent to which Mexican operators (1) understand their obligations and the processes the United States uses in truck safety enforcement and (2) comply with U.S. requirements;

- providing more than $1 million annually since fiscal year 1995 in grants to the four border states to prepare for enforcement activities related to NAFTA, such as increasing the number of state inspectors stationed at the border;

- conducting educational campaigns on U.S. safety standards, including training seminars and leaflets, for Mexican drivers and truck companies;

- approving 13 DOT truck inspector positions for 2 years to demonstrate a federal commitment to truck safety;

- working with CVSA and state truck enforcement agencies to train inspectors in Mexico in an attempt to increase truck safety overall in that country;

- contracting with the International Association of Chiefs of Police to conduct a series of truck safety forums in the U.S. border states to allow U.S. and Mexican enforcement officials to discuss strategies and other truck safety issues of mutual concern; and

- participating with the Land Transportation Standards Subcommittee, established under NAFTA, to develop compatible safety and operating standards in all three NAFTA countries.

These initiatives have had mixed results. For example, MCSAP funding for activities related to NAFTA has resulted in a greater inspection presence at the border; however, the inspector training initiative was less successful. In this regard, DOT officials believe that one of the keys to ensuring that trucks from Mexico are safe is to have Mexico improve its truck inspection program so that more trucks are inspected there before traveling into the United States. However, U.S. efforts to fortify Mexico’s inspection program encountered problems. Beginning in 1991, DOT provided about $278,000 to train Mexican truck inspectors. From 1993 to 1995, about 285 Mexican inspectors received the necessary 2-week certification course. However, the lead U.S. trainer characterized these efforts as unsuccessful, since, as of late 1996, only about 50 of these inspectors were still employed by the Mexican truck inspection agency, and no regular truck inspection activity ever took place in Mexico as a result of this training. DOT is now prepared to provide additional funding (about $96,000 left from the first training effort and more, if needed) for further truck inspector training in Mexico.
To overcome one of the flaws of the first effort, which trained civilians who had limited authority to stop trucks along the roadside and issue citations, future training will be for Mexico’s Federal Highway Patrol officers, who will have the requisite authority (although, as with state truck inspectors in the United States, truck inspections will not be their sole duty). According to DOT officials, Mexico’s Federal Highway Patrol is the most stable enforcement agency in Mexico and therefore should not be affected by any economic or political changes in Mexico. DOT, again working with CVSA, had targeted the fall of 1996 to begin the new training. This target was not met, and DOT now expects the new training to begin in early 1997. DOT officials are negotiating with Mexican officials to obtain assurances that the newly trained inspectors will be used to conduct inspections along the border. Because of the delays in the federal effort and in order to develop working relationships with their Mexican counterparts, both Arizona and Texas state officials have begun negotiating with Mexico’s Federal Highway Patrol officials in adjacent Mexican border states to begin their own training efforts in those states. DOT officials told us that the intent of the training is that Mexican inspectors will inspect northbound trucks, that is, those trucks entering the United States, and that the first vehicles to be inspected will be those of carriers that have applied for the authority to operate in the four U.S. border states. They added, however, that trucks belonging to these carriers will be inspected regardless of the trucks’ destinations—either to the United States or within Mexico. Even if Mexico establishes a truck inspection program, DOT’s expectation of having Mexican officials inspect northbound trucks before they arrive in the United States may not be fully realized.
A high-level Mexican government official told us that the country’s emphasis in inspecting trucks will be on ones coming into Mexico rather than on northbound trucks leaving Mexico. Opportunities exist for DOT to work in partnership with the border states to develop performance-based, results-oriented enforcement strategies to, among other things, measure the progress being made by Mexican trucks in meeting U.S. safety regulations. These strategies, which would identify clearly what the states intend to accomplish, could be developed in cooperation with each border state considering the local conditions and resources available. Currently, under MCSAP, DOT sets broad national goals but allows states to define local problems, the approach to take in addressing them, and the resources to be employed. Our review of current MCSAP grant agreements with the border states (for both basic grants to carry out statewide enforcement plans and enforcement activities related to NAFTA) showed that while the states planned to use funds, in part, to increase their enforcement presence at the border, none of the grants specified the development of performance measures with goals for the results to be expected from truck safety inspections. As a result, as described earlier, DOT and others generally must rely on anecdotal and qualitative information. DOT has recognized the need to move toward performance-based goals for motor carrier safety. In February 1997 DOT announced that its program that provides grants for statewide safety enforcement activities will incorporate performance-based goals to increase truck and driver safety. Although funds for basic MCSAP grants will be distributed by formula, DOT plans to explore approaches to provide some form of incentive funding to states that meet national and state objectives for safety. DOT plans to implement this change in fiscal year 1998. 
Also, in March 1997, DOT submitted a legislative proposal, as part of the reauthorization of the Intermodal Surface Transportation Efficiency Act, that would incorporate this performance-based, results-oriented approach. California’s activities already include a results-oriented aspect: As described, the state has the goal of inspecting every truck from Mexico once during each 90-day period, though this is not specified by the state’s MCSAP grant. The strategy relies on providing CVSA inspection stickers for trucks passing level-1 inspections or correcting safety violations. A current inspection sticker means that a truck will not be subject to state or federal inspection, except in the case of an obvious equipment problem, for a 3-month period. On our recent trip to California’s truck inspection facilities at Otay Mesa and Calexico, we saw truck after truck crossing the scales of the inspection station with color-coded CVSA inspection stickers. Almost all the truck traffic we observed was repeat traffic, according to California inspection officials. It was easy to identify which trucks had been determined to be safe (those with current CVSA stickers), which trucks were due to be reinspected (those with outdated stickers), and which trucks had yet to be inspected (those without stickers). The majority of the truck traffic from Mexico at the five major border locations in Arizona and Texas is also of a repeat nature, according to state enforcement officials. In each of these states, enforcement officials told us that the state has the goal of signaling to Mexican carriers that it is serious in enforcing truck safety standards. Each state’s basic strategy to accomplish this goal is to increase the presence of state inspectors at major border locations to convince Mexican carriers to upgrade the safety of their trucks. However, Arizona and Texas have not established quantitative goals to help them measure the extent to which Mexican carriers are complying with U.S. 
safety regulations. In addition, since they conduct primarily level-2 truck inspections on the border, which cannot result in CVSA stickers, they have no way of identifying the trucks that have complied. As a result, the officials sometimes end up reinspecting recently inspected vehicles. A 1995 study conducted by the International Association of Chiefs of Police for DOT concluded that the lack of truck inspection facilities at the U.S.-Mexican border gives no assurance to interior states that trucks from Mexico will be screened for safety upon entering the United States. Furthermore, according to DOT, it does not have any discretionary funds available to the border states to build weight or inspection facilities. However, the states can use federal-aid highway funds apportioned to them for this purpose if they choose to do so. Historically, DOT has not taken an active role in planning with federal and state agencies to build or rehabilitate facilities whose functions might include truck safety enforcement. However, DOT has had opportunities to work with the General Services Administration (GSA) and the states to ensure that border facilities meet current and future needs for truck safety inspections. GSA has a process allowing all federal agencies that have a need to operate along the border to provide input during the preparations for new border stations. While DOT does not control this process, as an agency with a stake in safety enforcement at border crossing locations, it can choose to be an active participant. DOT has missed opportunities to ensure that the upgrading of U.S. Customs installations included space and facilities adjacent to or on Customs’ property for state and federal inspectors to perform truck safety inspections. For example, in 1995, DOT had the opportunity but did not participate in the coordinated federal effort to design a new Customs border crossing installation near McAllen, Texas. 
By not participating, DOT lost the opportunity to secure a truck inspection facility in the new installation. However, in late 1996, federal DOT officials in Texas did get involved in the planning phase for a proposed inspection facility, which envisioned renovating some unused Customs buildings at McAllen. Similarly, according to a GSA official, DOT indicated interest in having a portion of a new border crossing at Brownsville contain a protective canopy, scales, an area for vehicles transporting hazardous materials, and parking space for out-of-service vehicles (at a cost that GSA estimated at about $1 million). However, as of January 1997, when GSA was finalizing the design, DOT had not resumed discussions with the agency to provide input or commit funds for the project. As discussed earlier in this report, Arizona and Texas have not constructed truck inspection facilities. One reason given is money. Many state officials we spoke to believe that such facilities would cost as much as those in California and that the federal government should pay for them since NAFTA represents national interests. However, to achieve a marked improvement over the current conditions in Arizona and Texas, truck inspection facilities would not have to be on a scale with the $15 million facilities in California. Even facilities with minimal elements, such as a scale, a canopy, an inspection pit, and a small office, would represent vast improvements over the current situations in Arizona and Texas, which involve working outdoors in difficult climatic conditions. According to GSA and California Department of Transportation officials, such a truck inspection facility could be built for between $1 million and $2 million, excluding land costs. In addition to securing funds, another significant challenge is the need for large spaces for truck inspection facilities.
As pointed out by DOT’s September 1995 Best Practices Manual for Truck Inspection Facilities, a critical element is parking, where vehicles failing to comply with U.S. regulations can be detained and repaired. Three of Texas’ major border locations are in urban areas that lack space to park more than a few large trucks. While the Customs Service has generally allowed state and federal agencies to inspect trucks within its property, this may not always be the case, as the recent experience in Nogales shows. Since the available space at Customs facilities is limited, it is paramount in the long term that DOT be more involved in planning new additions to or replacements of major border installations. The March 1997 legislative proposal contains provisions for planning improvements within the trade corridor and at border crossings and establishing the Border Gateway Pilot Program. The proposal would authorize (1) planning funds for multistate and binational transportation and (2) funds for improvements to border crossings and approaches along the Mexican and Canadian borders. Under the proposal, funds provided for “border gateway” projects, such as constructing new inspection facilities, may be used as the nonfederal matching funds for other federal-aid highway funds, as long as the amount of the “border gateway” funds does not exceed 50 percent of a project’s total cost. A DOT official also told us that funds to help address these needs will be included in DOT’s fiscal year 1998 budget request. As of mid-March 1997, the full budget request had not been submitted to the Congress. DOT and the three border states in our review have acted to increase inspection activities at the border and in other ways to foster increased compliance with U.S. safety regulations by Mexican trucks. 
While Mexican trucks entering the United States continue to exhibit high out-of-service rates for serious safety violations, federal and state officials believe that their efforts have had a positive effect and that Mexican trucks are now safer than they were in 1995. However, there is no hard evidence on which to test this belief; much of the officials’ information is anecdotal. Compliance cannot be assessed at the border because results-oriented quantitative measures are not in place. We believe that DOT can improve commercial truck safety enforcement at the border by encouraging border states to set specific, measurable results-oriented enforcement strategies for truck inspections at border crossings and by assisting them in doing so. We recognize that each state has unique circumstances and that implementing results-oriented strategies would require that more level-1 inspections be conducted. DOT’s move to performance-based, results-oriented MCSAP grants for statewide safety enforcement activities is a large step in the right direction. However, unless discrete performance-based, results-oriented measures are developed specifically for Mexican trucks entering the United States, DOT will still possess only anecdotal information on the extent to which trucks from Mexico meet U.S. safety regulations. Because widespread concerns exist over whether trucks from Mexico comply with U.S. safety regulations, we believe that border-specific performance measures are needed. We also believe that DOT needs to be more proactive in securing inspection facilities at planned or existing border installations. We recognize that there are various reasons why facilities do not exist at some border locations and that in some instances a lack of funding or space or other reasons may preclude adding these inspection facilities. But DOT’s leadership in promoting and securing more permanent inspection facilities is needed to achieve more effective truck safety inspections at the border. 
DOT has submitted a legislative proposal, and DOT officials have indicated that a budget proposal will be submitted that will, in part, allow states to address concerns about the border infrastructure and safety. However, the prospects for enactment are unknown. In the meantime, DOT needs to be more active in the planning process for border installations to ensure that truck safety inspection facilities are included, where practicable. First, to measure progress by Mexican commercial truck carriers in meeting U.S. safety regulations, we recommend that the Secretary encourage the border states to develop and implement measurable results-oriented goals for the inspection of commercial trucks entering the United States from Mexico and assist them in doing so. We also recommend that the Secretary work actively with GSA, as part of GSA’s existing planning process, to ensure that truck safety inspection facilities are included, where practicable, when border installations are planned, constructed, or refurbished. We provided DOT with a draft of this report for its review and comment. To receive comments on the draft report, we met with a number of officials, including a senior analyst in the Office of the Secretary and the special assistant to the associate administrator in DOT’s Office of Motor Carriers. They said that, overall, they were pleased with the report’s contents and that the report accurately characterized DOT’s activities and other activities at the border. They offered a number of technical and clarifying comments on the draft report, which we incorporated where appropriate. The officials did not comment on the draft report’s recommendations. To achieve our three objectives, we reviewed inspection reports and truck traffic data and visited 13 border crossings, where about 92 percent of the trucks from Mexico enter the United States. At these locations, we observed trucking facilities and federal and state truck inspection activity. 
We discussed our work with and received documents from DOT officials; state truck enforcement officials in Arizona, California, New Mexico, and Texas; Customs Service officials; GSA officials; and representatives of private and university groups. We also met with or had telephone discussions with several local development groups, as well as with Mexican trucking officials. In addition, we talked with drivers of Mexican trucks. Finally, we participated in conferences held by CVSA, the American Trucking Associations, and the International Association of Chiefs of Police, where we discussed truck safety enforcement with high-level Mexican and Canadian officials. In certain instances, we compared truck safety inspection data from fiscal year 1995 with data from calendar year 1996, relying (for both data sets) on the most recent information DOT could provide. While we recognize that comparing same-year data would present a clearer picture, the lack of such data precluded us from doing so. This report deals primarily with truck safety enforcement at border locations and does not assess the progress on other issues surrounding NAFTA, such as efforts to develop compatible truck safety rules between signatory countries. We performed our work from March 1996 to February 1997 in accordance with generally accepted government auditing standards. This report is being sent to you because of your legislative responsibilities for commercial trucking. We are also sending copies of this report to the Secretaries of Transportation and the Treasury; the Administrator, FHWA; the Administrator, General Services Administration; the Director, Office of Management and Budget; and the Commissioner, U.S. Customs Service. We will make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-3650. Major contributors to this report were Marion Chastain, Paul Lacey, Daniel Ranta, James Ratzenberger, and Deena Richart. Phyllis F. 
Scheinberg
Associate Director, Transportation Issues

The Honorable Ted Stevens, Chairman
The Honorable Robert C. Byrd, Ranking Minority Member
Committee on Appropriations
United States Senate

The Honorable Richard C. Shelby, Chairman
The Honorable Frank R. Lautenberg, Ranking Minority Member
Subcommittee on Transportation and Related Agencies
Committee on Appropriations
United States Senate

The Honorable John McCain, Chairman
The Honorable Ernest F. Hollings, Ranking Minority Member
Committee on Commerce, Science, and Transportation
United States Senate

The Honorable Kay Bailey Hutchison, Chairman
The Honorable Daniel K. Inouye, Ranking Minority Member
Subcommittee on Surface Transportation and Merchant Marine
Committee on Commerce, Science, and Transportation
United States Senate

The Honorable John Chafee, Chairman
The Honorable Max Baucus, Ranking Minority Member
Committee on Environment and Public Works
United States Senate

The Honorable John W. Warner, Chairman
Subcommittee on Transportation and Infrastructure
Committee on Environment and Public Works
United States Senate

The Honorable Robert Livingston, Chairman
The Honorable David R. Obey, Ranking Minority Member
Committee on Appropriations
House of Representatives

The Honorable Frank R. Wolf, Chairman
The Honorable Martin Olav Sabo, Ranking Minority Member
Subcommittee on Transportation and Related Agencies
Committee on Appropriations
House of Representatives

The Honorable Bud Shuster, Chairman
The Honorable James L. Oberstar, Ranking Minority Member
Committee on Transportation and Infrastructure
House of Representatives

The Honorable Tom Petri, Chairman
The Honorable Nick Rahall, Ranking Minority Member
Subcommittee on Surface Transportation
Committee on Transportation and Infrastructure
House of Representatives
GAO reviewed the results of federal and state inspections of Mexican trucks entering the United States in 1996, focusing on: (1) actions by the federal government and border states to increase truck safety enforcement at the border; and (2) the federal enforcement strategy to ensure that trucks from Mexico comply with safety standards when entering the United States. GAO noted that: (1) from January through December 1996, federal and state officials conducted more than 25,000 inspections of trucks from Mexico; (2) on average each month, about 45 percent of the vehicles were placed out of service for serious safety violations, such as for having substandard tires or for being loaded unsafely; (3) this rate compares unfavorably to the 28 percent out-of-service rate for U.S. trucks inspected across the United States in fiscal year 1995; (4) however, because inspectors target for inspection those vehicles and drivers that appear to have safety deficiencies, their selections are not random; (5) as a result, the out-of-service rates may not necessarily reflect the general condition of all vehicles; (6) although border inspection officials believe that trucks from Mexico are safer than they were in late 1995, the monthly out-of-service rates for trucks from Mexico in 1996 ranged from 39 percent to 50 percent, with no consistent trend; (7) the border states of Arizona, California, and Texas have increased their capability to inspect trucks at major border locations; (8) collectively, the three states had 93 state truck inspectors assigned to border crossing locations as of January 1997; (9) in addition, the U.S. 
Department of Transportation (DOT) approved 13 new temporary positions (2-year appointments) to place federal safety inspectors at major border crossing locations; (10) California, with about 24 percent of the truck traffic from Mexico, opened two large permanent inspection facilities; (11) it has the most rigorous inspection program, with the goal of inspecting, at least once every 90 days, every truck entering the state from Mexico; (12) while both Texas and Arizona, collectively with more than three-quarters of the truck traffic from Mexico, have more than doubled the number of inspectors at border crossing locations, their efforts are less comprehensive; (13) under a broad strategy to help create a "compliance mind-set" for Mexican trucks crossing into U.S. commercial zones, DOT has undertaken a number of activities to promote truck safety; (14) in February 1997, DOT announced that its program that provides grants for statewide safety enforcement activities will incorporate performance-based goals to increase truck and driver safety; and (15) also, in March 1997, DOT submitted a legislative proposal to the Congress as part of the reauthorization of the Intermodal Surface Transportation Efficiency Act that would incorporate this initiative.
Six major domestic airlines have proposed alliances in 1998. These alliances are significant in scope, though they vary in the degree of integration they propose, and their details are still emerging. Together, the three alliances would control about 70 percent of domestic traffic, as measured by the number of passengers that board a plane—enplanements. Table 1 summarizes the size and characteristics of the proposed alliances. A key characteristic of two of the alliances is extensive code-sharing. According to officials at DOJ and DOT, code-sharing agreements are forms of corporate integration that fall between outright mergers, which involve equity ownership, and traditional arm’s length agreements between airlines about such things as how they will handle tickets and baggage. Continental Airlines and Northwest Airlines announced in January 1998 that they were entering into a “strategic global alliance” that would connect the two airlines’ route systems. Under this alliance, the airlines plan to code-share flights and include each of their respective code-share partners, such as America West, Alaska Airlines, and KLM Royal Dutch Airlines. In addition, the airlines will establish reciprocity between their frequent flier programs, which means that travelers who belong to both programs will be able to combine miles from both to claim an award on either airline. The airlines will also undertake other cooperative activities, including coordinating flight schedules and marketing. Certain aspects of the alliance agreement are contingent on the successful conclusion of negotiations with Northwest’s pilots’ union. Northwest plans to buy an equity share in Continental and place it in a voting trust. In April 1998, United Airlines and Delta Air Lines announced a tentative agreement to enter into a global alliance. The United-Delta alliance would be the largest alliance in terms of its market share of passengers, but it would have no exchange of equity. 
Under the terms of the agreement, the two airlines plan to engage in code-sharing arrangements, reciprocal frequent flier programs, and other areas of marketing cooperation. The alliance will be implemented on the airlines’ domestic routes and expanded internationally only after obtaining the concurrence of the airlines’ alliance partners and approval by governments, where applicable. Code-sharing on flights to Europe is not currently part of the plan for this alliance because of complex governmental and alliance issues, particularly linking two current competitors—Lufthansa and SwissAir—under the same alliance. According to airline officials, the code-sharing planned for the U.S. domestic markets will probably not occur before early 1999 and is contingent on the approval of pilots at both airlines. Also in April 1998, American Airlines and US Airways announced that they had agreed on a marketing relationship that would give the customers of each airline access to the other airline’s frequent flier program. In addition, the two airlines agreed to allow reciprocal access to all domestic and international club facilities and are working to make final arrangements to cooperate in other areas. The airlines expect to implement the linkages between the two frequent flier programs by late summer 1998. The alliance will also include code-sharing by the airlines’ regional partners, American Eagle and US Airways Express, and may seek broader code-sharing, pending pilots’ approval, at a later date. The chief executive officers of both airlines have also announced that if the other two alliances are implemented, they would seek a code-sharing arrangement as a competitive response. DOJ and DOT have somewhat different statutory authorities to review the proposed alliances. In 1989, DOT’s long-standing authority to review domestic mergers and alliances transferred to DOJ. 
DOJ’s Antitrust Division uses its authority under the Clayton, Sherman, and Hart-Scott-Rodino acts to examine domestic alliances in which a change in ownership or code-sharing occurs. If DOJ believes an alliance is anticompetitive in whole or part, it may seek to block the agreement in federal court. Alternatively, DOJ may negotiate a consent decree that would restructure the transaction to eliminate the competitive harm. DOJ has been reviewing the Northwest-Continental alliance proposal, which was announced in January 1998. In May 1998, DOJ indicated that it also is looking at the other two alliance proposals. DOT has stated that, later this year, it also intends to study the proposed alliances under its broader authority to maintain airline competition and protect against industry concentration and excessive market domination, as well as its specific authority to prohibit unfair methods of competition in the airline industry. It will coordinate with DOJ on the alliance reviews. DOT does not have prior approval authority over an alliance. On the basis of a recommendation from an administrative law judge, DOT could issue a cease-and-desist order. Alliances could benefit consumers by increasing the number of destinations and the frequency of flights available through each partner. The airlines believe that these increases will in turn attract new passengers, allowing them to offer more frequent flights, and, if demand is substantial, more new destinations. In an alliance that includes code-sharing, such as those proposed by United and Delta and Northwest and Continental, airline route networks are effectively joined, expanding possible routings by linking two different hub-and-spoke systems. 
The service provided through code-sharing replicates the “seamless” travel that would be provided by a single airline, known as “on-line service.” This type of service is generally preferred by airline passengers because it allows the convenience of single ticketing and check-in. Airlines have had interline agreements, which offer many of the same services, for some time. Interline agreements provide for the mutual acceptance by the participating airlines of passenger tickets, baggage checks, and cargo waybills, as well as establish uniform procedures in these areas. However, with on-line service, connecting flights between the two code-sharing airlines are shown in the computer reservation system as occurring on one airline. Officials for the airlines see advantages to on-line service for their customers. For example, with on-line service under the alliance proposed by United and Delta, airline passengers would be able to travel from Sioux Falls, South Dakota, to Bangor, Maine, on one airline’s code, even though neither airline currently serves the entire route between these two cities. In this example, a passenger could purchase a ticket from Delta and fly on a United plane from Sioux Falls to Chicago, then to Boston, and then, on a Delta flight, to Bangor. The passenger would earn Delta frequent flier miles for the entire trip. According to Northwest and Continental executives, their alliance would result in more than 2,000 new destinations that each airline could begin marketing as its own. The American-US Airways alliance plans to initially offer only limited code-sharing on regional airline flights, and not on each partner’s flights. In addition to new destinations, combining airlines’ hub-and-spoke route networks would also result in a substantial increase in the number of flight options that each airline could offer travelers to existing destinations. 
Airlines contend that these expanded service options may also attract new passengers, which would then allow the airlines to offer even more frequent flights and, if demand is substantial, more new destinations. Airline officials also note that additional routing options can create some better on-line connections by substituting one airline’s connection for its partner’s when the partner has closer connection times for the customer. This could reduce travel time for some travelers. However, this benefit may be limited. For example, through the proposed alliance, Northwest and Continental officials predict shorter travel times for about 250,000 passengers, or 0.3 percent of the 81.3 million passengers potentially affected in 1997. Critics of code-sharing point out that the practice is inherently deceptive because consumers may believe they are flying on one airline only to discover that they are on another airline’s flight and because code-sharing does not necessarily expand consumer choice. These critics charge that airlines take advantage of consumers’ preferences for on-line connections by making an interline code-share connection appear in computer reservation systems to be an on-line connection. Code-share flights also have the advantage of being listed more than once on computer reservation systems. For example, in our examination of flight listings for 17 international city-pairs, we found that 19 percent of the time code-share flights were listed at least three times (once under each airline’s code and once as an interline connection) on the first screen of the display, giving the partners a competitive advantage over other airlines operating on those routes. Even the former chairman of American Airlines and the current chairman of US Airways have reportedly called code-sharing deceptive for consumers, but both have said that they will also propose a code-sharing alliance as a competitive response if the other alliances are approved. 
In addition to the anticipated benefits of code-sharing, all three of the proposed alliances would offer their passengers reciprocal frequent flier benefits—that is, earning and using frequent flier points on either alliance partner—and the reciprocal use of club facilities. Airline officials believe that these reciprocal benefits would increase the value of frequent flier programs by allowing consumers to pool their points and choose from more destinations and frequencies. One critic counters, however, that unless the airlines substantially increase the number of seats available for use by frequent fliers, the additional demand created by combining the programs will reduce the availability of seats and therefore the value of the frequent flier programs. While the proposed domestic alliances may benefit consumers, they also have the potential to decrease competition in dozens of nonstop markets and hundreds more one- and multiple-stop markets because, even though the alliances are not mergers, they may reduce the incentive for alliance partners to compete with each other. Many longer routes that include one or more stops are currently the most competitive because they offer the greatest number of airlines from which consumers can choose. These same routes are likely to see the largest reduction in choices among totally unaffiliated airlines and, correspondingly, the greatest potential loss in competition. Our prior work on mergers in the 1980s showed that when such competition declines, airfares tend to increase. Unlike international alliances, which largely extend domestic airlines’ route networks into areas that they could not enter by themselves, the networks of the domestic airlines generally overlap to a much greater extent, and therefore the proposed alliances pose a greater threat to competition. 
Because travel to and from small and medium-sized cities usually involves a stop at one or more hubs, travelers to and from these cities potentially face reduced competition and higher fares. Existing operating barriers, such as constraints on the number of available takeoff and landing slots, are likely to make any increases in concentration problematic because such barriers reduce the likelihood that other airlines will be able to enter the market and provide a competitive response. The proposed alliances could harm consumers because they may reduce the incentive for alliance partners to compete with each other. If this were to happen, airfares would likely increase and service would likely decrease. We analyzed 1997 data on the 5,000 busiest domestic airport-pair origin and destination markets—markets for air travel between two airports—to determine how these markets could be affected by the proposed alliances. We found that if the airlines do not continue to compete on prices, the number of independent airlines could decline in 1,836 of these 5,000 markets, possibly affecting the fares paid by nearly 101 million passengers out of a total of 396 million passengers. For example, the number of effective competitors between Detroit Metro Wayne County Airport and Newark International Airport would decline from two to one if Northwest and Continental do not compete with each other. In 1997, this reduction in competition would have affected the roughly 429,000 passengers who traveled on that route. While the airlines have said that their alliances have relatively few nonstop routes that overlap, these routes often serve many passengers. For example, even though the proposed alliance between United and Delta has only 34 nonstop routes that overlap, the two airlines carry about 9.7 million passengers per year on these routes. 
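The kind of market analysis described above can be sketched in a few lines of code: for each airport-pair market, alliance partners are collapsed into a single competitor, and the count of independent airlines before the alliances is compared with the count after. This is purely an illustration under invented data, not GAO's actual methodology; the sample markets, the placeholder carriers AirlineX and AirlineY, and the alliance groupings are assumptions made for demonstration.

```python
# Illustrative sketch only -- the markets, carrier lists, and alliance
# groupings below are invented for demonstration purposes.

# Hypothetical airport-pair markets and the airlines serving each one.
markets = {
    ("DTW", "EWR"): ["Northwest", "Continental"],
    ("ATW", "DCA"): ["Delta", "United", "AirlineX", "AirlineY"],
}

# Hypothetical alliance groupings: partners count as a single competitor.
alliances = {
    "Northwest": "NW-CO", "Continental": "NW-CO",
    "United": "UA-DL", "Delta": "UA-DL",
}

def competitors(airlines, grouped=False):
    """Count distinct competitors in a market; if grouped, collapse
    alliance partners into one competitor."""
    if grouped:
        return len({alliances.get(a, a) for a in airlines})
    return len(set(airlines))

# Markets where the number of independent competitors would decline.
affected = [m for m, a in markets.items()
            if competitors(a, grouped=True) < competitors(a)]

for m in sorted(affected):
    before = competitors(markets[m])
    after = competitors(markets[m], grouped=True)
    print(m, before, "->", after)
```

In the Detroit-Newark example from the report, treating Northwest and Continental as a single competitor drops the count from two to one, which is how a market would enter the tally of potentially affected markets.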
Moreover, we believe that it is important to focus on the alliances’ potential harm to competition in the hundreds of additional one-stop and two-stop markets that have overlapping routes. These routes account for most of the 1,836 markets that could be negatively affected by the proposed alliances. In our prior work on the TWA-Ozark merger, we found that after the merger, the total number of cities with direct service declined and competition decreased in many markets. The number of routes served by two or more airlines fell by 44 percent, and fares increased between 7 and 12 percent in constant dollars within 1 year. To the extent that the proposed alliances tend to behave as a single entity, similar results could occur. In contrast to this potential for harm to consumers, competition could increase in 338 of the 5,000 largest markets, affecting about 30 million passengers per year, according to our analysis of 1997 data. In these markets, two alliance partners that individually have a market share of less than 5 percent would combine to form a potentially more effective competitor against other airlines on these routes. However, the markets where this could occur are substantially fewer in number, and serve substantially fewer passengers, than the markets where consumers could be harmed by the proposed alliances. Table 2 summarizes the market and passenger information for the proposed alliances. In our prior work, we stated that some international alliances may bring benefits to passengers because international and domestic airlines are able to extend their networks. However, domestic alliances are more likely than international alliances to cause concerns about competition because they often have many more overlapping routes. In a typical international alliance, a domestic airline with a domestic route network will form an alliance with a foreign airline that has a route network in its home territory. 
These alliances frequently contain only a few routes where the networks overlap on either a nonstop or a one-stop basis. As a result, these alliances can benefit consumers by extending the route structure for both airlines without posing a threat to competition on overlapping routes. For example, prior to the alliance between Northwest Airlines and KLM, those airlines had only two nonstop routes that overlapped, and because neither airline had a route network in the home territory of the other, there was no significant overlap of one-stop routes. In contrast, domestic airlines’ route networks tend to overlap much more. As a result, domestic alliances are potentially more harmful to consumers because competition could decline on many more routes. Service to and from small and medium-sized cities may also be harmed because the number of competing airlines would likely decline in many cases. Most routes to and from these cities involve changing planes at one or more hubs. The number of effective competitors may decline in these markets when such passengers have more than one choice of hub airports. For example, currently, four airlines travel between Appleton, Wisconsin, and Reagan Washington National Airport. Two of those airlines are Delta and United. If these airlines were to compete less because of their alliance, passengers traveling between these two cities could be harmed. Barriers that restrict entry at key airports may increase the potential for harm from the proposed alliances because they remove the threat that high fares or poor service will attract competition from established or new entrant airlines. As we have reported in the past, barriers such as slot controls—limits on the number of takeoffs and landings—at four airports in Chicago, New York, and Washington, D.C., and long-term exclusive-use gate leases at six additional airports have led to higher fares on routes to and from these airports. 
Such barriers make entry at those airports difficult because the incumbent airlines frequently control access to the airport’s gates. Nonincumbent airlines generally would have to sublease gates from the incumbent airline, often at less preferable times and at a higher cost than the incumbent pays. At two of the four slot-controlled airports—New York’s LaGuardia and Washington’s Reagan National—the levels of concentration by the existing dominant airline would increase substantially following the alliance. The increase at Chicago’s O’Hare and New York’s Kennedy, on the other hand, would be much more modest. Similarly, with the six airports that are gate-constrained, because the dominant airlines already control such large percentages of the available gates, the increases in concentration that would occur following the alliances are also relatively small, averaging less than 2 percent. (See table 3.) To the extent that there is an increased concentration of slots and gates, entry may become more difficult, which would further limit competition on routes to and from these airports and likely lead to higher airfares. Our previous work has shown that airlines that dominate traffic at an airport generally charge higher fares than they do at airports that they do not dominate. We have also reported that several airlines’ sales and marketing practices may make competitive entry more difficult for other airlines. Practices such as airlines’ frequent flier plans and special travel agent bonuses for booking traffic on an incumbent airline encourage travelers to choose one airline over another on the basis of factors other than the best fares. Such practices may be most important if an airline is already dominant in a given market or markets. 
Together, operating and marketing barriers increase the likelihood that increases in concentration will harm consumers by discouraging entry by other established or new entrant airlines, thus allowing airlines to raise their fares or reduce services. Many dimensions of each of the proposed alliances deserve close scrutiny so that decisionmakers can assess whether the potential benefits of each particular alliance outweigh its potential harmful effects. Though not an exhaustive list, we believe analysis of several key issues will help determine the extent to which each of the proposed alliances may be beneficial or detrimental, overall, to consumers. These key issues are how substantial the benefits to consumers may be, whether incentives to compete are retained, what the potential impact of the proposed alliances on certain classes of consumers and certain communities is, how international travel may be affected, and what the overall implications of the proposed alliances for competition may be. First, DOJ and DOT need to scrutinize each alliance’s claims about the benefits each brings to the public, including the underlying assumptions that each alliance is using to estimate consumer benefits. Some of the estimated increases in traffic may depend on questionable assumptions about how much new traffic can be generated by marginal additions in the frequency of flights and the number of destinations or about how many additional travelers will choose to fly to destinations through a code-sharing arrangement that is currently available through an interline connection. In addition, DOT and DOJ need to assess the competitive response by other airlines or other alliances to determine how much new traffic may be generated rather than how much passengers shift from one airline or alliance to another. Second, it is important for decisionmakers to examine the issue of whether each alliance’s partners will continue to compete with one another on price. 
The amount of competition may vary by alliance. Officials with United, Delta, Northwest, and Continental told us that, because the airlines will remain separate companies, they expect to set prices independently and thus compete for each passenger. The three alliances have not specifically explained their financial arrangements or how they will ensure that price competition will be preserved. If the six airlines do compete vigorously on pricing, then this competition may alleviate many of the concerns about whether consumers would be harmed by dominant airlines in particular markets using their monopoly power to raise fares. On the other hand, if the alliances reduce incentives to compete on prices, then DOJ and DOT will need to carefully examine the overlap in the alliance partners’ route structures and assess whether an alliance would create a significant number of routes with less, or no, competition. Determining the incentives will, at a minimum, likely require a review of the exact terms of the alliances’ agreements, which may be contained in proprietary documents that DOJ and DOT have access to. We also believe that a number of other issues will be important for DOT and DOJ to analyze in their reviews of these proposed alliances. These include the following: The potential impact of the proposed alliances on certain classes of consumers and certain communities. Some business travelers have recently complained about fare increases, and consumers from some small and medium-sized communities have not experienced the lower fares and/or improved services that deregulation has delivered to other parts of the country. It will be important for policymakers to determine whether these alliances could exacerbate or ameliorate these fare and/or service problems. The impact each alliance could have on consumers who travel internationally. 
Both of the code-sharing alliances have indicated that eventually they would like to include their international partners, thereby allowing them to offer improved service to international destinations through such benefits as new service, increased flight frequency, and better connections. International code-sharing alliances are a way of opening foreign markets to U.S. airlines that otherwise would not be able to serve these markets because of restrictions in the bilateral agreements that govern service between countries. Northwest, United, and Delta have international strategic alliances that not only feature code-sharing and other types of integration but that also have immunity from U.S. antitrust laws. This immunity has been granted in the framework of Open Skies agreements, whereby all bilateral restrictions are eliminated. We have found that partners in these strategic code-sharing agreements have had increased traffic and revenues, and that passengers benefit through decreased layover times. However, we also have found that insufficient data exist to determine whether consumers are paying higher or lower fares as a result of the alliances and what effect the alliances will have on competition and fares in the long term. Given the increasing size and scope of the alliances’ international reach, the questions we raised in our earlier report about the alliances’ effect on fares and competition could become even more urgent. The potential sources of new competition if any combination, or all, of the alliances move forward. As we mentioned earlier, the three alliances would represent about 70 percent of the domestic aviation industry. Other industries, such as automobiles, have been similarly dominated by a few firms. That industry was widely regarded as not being competitive until new sources of competition emerged from outside the domestic industry. 
As we noted in our previous work, new airlines may be at a disadvantage in competing with the large alliances because of the incumbents’ large route networks and other barriers resulting from their marketing practices and slot and gate constraints at major U.S. airports. Should any combination, or all three, of the alliances go forward, there may be considerable uncertainty about the ability of new airlines to compete in many markets. The same may hold true for existing U.S. airlines that lack alliance partners, whether they are older, established airlines, such as Trans World Airlines, or new entrant airlines, like Frontier. Mr. Chairman, this concludes my prepared statement. Our work was conducted in accordance with generally accepted government auditing standards. To provide data for this testimony, we contracted with Data Base Products, Inc. Data Base Products, Inc., used information submitted by all U.S. airlines to DOT for 1997 and produced various tables to our specifications. Data Base Products, Inc., makes certain adjustments to these data to correct for deficiencies, such as those noted by the DOT’s Office of the Inspector General. We did not review the company’s specific programming but did discuss with company officials the adjustments that they make. We also interviewed officials with DOT, DOJ, and each of the six major airlines contemplating domestic alliances. We would be pleased to respond to any questions that you or any Member of the Subcommittee may have. Domestic Aviation: Service Problems and Limited Competition Continue in Some Markets (GAO/T-RCED-98-176, Apr. 23, 1998). Aviation Competition: International Aviation Alliances and the Influence of Airline Marketing Practices (GAO/T-RCED-98-131, Mar. 19, 1998). Airline Competition: Barriers to Entry Continue in Some Domestic Markets (GAO/T-RCED-98-112, Mar. 5, 1998). Domestic Aviation: Barriers Continue to Limit Competition (GAO/T-RCED-98-32, Oct. 28, 1997). 
Airline Deregulation: Addressing the Air Service Problems of Some Communities (GAO/T-RCED-97-187, June 25, 1997). International Aviation: Competition Issues in the U.S.-U.K. Market (GAO/T-RCED-97-103, June 4, 1997). Domestic Aviation: Barriers to Entry Continue to Limit Benefits of Airline Deregulation (GAO/T-RCED-97-120, May 13, 1997). Airline Deregulation: Barriers to Entry Continue to Limit Competition in Several Key Domestic Markets (GAO/RCED-97-4, Oct. 18, 1996). Domestic Aviation: Changes in Airfares, Service, and Safety Since Airline Deregulation (GAO/T-RCED-96-126, Apr. 25, 1996). Airline Deregulation: Changes in Airfares, Service, and Safety at Small, Medium-Sized, and Large Communities (GAO/RCED-96-79, Apr. 19, 1996). International Aviation: Airline Alliances Produce Benefits, but Effect on Competition Is Uncertain (GAO/RCED-95-99, Apr. 6, 1995). Airline Competition: Higher Fares and Less Competition Continue at Concentrated Airports (GAO/RCED-93-171, July 15, 1993). Computer Reservation Systems: Action Needed to Better Monitor the CRS Industry and Eliminate CRS Biases (GAO/RCED-92-130, Mar. 20, 1992). Airline Competition: Effects of Airline Market Concentration and Barriers to Entry on Airfares (GAO/RCED-91-101, Apr. 26, 1991). Airline Competition: Industry Operating and Marketing Practices Limit Market Entry (GAO/RCED-90-147, Aug. 29, 1990). Airline Competition: Higher Fares and Reduced Competition at Concentrated Airports (GAO/RCED-90-102, July 11, 1990). Airline Deregulation: Barriers to Competition in the Airline Industry (GAO/T-RCED-89-65, Sept. 20, 1989). Airline Competition: Fare and Service Changes at St. Louis Since the TWA-Ozark Merger (GAO/RCED-88-217BR, Sept. 21, 1988). Competition in the Airline Computerized Reservation Systems (GAO/T-RCED-88-62, Sept. 14, 1988). Airline Competition: Impact of Computerized Reservation Systems (GAO/RCED-86-74, May 9, 1986). 
Airline Takeoff and Landing Slots: Department of Transportation’s Slot Allocation Rule (GAO/RCED-86-92, Jan. 31, 1986). Deregulation: Increased Competition Is Making Airlines More Efficient and Responsive to Consumers (GAO/RCED-86-26, Nov. 6, 1985).
GAO discussed the potential impact of the alliances proposed by the nation's six largest airlines, focusing on the competitive implications of the proposed alliances, including: (1) their potential benefits to consumers; (2) their potential harm to consumers; and (3) the issues that policymakers need to consider in evaluating the net effects of the proposed alliances. GAO noted that: (1) the primary potential benefits of the proposed alliances for consumers, according to airline officials, are the additional destinations and frequencies that occur when alliance partners join route networks by code-sharing; (2) with code-sharing, an airline can market its alliance partner's flights as its own and, without adding any planes, increase the number of destinations and the frequency of the flights it can offer; (3) airline officials also predict that increased frequencies and connection opportunities will spur additional demand, allowing for even more frequent flights and additional destinations; (4) the primary source of potential harm to consumers from the proposed alliances is the possibility that they will reduce competition on hundreds of domestic routes if the alliance partners do not compete with each other or compete less vigorously than they did when they were unaffiliated; (5) GAO analyzed 1997 data on the 5,000 busiest domestic airport-pair origin and destination markets--markets for air travel between two airports--to determine how these markets could be affected by the proposed alliances; (6) if all three alliances occur, GAO found that the number of independent airlines could decline on 1,836 of the 5,000 most frequently traveled domestic airline routes and potentially reduce competition for about 100 million of the 396 million domestic passengers per year; (7) in weighing the net effects of the proposed alliances, policymakers in the Department of Justice and the Department of Transportation have a difficult task because each alliance varies in its level of 
integration and in the scope and breadth of the combined networks; (8) however, GAO believes that if several key issues are addressed, policymakers will be better able to determine whether an alliance benefits consumers overall; (9) the first issue is whether airline partners' assumptions concerning the additional traffic and other benefits generated by the alliance are realistic; (10) second, it will be critical to determine if an alliance retains or reduces incentives for alliance partners to compete on price; and (11) if an alliance agreement reduces the incentives for partners to compete with fares in markets they both serve, then policymakers may want to examine the overlap in the alliance partners' route structures to determine whether that alliance would lead to a significant number of routes with fewer independent airlines.
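The scale of the potential overlap described in GAO's analysis can be checked with simple arithmetic. A minimal sketch using only the figures stated above (1,836 of the 5,000 busiest domestic routes; about 100 million of 396 million annual domestic passengers):

```python
# Figures from GAO's analysis of 1997 data on the 5,000 busiest domestic markets.
affected_routes, total_routes = 1_836, 5_000
affected_passengers, total_passengers = 100e6, 396e6

route_share = affected_routes / total_routes
passenger_share = affected_passengers / total_passengers

# Roughly 37 percent of the busiest routes and 25 percent of passengers
# could see fewer independent airlines if all three alliances proceed.
print(f"Share of busiest routes potentially affected: {route_share:.0%}")
print(f"Share of passengers potentially affected: {passenger_share:.0%}")
```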
USCIS is responsible for establishing immigration services policies and priorities and for the administration of immigration and naturalization adjudication functions. Its approximately 16,000 federal and contractor employees work in 250 offices worldwide, including field offices, application support centers, service centers, asylum offices, national customer service call centers, and forms centers. About 6 million applications and petitions for immigration and naturalization benefits, along with applicable fee payments, are submitted to USCIS annually. USCIS adjudicators process applications and petitions in four general categories: family-based petitions—for close relatives to immigrate, gain permanent residence, or work in the United States; employment-based petitions—for current and prospective employees to immigrate to or stay in the United States temporarily; asylum and refugee applications—for those seeking asylum or refugee status; and naturalization applications—for those who wish to become United States citizens. USCIS is authorized to collect fees for providing adjudication and naturalization services at a level that will (1) ensure recovery of the full costs of providing all such services, including the costs of similar services provided without charge to asylum applicants, and (2) recover any additional costs associated with the administration of the fees collected. In 1968, the Immigration and Naturalization Service (INS), USCIS’s predecessor agency, began charging fees for immigration and naturalization services, depositing the fee collections into the General Fund of the Treasury as miscellaneous receipts. In 1988, Congress established the IEFA. Since 1989, application fees have been deposited in the IEFA, which is currently USCIS’s primary source of funding. Once deposited in the IEFA, the fees remain available to USCIS until expended. 
In fiscal year 1991, Congress directed that the IEFA also be used to fund the cost of asylum and refugee services and adjudication services provided to some immigrants at no charge, and thus, the charges for fee-paying applicants were increased to recover these costs. In December 2000, Congress authorized the establishment of a premium processing program for employment-based applications. INS proposed the establishment of a premium processing service to provide businesses with a high level of customer service and improved processing because it could not otherwise meet the demand for expeditious service to the business community without an adverse impact on other applications. According to the INS Commissioner, an optional additional fee of $1,000 for business customers would also make capital available for infrastructure improvements. In response, Congress authorized the premium processing service; set the fee at $1,000, which is to be paid in addition to the regular application fee; and specified that it be used to provide accelerated processing services to business customers and to make infrastructure improvements in the adjudications and customer service processes. The premium processing service is to provide processing of certain employment-based petitions and applications within 15 calendar days of receipt of the request for premium processing service form. Premium processing fees also are deposited in the IEFA to be available until expended. Currently, funding for USCIS comes from three fee accounts, direct appropriations, and reimbursements from other federal agencies. Table 1 shows the five funding sources and the amounts estimated for fiscal year 2008. 
In past years, USCIS received larger amounts of direct appropriations to pay for administrative costs and for specific projects, such as the initiative to reduce its backlog of pending applications and its business transformation program to make long-term improvements in its business processes and technology. USCIS also received direct appropriations for two other specific programs—SAVE in fiscal year 2007 and E-Verify in fiscal years 2007 and 2008. The three fee accounts provide almost 96 percent of USCIS’s total fiscal year 2008 budgetary resources. The fee review that is the subject of this report covers the application fees deposited in the IEFA, which is budgeted to provide approximately 94 percent of USCIS’s funding for fiscal year 2008. If the number of applications and fee payments received in a year is more than projected, USCIS can increase its spending authority to cover the increased workload, and thus get additional funds from the IEFA through reprogramming after it has notified Congress. Persons seeking immigration and naturalization benefits submit their applications and associated fees to one of four service centers, local offices within one of four regions, or one of two lockbox facilities, depending on the application type and geographic location of the applicant. Some applications can be filed electronically with payment by credit or debit card or electronic transfer of funds. In addition, some applications require biometric services—collecting information such as fingerprints, photographs, and signatures for background checks conducted by the Federal Bureau of Investigation (FBI)—for which a separate fee is charged. The four service centers (located in California, Nebraska, Texas, and Vermont), the National Benefits Center (located in Missouri), and many local offices receive, process, and adjudicate applications and petitions for immigration benefits. 
Contract employees generally are responsible for preadjudication processing steps, such as mail room operations, fee collection, data collection, and file operations. USCIS employees adjudicate the applications, that is, they make determinations about whether to approve the benefits for which an applicant has applied. Lockbox facilities located in Chicago and Los Angeles receive certain application types and the related fee payments. Banks that operate the lockbox facilities are designated by Treasury’s Financial Management Service (FMS) as financial agents of the United States. They perform services for USCIS under a memorandum of understanding between the lockbox facility, FMS, and USCIS. Lockbox facilities are responsible for mail room operations, data entry, fee collection, setting up files, and sending the files to a service center or the National Benefits Center for adjudication. In 2007, in consultation with USCIS, FMS designated the bank that operates the Chicago lockbox as the financial agent responsible for all USCIS fee collections. USCIS is in the process of moving fee collection and other preadjudication processing activities from the service centers and field offices to lockbox facilities operated by that bank by March 2011. Table 2 shows the amount and percentage of regular application fees and premium processing fees collected by site during fiscal year 2007. During fiscal year 2008, USCIS collected over $2.4 billion in regular application fees and premium processing fees for the IEFA. The fiscal year 2007 fee collections reflect an increase in application fees that occurred at the end of July 2007 and an increase in the number of application filings prior to the fee increase date by persons wanting to avoid higher fees. Application filings also increased because of the publication of a State Department Visa Bulletin announcing the availability of employment-based visas. 
Fees increased by a weighted average of 86 percent per application, based on the fee review completed by USCIS in February 2007. (See app. II for a list of applications and their related fees.) USCIS’s prior fee review was completed in November 1996 and resulted in fee increases that took effect in 1998. Since then, USCIS has increased fees four times (including the July 2007 increase). Table 3 shows the amount and basis for those fee increases. To perform its most recent fee review, USCIS used a commercial off-the- shelf software application to assign the costs of its immigration application processing and adjudication services. Management’s objective was to set fees that would recover funds sufficient to cover USCIS’s costs. USCIS officials told us that when developing the methodology used in the fee review, management anticipated some skepticism about the fairness of the resulting fees, so its objectives also included using methods that would distribute costs among the various application types fairly and in a way that could be readily understood by fee payers and others. The data USCIS used for its fee review came from a variety of sources. Financial data USCIS used were from USCIS’s fiscal year 2007 budget adjusted for inflation and USCIS estimates of the cost of enhancements needed to meet its responsibilities. The nonfinancial information USCIS used included historical data on application completion rates—the average time it takes to complete adjudication of an application—from its own Performance Analysis System (PAS) and the number of applications USCIS estimated it would receive each year in fiscal years 2008 and 2009. USCIS estimated that the number of applications to be received in fiscal years 2008 and 2009 would be 5.577 million per year, including applications for which no fee is charged. Fee-paying application volume was estimated to be 4.742 million yearly. 
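The 86 percent figure cited above is a volume-weighted average: each application type's percentage increase counts in proportion to its share of filings, rather than all types counting equally. A minimal sketch of that computation, using hypothetical fees and volumes (the numbers below are illustrative only, not USCIS's actual per-application figures, which are listed in app. II):

```python
# Hypothetical application types with old/new fees and projected volumes.
# Illustrative values only; they are not USCIS's actual figures.
applications = [
    {"type": "naturalization",       "old": 330, "new": 595, "volume": 700_000},
    {"type": "adjustment of status", "old": 325, "new": 930, "volume": 500_000},
    {"type": "worker petition",      "old": 190, "new": 320, "volume": 300_000},
]

total_volume = sum(a["volume"] for a in applications)

# Weight each type's percentage increase by its share of total filings.
weighted_increase = sum(
    (a["new"] - a["old"]) / a["old"] * a["volume"] for a in applications
) / total_volume

print(f"Weighted average fee increase: {weighted_increase:.0%}")
```

With real fee and volume data for every application type, the same calculation yields the 86 percent weighted average reported by USCIS.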
USCIS also estimated that 2.196 million of the fee-paying applications would require biometric services, for which an additional fee is charged. USCIS also estimated the total annual cost of processing and adjudicating immigration applications for fiscal years 2008 and 2009. USCIS started with the fiscal year 2007 IEFA budget adjusted for costs that will not recur after fiscal year 2007, adjusted for inflation for fiscal years 2008 and 2009, and added the estimated cost of additional requirements USCIS determined it needed to enhance service, security, and infrastructure. USCIS’s resulting estimated cost for processing and adjudicating immigration benefit applications for fiscal years 2008 and 2009 is $2.329 billion, as shown in table 4. The additional requirements of $524.3 million, shown in table 4, represent staff, equipment, training, and other costs that were not previously funded and that USCIS determined it needed to improve its capabilities to meet its mission responsibilities. These additional requirements include service enhancements ($134.8 million), such as staff and training, which according to USCIS, would provide a small surge production capacity to give it the flexibility to adapt to temporary increases in filings and the ability to incrementally work to marginally shorten processing times and improve service delivery over time; security and integrity enhancements ($152 million), such as establishing a second card production facility to support day-to-day production and to comply with federal standards for contingency planning to ensure that critical systems remain available in the event of catastrophic failure; humanitarian program enhancements ($14 million) to fully fund the Cuban Haitian Entrant Program; and infrastructure enhancements ($223.4 million), such as strengthening administrative support activities and upgrading and maintaining the information technology environment to sustain current operations. 
As discussed in the next section (see also fig. 1), USCIS classified its total estimated costs as either direct or overhead. According to USCIS, direct costs were assigned to field offices or activities based on an identified relationship between the cost component and the field office or activity. Overhead costs were allocated to field offices primarily based on the number of FTEs assigned to each field office. The field office costs along with their allocated overhead costs were assigned to eight application processing activities and distributed as shown in table 5. The activity costs include USCIS-estimated costs for asylum and refugee services ($191 million), fee waivers and exemptions ($150 million), and biometric services ($174 million). These costs were allocated to the applications separately as described below. USCIS assigned the remaining $1.814 billion of activity costs to the expected 4.742 million fee-paying applications, resulting in a processing cost for each application type. USCIS then allocated the costs of asylum and refugee services and fee waiver and exemptions in equal dollar amounts as surcharges to each application’s processing cost to arrive at a total cost for each application type. USCIS set each application fee based on this total cost. The $174 million of “capture biometrics” activity costs were allocated evenly to the estimated 2.196 million fee-paying applications, resulting in a separate and equal biometrics fee for each application that would require biometric services. (See app. II for a list of applications and fees.) Figure 1 illustrates the cost assignment process. Although the July 2007 fee increases met management’s objective to set fees at a level to recover USCIS’s estimated costs of immigration application processing and adjudication services, the costing methodology USCIS used to develop the fees for each application type did not consistently adhere to federal accounting standards and principles and other guidance. 
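The per-application arithmetic implied by the allocation described above can be reproduced directly from the report's totals. A simplified sketch (the split of the $1.814 billion across individual application types is collapsed to a single average here, whereas USCIS assigned different processing costs to each type):

```python
# Totals from USCIS's fiscal year 2008-2009 estimates, per the report.
activity_costs       = 2.329e9  # total estimated annual cost
asylum_refugee_costs = 191e6    # allocated as an equal surcharge per application
fee_waiver_costs     = 150e6    # allocated as an equal surcharge per application
biometrics_costs     = 174e6    # allocated only to applications needing biometrics

fee_paying_apps = 4.742e6       # projected annual fee-paying applications
biometric_apps  = 2.196e6       # subset requiring biometric services

# Remaining activity costs assigned to fee-paying applications ($1.814 billion).
processing_costs = (activity_costs - asylum_refugee_costs
                    - fee_waiver_costs - biometrics_costs)
avg_processing_cost = processing_costs / fee_paying_apps  # average only; actual costs vary by type

# Equal-dollar surcharge added to every fee-paying application's processing cost.
surcharge = (asylum_refugee_costs + fee_waiver_costs) / fee_paying_apps

# Separate flat fee for applications requiring biometric services.
biometrics_fee = biometrics_costs / biometric_apps

print(f"Average processing cost per application: ${avg_processing_cost:,.0f}")
print(f"Surcharge per application:               ${surcharge:,.0f}")
print(f"Flat biometrics fee:                     ${biometrics_fee:,.0f}")
```

The residual matches the $1.814 billion of activity costs stated in the text, and the biometrics division yields the single flat fee charged to every application requiring those services.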
While federal accounting standards allow flexibility for agencies to develop managerial cost accounting practices that are suited to their specific needs and operating environments, they also provide certain specific guidance based on sound cost accounting concepts. USCIS officials told us that when developing the methodology, management anticipated some skepticism about the fairness of the resulting fees, so its objectives also included using methods that would distribute costs among the various application types fairly and in a way that could be readily understood by fee payers and others. USCIS’s methodology was not consistent with federal accounting standards and principles and other guidance in the following aspects: (1) unreimbursed costs paid by other federal entities on behalf of USCIS were not included in USCIS’s estimates of total costs, (2) key assumptions and methods used for allocation of costs to activities and types of applications were not sufficiently justified, (3) assumptions about staff time spent on various activities were not supported by documented rationale or analysis, (4) the cost of premium processing services was not determined, and (5) documentation of the processes and procedures was not sufficient to ensure consistent and accurate implementation of the methodology. Because of these inconsistencies, USCIS cannot support the reasonableness of the cost assignments to the various application types. USCIS did not include the costs of lockbox services paid by Treasury or certain retirement benefits to be paid by OPM in estimating the full cost of its immigration application processing and adjudication services. USCIS officials told us that they only included costs that the agency paid directly, which met management’s objective to set fees that recover funds sufficient to cover USCIS’s costs. 
However, according to federal accounting standards, each entity’s full cost also should incorporate the cost of goods and services that it receives from other entities free of charge. This is especially important for executive agencies when the costs constitute inputs to government goods or services provided to nonfederal entities for a fee or user charge because executive agencies should recover the full costs. OMB Circular No. A-25, which provides guidance for executive agencies on assessment of user charges, states that when not in conflict with the fee-authorizing statutes, and unless the agency has been granted an exception to the general policy, fees should be sufficient to recover the full cost of providing a service, which includes all costs to any part of the federal government, including accrued retirement costs not covered by employee contributions and the costs of collection. In other words, costs should be included regardless of which agency pays them. Congress authorized, but did not require, USCIS to recover full costs. The document USCIS prepared to describe its methodology stated that it adhered to the principles in OMB Circular No. A-25, but in the final rule for the fee adjustment, USCIS also made clear that its fees would recover only the costs of USCIS’ operations. Although not inconsistent with the legislation authorizing the fees, the scope of the costs to be recovered by USCIS, as announced in USCIS’s final rule, is inconsistent with the general guidance in OMB Circular No. A-25. USCIS did not document in its final rule, which it indicated OMB had reviewed, or elsewhere the rationale for excluding non-USCIS costs from its fee review, including whether or how it considered the executive branch policy in OMB Circular No. A-25 on recovering full costs and other factors. At the time of the fee review, USCIS had not estimated the cost of lockbox services provided by Treasury’s FMS. 
At our request, FMS provided information showing that it compensated two lockbox providers $20 million for services provided to USCIS in fiscal year 2007. USCIS included $2 million of these lockbox costs in the estimate of its costs of immigration application processing and adjudication services. According to a USCIS official, these costs were related to changes the lockbox facility had to make to its systems controls, for example, to accommodate additional data elements required by USCIS in application forms. Although the $18 million lockbox costs excluded is not a material amount in relation to the current $2.329 billion total costs estimated by USCIS, this lockbox cost is expected to grow significantly in the future as USCIS executes its plan to move the preadjudication processing activities, including fee collection, from service centers and field offices to lockbox facilities by March 2011. In fiscal year 2007, approximately 23 percent of fee collections were at the lockbox facilities, while almost 68 percent were at service centers. Legislative authority exists for FMS payment for lockbox services. FMS pays lockbox costs pursuant to its responsibility for collecting revenues coming into the federal government and for providing specialized receipt and disbursement services for federal agencies. FMS officials told us that federal agencies do not pay for receipting, custody, and disbursement of federal funds by FMS on the agencies’ behalf because these services are FMS’s responsibility. According to FMS officials, agencies are to reimburse FMS only for the costs of services that are considered unique to that particular agency. USCIS and FMS signed an interagency agreement in September 2008 that establishes reimbursement levels for onetime and annual costs that USCIS will pay to FMS. These costs represent the portion of FMS’s lockbox costs that are unique to USCIS. 
In addition, USCIS did not include in its estimated costs of immigration application processing and adjudication services some of the costs of retirement benefits—pensions and health and life insurance—to be paid by OPM on behalf of USCIS. Based on its fiscal year 2007 costs, USCIS estimated that the average annual projected cost for these benefits for fiscal years 2008 and 2009 is $47 million before taking into consideration the agency’s 14 percent increase in the number of employees from May 2007 to May 2008 and 16 percent increase in related payroll costs. Together, the estimated $65 million or more in FMS and OPM costs excluded each year represents approximately $14 per fee-paying application for fiscal years 2008 and 2009. This estimate could differ among application types if these costs were analyzed more precisely. Without consideration of these costs paid by other federal entities, USCIS is not accounting for the full costs to the federal government of USCIS’s immigration application processing and adjudication services, and USCIS’s cost data used for setting fees are incomplete. The methodology USCIS used to determine the cost and develop the fees for each application type involved various assumptions and cost assignment methods. In large part, it consisted of allocating costs on a prorated basis. USCIS did not prepare and document analyses to justify its assumptions that prorated allocations provided a reasonable distribution of those costs. While federal accounting standards do not prohibit allocating costs on a prorated basis, they list an order of preference for three cost assignment methods that should be used: (1) direct tracing of costs wherever economically feasible, in this case, to an identifiable office, activity, or application type; (2) assigning costs on a cause-and-effect basis; or (3) allocating costs on a reasonable and consistent basis when not economically feasible to assign costs directly or on a cause-and-effect basis.
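The per-application figure above follows from simple division. The sketch below reproduces that arithmetic; the $18 million and $47 million amounts and the $14 approximation come from this report, while the implied annual application volume is derived here for illustration and is not a figure USCIS published.

```python
# Back-of-the-envelope check of the excluded-cost arithmetic described above.
# Dollar amounts are the report's figures; the implied number of fee-paying
# applications is derived for illustration only.

excluded_fms_lockbox = 18_000_000   # FMS lockbox costs USCIS left out
excluded_opm_benefits = 47_000_000  # OPM-paid retirement benefit costs left out
total_excluded = excluded_fms_lockbox + excluded_opm_benefits  # $65 million

cost_per_fee_paying_application = 14  # report's approximate figure
implied_annual_applications = total_excluded / cost_per_fee_paying_application

print(f"Total excluded costs: ${total_excluded:,}")
print(f"Implied fee-paying applications: ~{implied_annual_applications:,.0f}")
```

Dividing $65 million by about $14 per application implies on the order of 4.6 million fee-paying applications per year, consistent with the report's approximation.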
The standards also state that the third preference, allocation, tends to be arbitrary because there may be little correlation between the costs and the allocation base, and costing distortions often result from arbitrary allocations. Minimizing arbitrary cost allocations will improve cost information. Nevertheless, the standards allow flexibility so that management can select methods that are best suited to the organization’s needs. The standards also state that when making the selection, management should evaluate alternative costing methods and select those that provide the best results under its operating environment. In the first stage of assigning the estimated $2.329 billion annual application processing and adjudication services costs for fiscal years 2008 and 2009 (see fig. 1), USCIS officials identified $1.405 billion of direct costs and $924 million in overhead costs. USCIS considered as overhead the majority of costs of headquarters functions and some field office functions and centrally managed costs such as rent and information technology operations and maintenance. Of the $924 million of overhead costs, $732 million—31 percent of the total $2.329 billion cost—was allocated to field offices based on the number of FTEs assigned to each field office. This approach did not consider USCIS’s approximately 6,100 contract workers and used only approximately 7,900 FTEs of the total federal FTEs of about 10,400 as the basis for distributing overhead costs. Excluding the contract workers from the base could have changed the proportion of overhead costs assigned to the various field offices, which in turn could have significantly affected the allocation of those costs to activities and application types. Alternatives to USCIS’s FTE-based allocation of overhead to field offices might have included direct or cause-and-effect assignment.
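To illustrate how the headcount base chosen for a pro rata allocation can shift results, the sketch below distributes an overhead pool first by federal FTEs alone and then by total workers including contractors. The two example offices and their headcounts are hypothetical; only the general approach of allocating a pool in proportion to a headcount base reflects the report's description of USCIS's method.

```python
# Hypothetical illustration of FTE-based pro rata overhead allocation, and of
# how adding contract workers to the base changes each office's share.
# Office names and headcounts are invented for this sketch.

def allocate(pool, weights):
    """Split `pool` across keys in proportion to their weights."""
    total = sum(weights.values())
    return {office: pool * w / total for office, w in weights.items()}

pool = 100.0  # overhead dollars to distribute (arbitrary units)
offices = {"Office A": {"fte": 500, "contractors": 100},
           "Office B": {"fte": 300, "contractors": 700}}

# Federal FTEs only, as the report says USCIS did
fte_only = allocate(pool, {o: d["fte"] for o, d in offices.items()})

# Alternative base: all workers, including contractors
all_workers = allocate(pool, {o: d["fte"] + d["contractors"]
                              for o, d in offices.items()})

print(fte_only)      # Office A receives 62.5, Office B 37.5
print(all_workers)   # Office A receives 37.5, Office B 62.5
```

In this contrived case the two offices swap shares entirely, which is the kind of sensitivity the report suggests USCIS should have analyzed before settling on a federal-FTE-only base.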
For example, software licenses for applications used by only a specific office or activity and training for specific employee groups might have been assigned directly to offices or activities causing, or driving, each of those costs rather than allocating all of them in the aggregate with other overhead costs to all field offices on the basis of FTEs. In addition, other costs, such as information technology operations, that were attributable to a particular field office or activity might have been assigned using a cause-and-effect analysis, such as consumption of services, rather than a pro rata FTE-based allocation. Federal accounting standards state that allocation should have a reasonable basis, usually a relevant common denominator. USCIS costs that cannot be assigned directly or on a cause-and-effect basis to specific field offices or activities in an economically feasible manner might have an alternative allocation basis that is more closely related to the attributes of those specific costs than to FTEs. For example, using payroll for certain employee benefits or usage for certain information technology costs might have provided a more accurate or reasonable basis for cost distribution than an FTE-based allocation. Without performing and documenting analyses to evaluate such alternatives, USCIS management cannot assure itself or others that it is using the optimal cost allocation method. Field office overhead and direct costs were assigned to eight discrete activities (see table 5) related to processing and adjudicating applications. (See fig. 1.) The activity costs were then assigned to types of applications. As part of this process, USCIS assigned 49 percent (or about $1.143 billion) of total costs to total fee-paying applications based on the amount of time it took to adjudicate an application, resulting in varying costs per application type for the adjudication activity.
The adjudication time used as the basis for assigning these activity costs to types of applications was based on historical data on adjudication time from PAS, which measures the average time to complete adjudication of each application based on daily production information input by adjudicators. Thus, for the “make determination” activity, the more complex types of applications that take more time to adjudicate were assigned a higher cost, which resulted in a higher fee. This is consistent with the methodology’s general premise, which according to USCIS is that the more time spent adjudicating an application, the higher the fee. These cost assignments used the federal accounting standards’ preferred methods of cost assignment. USCIS assigned the remaining 51 percent (or about $1.186 billion) of its costs to applications in equal amounts. This type of assignment did not consider any variation in complexity or processing time between application types. A USCIS official told us that available production data for these activities were not sufficiently reliable to use as a basis for assigning costs to applications. Also, according to USCIS, pro rata allocation was used because these activities’ costs were not significantly driven by the complexity of an application type. However, USCIS did not justify its assumption that these costs were not significantly driven by complexity of application type. While recognizing that agency management should select costing methods that best meet their needs, taking into consideration the costs and benefits of reasonable alternatives, federal accounting standards recommend minimizing arbitrary cost assignment methods to help avoid inaccurate product or service costs. It is not unusual for a costing methodology to assign some costs using logical and justifiable allocation, especially if more precise assignment methods are not cost effective. 
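The two assignment approaches the report contrasts can be sketched side by side: one spreads a cost pool in proportion to each application type's average adjudication time, the other charges every fee-paying application the same amount regardless of type. The application types, volumes, completion rates, and pool sizes below are hypothetical; only the contrast between the two methods mirrors the report.

```python
# Hypothetical contrast of time-weighted versus equal per-application cost
# assignment, as discussed above. All numbers here are invented.

def time_weighted_per_app(pool, volumes, hours_per_app):
    # Each type's share of the pool follows its total adjudication hours,
    # so more complex (slower) types carry a higher per-application cost.
    total_hours = sum(volumes[t] * hours_per_app[t] for t in volumes)
    return {t: pool * hours_per_app[t] / total_hours for t in volumes}

def equal_per_app(pool, volumes):
    # Every fee-paying application bears an identical amount.
    per_app = pool / sum(volumes.values())
    return {t: per_app for t in volumes}

volumes = {"simple": 1_000, "complex": 200}   # applications per type
hours = {"simple": 0.5, "complex": 4.0}       # avg adjudication hours per app

weighted = time_weighted_per_app(1_300_000, volumes, hours)
flat = equal_per_app(1_200_000, volumes)

print(weighted)  # simple: 500.0, complex: 4000.0
print(flat)      # simple: 1000.0, complex: 1000.0
```

Under the time-weighted method, the complex type costs eight times the simple type per application, matching its 8-to-1 ratio of adjudication hours, while the equal method charges both types the same amount irrespective of complexity.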
However, for several significant cost elements, such as those discussed here, USCIS did not prepare the analysis necessary to demonstrate the reasonableness of or justification for the costs it allocated to each type of application. Without documented justification for USCIS’s decisions in using methods that are less precise than others available, decision makers and fee payers do not have access to information important for determining their level of confidence in the reasonableness of USCIS’s application fees. USCIS did not document its rationale or any related analysis to justify the assumptions concerning the amount of time staff spent performing various activities, which were used to assign field office costs to activities. Although USCIS prepared a supporting document to describe the methodology it used, the document did not explain certain key assumptions and methods in sufficient detail to justify the reasonableness of the resulting cost assignments. According to federal internal control standards, significant events are to be clearly documented. Significant events can include key decisions about assumptions and methods underlying the assignment of costs. Also, federal accounting standards require documentation of all managerial cost accounting activities, processes, and procedures used to associate costs with products, services, or activities. According to USCIS, of the $1.137 billion of direct costs assigned to field offices, $845 million was assigned to activities based on data from PAS, which include, among other things, the amount of time adjudicators spend adjudicating applications. The remaining $292 million of direct field office costs, nearly 26 percent, was assigned to activities based on management’s judgment of how much staff time was spent on each activity.
For example, based on discussions with representatives from regions, service centers, and the Performance Management Branch, USCIS decided to allocate 88 percent of the costs of the immigration information officers at service centers to the “inform the public” activity and 12 percent to the “make determination” activity. These discussions, including the input of the internal experts, and the rationale linking this information and related analysis with the final cost assignments were not documented. Using the knowledge of informed experts as the basis for estimating cost assignments can be a reasonable method when reliable data about the amount of staff time spent performing various activities or other factors that drive those costs are not available. However, without clear documentation of these factors, consideration given to these factors, and the rationale for cost assignment decisions made using these factors, USCIS is not able to demonstrate the reasonableness of the resulting cost assignments. According to federal accounting standards, Congress and federal executives need cost information to make decisions about allocating federal resources, modifying programs, and evaluating program performance. However, USCIS has not determined the costs of the accelerated processing services offered through its premium processing program, which provides for the processing of certain applications within 15 calendar days, rather than the typical 2 months or more processing time. A business that wishes to hire a foreign national to come to the United States to work temporarily can pay a voluntary fee of $1,000, in addition to the regular application fee. By law, these premium processing fees are to be used to cover the costs of (1) premium processing services to business customers and (2) infrastructure improvements in the adjudications and customer service processes. 
Currently, USCIS is assigning all premium processing fee collections to its business transformation program to make long-term improvements to its business processes and technology. Because USCIS is not using any of the premium processing fee collections for the accelerated processing efforts, regular fee-paying applicants could be bearing part of any added cost that might be associated with processing these applications. According to USCIS officials, because the $1,000 fee is set by law, and not by USCIS, the fee was not included in the fee review. Without knowing the cost of its premium processing services, USCIS management and Congress cannot determine the extent to which the $1,000 fee would cover the costs of the agency’s premium processing services and infrastructure improvements. USCIS described the costing methodology it used for its fee review in a proposed rule announcing the impending application fee increases and in a report containing supporting documentation for its fee review, which was available to the public during the public comment period on the proposed rule. However, USCIS did not prepare the more detailed documentation called for in federal accounting standards, which would allow results to be validated and agency personnel to perform the fee-setting process in a consistent manner. According to federal accounting standards, all cost accounting processes and procedures should be documented by a manual, handbook, or guidebook of applicable accounting operations that provides instructions for procedures and practices to be followed. In accordance with management’s objective that the methodology be readily understood by fee payers and others, the documentation USCIS prepared for the public was prepared so that a third party could understand USCIS’s overall approach.
However, specific procedures and information used in the fee review as a basis for assigning costs were not documented in sufficient detail to allow a knowledgeable person to carry out or replicate the procedures. For example, documentation of the multistep process USCIS used to allocate overhead costs to activities did not include the percentages used to make these allocations. Also, application completion rates (i.e., the average amount of time to complete adjudication of an application) were not described in enough detail to explain how the rates were used to assign adjudication activity costs (i.e., “make determination” costs) to the various application types. Lack of documentation of the processes and procedures makes it difficult to ensure that the methodology used to determine the costs and fees is consistent from year to year, especially when there are changes in personnel. Lack of documentation also makes it difficult to train staff in consistent and accurate application of the methodology. Further, without complete documentation reviewed and approved by management, an independent party, such as an auditor, cannot readily assess major assumptions and methods used in the process or audit the procedures to provide accountability and added assurance that the cost system is consistent with federal accounting standards and other requirements and statutes. USCIS has put accountability mechanisms in place to help ensure that it is using regular application fee collections and premium processing fee collections as it intended, and it is taking steps to improve internal control over collection of fees. USCIS has established unique codes in the financial system for specific projects, and the OCFO monitors expenditures of fee collections for those projects. Although USCIS has controls in place over fee collections, it has identified some weaknesses at the service centers. 
USCIS reported that it has taken some actions to strengthen service center controls in the short term, and that it is in the process of moving all preadjudication application processing and fee receipt functions from the service centers and field offices to lockbox facilities to further strengthen control over collections. USCIS has established unique codes in the financial system for its projects, and according to a USCIS official, the OCFO monitors obligations and expenditures against those codes to track spending of fee collections from both regular application fees and premium processing fees. Expenditures for the additional requirements (staff, equipment, training, etc.)—enhancements that had not been funded previously and that USCIS determined it needed to improve its capability to meet its responsibilities—are to be made from regular application fees. Expenditures for USCIS’s business transformation program, to make long-term improvements to its business processes and technology, come from premium processing fees. USCIS has established accountability mechanisms to track expenditures for the planned enhancements of $524 million. As discussed earlier, the fee schedule that became effective in July 2007 is based, in part, on USCIS’s estimated costs for these enhancements. Of the $2.329 billion that USCIS determined it needed annually to fund the cost of processing and adjudicating immigration applications for fiscal years 2008 and 2009, over 22 percent (or $524 million) represented additional staff, equipment, training, and projects included in the planned enhancements. USCIS plans to use $232 million of that amount for payroll and related costs to hire about 1,500 additional staff.
The remaining $292 million is to be used for specific projects, such as the establishment of a second card facility and enhanced delivery of secure documents (permanent residence cards, employment authorization documents, and travel documents), so that the United States Postal Service can ensure that they are delivered to the proper recipients. USCIS’s OCFO has established unique project codes in the financial system for specific projects included in the planned enhancements to be financed from regular application fees. According to a USCIS official, the amounts that USCIS estimated it would spend on each of the enhancements were allocated to the applicable individual project codes. Obligations and expenditures made against those project code allocations are monitored by OCFO to help ensure that USCIS uses the increased resources to enhance its processing capabilities. A USCIS official told us that the status of each nonfinancial aspect of the enhancements, such as new staff hired and draft statements of work for contracts to be let, is also tracked and discussed at periodic USCIS management meetings. Some amounts included in the additional requirements are for items that will not recur, such as the establishment of a second card facility. The second facility, according to USCIS, is needed to support day-to-day production and to be available in the event of catastrophic failure in compliance with federal standards for contingency planning for critical systems. A USCIS official told us that any nonrecurring costs included in the current fee schedule will not be included in the baseline resources for the next fee review. As of September 30, 2008, USCIS had hired about 1,400 additional staff and had obligated or expended over $207 million of the planned $292 million for specific projects included in the enhancements. 
USCIS’s OCFO tracks the amount of premium processing fee collections separately from regular application fee collections so that it can dedicate premium processing fees to its business transformation program to make long-term improvements to its business processes and technology. USCIS has established an expenditure plan for its transformation program that shows the estimated annual costs of the program through fiscal year 2012. According to the program’s expenditure plan, USCIS will dedicate all anticipated premium processing fee collections to the transformation program for fiscal years 2008 through 2012. A USCIS official told us that commitments, obligations, and expenditures for transformation projects are recorded to specific codes in the financial system, and those amounts are tracked along with premium processing fees. USCIS plans to use the entire amount of premium processing fees it received in fiscal year 2008—almost $163 million—for its transformation program. During fiscal year 2008, USCIS obligated over $12 million of that amount for the transformation program. A USCIS official told us that as some planned transformation projects move closer to the awarding of contracts, amounts will be allocated to the transformation project codes so obligations and expenditures can be made for those projects. According to USCIS, a Transformation Solution Architect Task Order in the amount of $14.5 million was awarded at the beginning of November 2008. USCIS has documented and assessed its internal control activities and processes related to fee collections. Based on our review of USCIS’s internal control documentation, service center contracts, and independent auditor reports and our discussions and observations during visits to the service centers and lockbox facilities, we identified a system of controls designed to safeguard fees collected.
The controls include dual custody of fee receipts, surveillance cameras in the fee collection areas, and balancing and reconciling fee collection amounts as an application moves through processing. Although controls are in place, USCIS has identified through its monitoring procedures some weaknesses at one of its service centers. Because of these weaknesses, and for other reasons, such as the lockbox facility’s flexibility to respond to unanticipated surges in application receipt volume, USCIS is in the process of moving all preadjudication application processing and fee receipt functions from the service centers and field offices to lockbox facilities. USCIS management assessed the effectiveness of its service centers’ internal controls over collection and depositing of fees in fiscal year 2007 in accordance with OMB Circular No. A-123, Management’s Responsibility for Internal Control, and found weaknesses, such as fee receipts not being deposited in a timely manner and applications and fees being stored in unsecured locations, creating security issues. USCIS reported that these weaknesses resulted in part from the increased workload attributable to filings by applicants attempting to beat the proposed fee increases effective in July 2007 and increased application filings because of the publication of a State Department Visa Bulletin. An influx of applications and fees exceeded service center capacity to issue receipts and deposit application fees in a timely manner. Applications were kept in temporary storage containers, or pods, in the parking lot at one service center and were not being receipted and deposited in a timely manner. USCIS has identified corrective actions, including the transition of fee collections and preadjudication processing of applications to lockbox facilities, and is in the process of implementing them.
According to USCIS, other corrective actions were implemented, including physical security improvements such as installing a barbed wire fence around the perimeter of the area containing the pods and a security guard to monitor the area 24 hours a day, 7 days a week. According to USCIS, at the four service centers, controls related to the fee collection process may vary, but each location is expected to maintain policies and procedures in accordance with management’s directives. We observed certain of these controls in operation at the service centers. USCIS’s procedures require dual custody of fee receipts at all times. The procedures also state that after application and check information have been entered into USCIS’s system, the checks are to be removed from the applications and placed in the data entry clerk’s locked safe. We observed the placement of endorsed checks into small safe-type boxes. However, at one service center, the boxes did not have locks. Another control at the service centers relates to preparing the daily bank deposit. USCIS’s procedures require verification of collected fee amounts at different steps during the process. USCIS and contractor staff at one service center described the process, and we observed the documentation that was prepared to support a prior day’s bank deposit. The documentation included required items such as reconciliations, approvals, bank deposit slips, and courier signatures acknowledging courier receipt of the checks for delivery to the bank. At the lockbox facility we visited, we observed controls such as restricted access to the entire lockbox operations area, surveillance cameras in the segregated area where employees open mail and separate checks, and comparing and balancing the number of applications and amount of fee collections at each step of the process. An independent auditor reviewed controls at the lockbox facility and determined that the controls tested were effective. 
USCIS is in the process of moving fee collection and other preadjudication processing activities to lockbox facilities by March 2011. According to USCIS, benefits of the lockbox include reduced operational costs, a more secure environment for fee collections, centralized and expedited application and fee collection intake, and flexibility to address unanticipated surges in application receipt volume. For example, the lockbox facilities maintain a certain number of staff as temporary workers who can be called upon when needed. Application fees are intended to fund USCIS’s immigration benefit application processing operations and other related services. While USCIS has met its objective to set fees at a level sufficient to cover its estimated costs, it has not considered the costs incurred by other federal entities on USCIS’s behalf when estimating the cost of each type of application and setting fees. Also, key assumptions and methods for allocating costs to activities and application types are not sufficiently justified or documented, and USCIS does not know how the cost of accelerated processing compares to its $1,000 premium processing fee. Further, documentation that USCIS prepared to describe its costing methodology does not provide sufficient instruction for the costing processes and practices to be followed in determining the costs of each type of application. USCIS, fee payers, congressional decision makers, and others need assurance that the costing methodology used to determine the fees for individual application types provides reliable results and that the assumptions and assignment methods used are justified. A costing methodology consistent with federal accounting standards and principles and other guidance, including complete documentation of the agency’s cost assignment process and its analysis and justification for key assumptions used to estimate costs and determine application fees, could help provide that assurance. 
To increase confidence that its cost estimates provide a reliable basis for setting application fees, USCIS would need to analyze alternative cost assignment methods, taking into consideration the costs and benefits of reasonable alternatives, and generate additional operations and production data, such as information system usage and preadjudication processing time, to prepare those analyses. USCIS’s internal control monitoring procedures identified weaknesses in service center fee collection procedures that USCIS has addressed while planning the transition of its collection and preadjudication processing functions to lockbox facilities. To help make USCIS’s costing methodology used for determining application fees consistent with federal accounting standards and principles and to strengthen the reliability of the cost assignments used to set fees, we recommend that the Secretary of Homeland Security direct the Director of USCIS to take the following four actions: identify the full cost of application processing services, whether paid directly by USCIS or by other federal entities for USCIS’s benefit, such as the costs of lockbox services paid by Treasury’s FMS and certain retirement benefits to be paid to USCIS retirees by OPM; consider the full costs to the government when USCIS next reviews and sets application fees, and document the rationale for decisions made about including or excluding any types of costs in the fee determination process; determine the costs of providing premium processing services to identify the extent to which the $1,000 premium processing fee would cover associated expedited processing costs and infrastructure improvements; and document the processes and procedures of the costing methodology in sufficient detail so that the specific procedures used and the data sources and cost assignment methods employed for each step in the process can be understood and replicated.
To better support the reasonableness of USCIS’s assumptions and cost assignment methods, we recommend that the Secretary of Homeland Security direct the Director of USCIS to take the following two actions: analyze current cost allocation methods to evaluate whether direct or cause-and-effect assignment methods that are economically feasible, or other allocation bases, may offer greater precision; and fully document the rationale and any related analysis for using the assumptions and cost assignment methods selected. In written comments on a draft of this report, DHS and USCIS concurred with our recommendations and reported that related actions are planned or underway. These actions, if properly implemented, should better support the reasonableness of USCIS’s assumptions and cost assignment methods and help strengthen the reliability of the cost assignments used to set fees. DHS characterized the issues raised in the draft report as mostly pertaining to documentation and analysis supporting discrete decisions by USCIS in developing its costing methodology. In this regard, DHS indicated that USCIS had substantial documentation supporting its costing methodology. As discussed in the draft report, it will also be important that available documentation and analysis are sufficient to explain the methodology to potential users, provide justification for key assumptions, and guide future program administrators in preparing future fee reviews using a consistent methodology. This level of documentation and analysis is critical to developing reliable cost information for management of fee-based programs on an ongoing basis. We are sending copies of this report to interested congressional committees, the Secretary of Homeland Security, the Acting Deputy Director of USCIS, the Inspector General of DHS, and other interested parties. This report is also available at no charge on the GAO Web site at http://www.gao.gov.
Should you or your staff have any questions about this report, please contact Jeanette Franzel at (202) 512-9406 or franzelj@gao.gov or Susan J. Irving at (202) 512-8288 or irvings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To assess the consistency of U.S. Citizenship and Immigration Services’ (USCIS) costing methodology with federal accounting standards and principles, including whether USCIS sufficiently justified and documented its assumptions and methods, we reviewed federal accounting standards and Office of Management and Budget (OMB) guidance on user fees. We also reviewed USCIS documents related to the fee review and its methodology. We obtained further understanding of the methodology through interviews with knowledgeable USCIS officials and staff. We reviewed the design and operation of USCIS’s cost system, its accumulation methods, and the assignment of costs involved in processing and adjudicating immigration applications. We performed a walk-through of the cost system, discussed key assumptions and decisions with USCIS officials, and performed analytical reviews. For example, we performed calculations to verify USCIS’s distribution of overhead costs. We reviewed USCIS documents describing the bases on which USCIS assigned costs to its processing activities and how individual application fees were determined. To determine whether USCIS data were sufficiently reliable for purposes of this report, we discussed data quality control procedures with agency officials and reviewed relevant documentation.
To identify and assess the accountability mechanisms and internal controls that USCIS has in place over the collection and use of fees, we reviewed internal control standards and relevant USCIS documentation, and we interviewed knowledgeable USCIS officials and staff about the controls USCIS has put in place. We corroborated information obtained in the interviews by reviewing contracts for service center support operations, USCIS internal control documentation, and an independent auditor’s report on controls in place at the lockbox facility and through our visits to four USCIS service centers in California, Nebraska, Texas, and Vermont and a lockbox facility in Chicago. During those visits, we interviewed officials and staff and observed the fee collection process. Regarding controls over use of fees, we discussed with USCIS officials and staff the processes for tracking and monitoring fee collections and related expenditures. We corroborated the information obtained in the discussions by reviewing USCIS reports showing the amount of fee collections received, and we verified that related obligations and expenditures were made against the specific project codes in the financial system for selected projects. We conducted this performance audit from October 2007 through January 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Petition to Remove Conditions on Residence
Application for Family Unity Benefits
Application for Temporary Protected Status (first-time applicants)
Applicants filing forms that require biometrics services must pay a fee of $80 in addition to the regular application fee.
The Federal Financial Management Improvement Act of 1996 (FFMIA) requires, among other things, that agencies covered by the Chief Financial Officers (CFO) Act have financial management systems that substantially comply with federal accounting standards. USCIS is part of the Department of Homeland Security and must conform to the requirements of the CFO Act. Statement of Federal Financial Accounting Standards No. 4, Managerial Cost Accounting Standards and Concepts, sets forth the fundamental elements of managerial cost accounting. Cost information can be used by federal managers for budgeting and cost control, performance measurement, program evaluations, making economic choice decisions, and determining and setting fees. The standards provide guidance on allocating costs to products and services provided by federal agencies. The standards do not impose a specific methodology on federal agencies but allow flexibility to design a cost accounting system that meets the specific needs of each agency. Among other things, the CFO Act requires agencies to review fees imposed by them on a biennial basis. OMB Circular No. A-25, User Charges, contains federal policy regarding fees assessed for government services and provides information on the basis upon which user charges (i.e., fees) are to be set. OMB Circular No. A-25 provisions apply to agencies in their assessment of user charges under 31 U.S.C. § 9701 (the user fee statute). It provides that when a service or privilege confers special benefits to an identifiable recipient beyond those that accrue to the general public, a charge will be imposed to recover the full cost to the federal government for providing the special benefit. Full costs, according to OMB Circular No. A-25, include all direct and indirect costs of providing the service. OMB Circular No. A-25 also provides guidance to agencies regarding their assessment of user charges under other statutes, such as 8 U.S.C. 1356(m) to the extent OMB Circular No. 
A-25 is not inconsistent with those other statutes. The Comptroller General’s Standards for Internal Control in the Federal Government provides an overall framework for establishing and maintaining internal control. Management is responsible for establishing and maintaining internal control to achieve the objectives of effective and efficient operations. OMB Circular No. A-123, Management’s Responsibility for Internal Control, defines management’s responsibility for internal control in federal agencies and provides guidance to federal managers on improving the accountability and effectiveness of federal programs and operations by establishing, assessing, correcting, and reporting on internal control. In addition to the contacts named above, staff members who made key contributions to this report include Jack Warner, Assistant Director; Richard Cambosos; Abe Dymond; Emily Eischen; Fred Evans; P. Barry Grinnell; Chelsa Gurkin; Maxine Hattery; Jason Kelly; Diane Morris; Jacqueline Nowicki; Leah Probst; and Nathan Tranquilli.
The Department of Homeland Security's (DHS) U.S. Citizenship and Immigration Services (USCIS) is responsible for granting or denying immigration benefits to individuals. USCIS charges fees for the millions of immigration applications it receives each year to fund the cost of processing and adjudicating them. In February 2007, USCIS completed a study to determine the full costs of its operations and the level at which application fees should be set to recover those costs. USCIS's new fee schedule increased application fees by a weighted average of 86 percent. Almost 96 percent of USCIS's fiscal year 2008 budget of $2.6 billion was expected to have come from fees. GAO was asked to review the methodology USCIS used in its fee review and controls in place over collection and use of fees. In this report, GAO addresses the consistency of the methodology with federal accounting standards and principles and other guidance, including whether key assumptions and methods were sufficiently justified and documented. The report also addresses internal controls USCIS has in place over the collection and use of fees. In 2007, USCIS completed a fee review in which USCIS estimated the costs of its immigration application processing and adjudication services and, in accordance with management's objective, set the fees at a level to recover those costs. The methodology USCIS used in its review, however, did not consistently adhere to federal accounting standards and principles and other guidance. While federal accounting standards allow flexibility for agencies to develop managerial cost accounting practices that are suited to their needs, they also provide certain specific guidance based on sound cost accounting concepts. USCIS's methodology, for example, did not include the costs paid by other federal entities on behalf of USCIS. 
Federal standards and guidance also call for documentation that is sufficient to allow an understanding of and provide justification for the cost assignment processes and data used. USCIS did not adequately document the detailed processes used or sufficiently justify assumptions used in allocating costs to various activities on a prorated basis. As a result, USCIS could not show that its methods provided a reasonable distribution of the costs to the various types of applications. For instance, USCIS allocated $732 million of overhead costs (or 31 percent of total costs)--including information technology operations and maintenance--to offices based on the number of staff full-time equivalents (FTE) in each office. However, USCIS's documentation did not sufficiently justify (1) why cost allocation was used instead of other possible methods or (2) why it did not include about 6,100 contract workers and used only approximately 7,900 FTEs of the total federal FTEs of about 10,400 as the basis for allocation. USCIS also did not adequately justify the equal assignment of activity costs representing 51 percent of total costs to each application type. While such pro rata assignment of costs may be a reasonable method in some circumstances, USCIS did not document its justification for the assumptions made when deciding which costs to allocate on a prorated basis and how those costs should be allocated. Because of these inconsistencies with federal accounting standards and principles and other guidance, USCIS cannot support the reasonableness of cost assignments to the various application types. USCIS has implemented accountability mechanisms to track the use of both regular application fees as well as premium processing fees intended for specific projects. USCIS plans to use its premium processing fee collections to fund its transformation program to make long-term improvements to its business processes and technology. 
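The sensitivity of a pro-rata allocation to the choice of staffing base can be illustrated with the figures above. The short Python sketch below is our illustration only, not USCIS’s actual cost model: the overhead total and staffing counts come from this report, while the function name and the simple per-unit calculation are our assumptions.

```python
# Illustrative sketch only: $732 million in overhead (31 percent of total
# costs) and the staffing counts are from the report; the pro-rata
# calculation itself is generic, not USCIS's documented method.
OVERHEAD = 732_000_000

def overhead_per_staff_unit(overhead, staffing_base):
    """Overhead charged to each staff unit under a simple pro-rata allocation."""
    return overhead / staffing_base

# The choice of allocation base changes every office's overhead share:
bases = {
    "federal FTEs USCIS used": 7_900,
    "all federal FTEs": 10_400,
    "federal FTEs plus contract workers": 10_400 + 6_100,
}
for label, base in bases.items():
    rate = overhead_per_staff_unit(OVERHEAD, base)
    print(f"{label}: ${rate:,.0f} per staff unit")
```

Because the per-unit rate roughly doubles between the broadest and narrowest base, the undocumented choice of base materially affects the overhead ultimately assigned to each application type.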
Through its monitoring of fee collection procedures, USCIS has identified some weaknesses at one of its service centers. It has taken actions to strengthen service center controls in the short term, and it is moving all fee receipt functions and the application processing done in preparation for adjudication to lockbox facilities to further strengthen control over collections.
Prior to 1996, agencies generally did not have the authority to adjust civil penalty maximums that were established in statute. Congress would occasionally adjust individual penalties or specific groups of penalties, but not all civil penalties. As a result, by 1990, many penalties had not been changed for decades. When the Federal Civil Penalties Inflation Adjustment Act of 1990 was enacted, Congress noted in the “Findings” section of the legislation that inflation had weakened the deterrent effect of many civil penalties. The stated purpose of the 1990 act was “to establish a mechanism that shall (1) allow for regular adjustment for inflation of civil monetary penalties; (2) maintain the deterrent effect of civil monetary penalties and promote compliance with the law; and (3) improve the collection by the Federal Government of civil monetary penalties.” However, the act did not give agencies the authority to adjust their civil penalties for inflation. Instead, the 1990 act required the President to report to Congress every 5 years on how much each covered civil penalty had to be increased to keep pace with inflation. In addition, the act required the President to report annually on penalty assessments and collections. In July 1991, the Office of Management and Budget (OMB) submitted the first (and ultimately the only) report to Congress under the Federal Civil Penalties Inflation Adjustment Act describing the penalty increases needed to keep pace with inflation. Based on submissions from dozens of agencies, the report identified almost 1,000 civil monetary penalties that were covered by the act, and listed, by agency, the statutory modifications that were required to fully adjust the penalties for inflation. Also, in satisfaction of the annual reporting requirement, the report provided information on civil penalty assessments and collections during fiscal year 1990.
At the request of OMB’s Office of Federal Financial Management, the Department of the Treasury’s Financial Management Service (FMS) published those reports until 1998 (providing information on assessments and collections through fiscal year 1997). Congress abolished this annual reporting requirement as part of the Federal Reports Elimination Act of 1998. Congress amended the 1990 act in 1996, replacing the 5-year reporting obligation with a requirement that agencies publish regulations in the Federal Register adjusting each of their covered civil penalties for inflation. The act as amended required each agency’s first inflation adjustment regulation to be published by October 23, 1996, and requires the agencies to examine their covered penalties at least once every 4 years thereafter and, where possible, make penalty adjustments. However, the act limited the agencies’ initial penalty adjustments to 10 percent of the penalty amounts. The Inflation Adjustment Act also exempted penalties under the Internal Revenue Code of 1986, the Tariff Act of 1930, the Occupational Safety and Health Act of 1970, and the Social Security Act. The Inflation Adjustment Act requires agencies to follow specific procedures when making penalty adjustments. For example, section 5 of the act defines a “cost-of-living adjustment” as the following: …the percentage (if any) for each civil monetary penalty by which - (1) the Consumer Price Index for the month of June of the calendar year preceding the adjustment, exceeds (2) the Consumer Price Index for the month of June of the calendar year in which the amount of such civil monetary penalty was last set or adjusted pursuant to law. Therefore, if an agency made its first round of adjustments in October 1996 and the penalty was last set or adjusted in October 1990, the agency was required to calculate the unrounded cost-of-living adjustment by comparing the June 1995 Consumer Price Index (CPI) with the CPI for June 1990. 
The Inflation Adjustment Act also provides specific criteria for how agencies should round any penalty increase. Section 5 of the act says the following: Any increase determined under this subsection shall be rounded to the nearest (1) multiple of $10 in the case of penalties less than or equal to $100; (2) multiple of $100 in the case of penalties greater than $100 but less than or equal to $1,000; (3) multiple of $1,000 in the case of penalties greater than $1,000 but less than or equal to $10,000; (4) multiple of $5,000 in the case of penalties greater than $10,000 but less than or equal to $100,000; (5) multiple of $10,000 in the case of penalties greater than $100,000 but less than or equal to $200,000; and (6) multiple of $25,000 in the case of penalties greater than $200,000. For example, if a maximum civil penalty of $5,000 was last set in 1990, and there had been 17 percent inflation from June 1990 through June 1995 (the relevant time frame for an adjustment in 1996), the unrounded increase would be $850 ($5,000 times 0.17). Because the $5,000 penalty was greater than $1,000 but less than or equal to $10,000, the statute indicates that the $850 increase should be rounded to the nearest multiple of $1,000, which is $1,000. Therefore, the adjusted penalty after rounding would be $6,000. However, section 6 of the Inflation Adjustment Act states that the first penalty adjustment under these procedures “may not exceed 10 percent of such penalty.” Therefore, in the above example the $1,000 rounded increase would be limited to 10 percent of the $5,000 penalty amount, or $500. As a result, the adjusted penalty after the 10 percent cap would be $5,500. The legislative history of the Inflation Adjustment Act does not explain why Congress established the 10 percent cap, the penalty exemptions, or the particular adjustment procedures. 
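The statutory procedure described above (a cost-of-living percentage, rounding of the increase by penalty bracket, and the 10 percent cap on the first adjustment) can be sketched as a short calculation. The Python below is our illustration: the function names are ours, and the tie-breaking direction for rounding is our assumption, since the act does not specify it.

```python
def round_increase(penalty, raw_increase):
    """Round a raw penalty increase per section 5 of the Inflation Adjustment Act.
    The rounding multiple depends on the size of the penalty being adjusted."""
    if penalty <= 100:
        multiple = 10
    elif penalty <= 1_000:
        multiple = 100
    elif penalty <= 10_000:
        multiple = 1_000
    elif penalty <= 100_000:
        multiple = 5_000
    elif penalty <= 200_000:
        multiple = 10_000
    else:
        multiple = 25_000
    # Round to the nearest multiple; ties round up here (the act is silent).
    return int((raw_increase + multiple / 2) // multiple * multiple)

def adjust_penalty(penalty, inflation_rate, first_adjustment=False):
    """Apply one round of adjustment under the act's procedures."""
    increase = round_increase(penalty, penalty * inflation_rate)
    if first_adjustment:
        # Section 6: the first adjustment may not exceed 10 percent of the penalty.
        increase = min(increase, int(penalty * 0.10))
    return penalty + increase

# The worked example above: a $5,000 penalty with 17 percent inflation.
assert adjust_penalty(5_000, 0.17) == 6_000                         # rounding only
assert adjust_penalty(5_000, 0.17, first_adjustment=True) == 5_500  # 10 percent cap
```

The assertions reproduce the worked example in the text: the $850 raw increase rounds up to $1,000, but the first-adjustment cap limits it to $500.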
Our previous work has indicated that the establishment and adjustment of civil penalty maximums is only one part of the penalty process. Civil penalty maximums are generally reserved for the most egregious cases (e.g., those involving willful intent to violate the law and/or fatalities). Agencies investigate potential violations and determine the amount of penalty to be sought based on a variety of factors, including the severity of the incident, whether the individual or organization involved has a previous history of violations, and the individual or organization’s ability to pay the fine. In February 2001, we reported on the implementation of a statutory provision that required federal agencies to provide small entities (e.g., small businesses and small governments) with civil penalty relief. We concluded that the requirement was being implemented by the agencies differently, and that small entities may not be receiving any more relief than larger entities. We have reported several other times on the assessment of civil penalties and the collection of civil penalty debt. For example, see the following. In August 1994, we reported on the enforcement of the Employee Retirement Income Security Act of 1974 (ERISA), noting that the Pension and Welfare Benefits Administration’s (PWBA) enforcement program could be strengthened by increasing the use of penalties authorized by the statute to deter plans from violating the law. In March 1996 we said “penalties play a key role in environmental enforcement by deterring violators and by ensuring that regulated entities are treated fairly and consistently so that no one gains a competitive advantage by violating environmental regulations.” In March 1999, we reported that the potential usefulness of civil monetary penalties in relation to noncompliant nursing homes was being hampered because of delays in the application of the sanctions by the Health Care Financing Administration. 
In May 2000, we reported that the Office of Pipeline Safety (OPS) had decreased the number and amount of fines while increasing the use of less severe corrective actions. We questioned this approach, and recommended that the agency determine the impact of the reduced use of fines on compliance with safety requirements. We subsequently reported that OPS had increased its use of fines. In December 2001, we reported on the growth in civil monetary penalty receivables at the Centers for Medicare and Medicaid Services (CMS). In that report, OMB stated that it has broad oversight responsibility in monitoring and evaluating governmentwide debt collection activities. However, OMB said it is the agencies’ responsibility to monitor, manage, and collect the debt, and the agency’s Office of the Inspector General’s responsibility to audit debt collection activities. We have also specifically commented on the adjustment of civil penalties for inflation. In September 1993, the National Performance Review (NPR) recommended that federal civil monetary penalties be adjusted for inflation. Specifically, NPR recommended that a “catch-up” penalty adjustment be made to bring penalties up to date, and that the need for additional inflation adjustments be automatically reassessed every 4 years. NPR estimated that implementation of the recommendation would increase federal receipts by nearly $200 million during the fiscal year 1994 through fiscal year 1999 period. In our December 1994 report on NPR, we generally agreed with the recommendation, noting that civil penalties should be periodically adjusted so that they do not lose relevancy. The objectives of this report are to determine (1) whether, as of June 30, 2002, agencies with penalties covered by the Inflation Adjustment Act had made the required penalty adjustments and (2) whether provisions in the act have prevented agencies from keeping their penalties in pace with inflation. 
To address the first objective, we electronically searched the Federal Register and determined whether the required penalty adjustment regulations had been published by all federal agencies that the 1991 OMB report and the 1997 Department of the Treasury report indicated had civil penalty authorities that were covered by the Inflation Adjustment Act. We defined an “agency” to be each organizational unit that was separately listed in those reports or that separately published penalty adjustment regulations in the Federal Register. We also examined the adjusted penalties and determined whether any of them were eligible for a second round of adjustments. We focused part of our analysis on six agencies with large penalty assessments in 1997 (the most recent data available)—the Environmental Protection Agency (EPA); the Mine Safety and Health Administration (MSHA) and PWBA within the Department of Labor; and the Federal Aviation Administration (FAA), the National Highway Traffic Safety Administration (NHTSA), and the United States Coast Guard (USCG) within the Department of Transportation. We also focused on those six agencies in the second objective, comparing the amount of penalty adjustments made under the 10 percent cap with the amount of inflation that had occurred since the agencies’ penalties were last set or adjusted. As called for by the act, we used the CPI for all urban consumers during the month of June in the relevant years as our measure of the historical rates of inflation. We then calculated the amount of inflation that had not been accounted for by the agencies’ initial adjustments—what we refer to in this report as the “inflation gap.” We interviewed officials in each agency to determine their views regarding the effect of the 10 percent cap on their agencies’ civil penalties and enforcement efforts. We also focused on those six agencies to examine the effects of the adjustment calculation requirements and rounding rules in the statute.
Specifically, we used certain commonly occurring penalty amounts to demonstrate how the statute requires the penalties to be adjusted and rounded, and developed projections of how closely the resultant penalties tracked a possible rate of inflation. Our projections assume an annual rate of inflation of 2.5 percent—about the average rate since the Inflation Adjustment Act was enacted in 1996. We focused another part of our review on the five agencies responsible for penalties that are exempted from the act’s requirements—CMS within the Department of Health and Human Services, the Occupational Safety and Health Administration (OSHA) within the Department of Labor, the U.S. Customs Service (Customs) and the Internal Revenue Service (IRS) within the Department of the Treasury, and the Social Security Administration (SSA). We interviewed officials in each of the five agencies, asking if they knew why their agencies’ penalties had been excluded, the effect of the exclusions on their ability to keep their penalties in pace with inflation, and whether they believed their penalties should now be adjusted for inflation. We also contacted officials from OMB, the Department of Justice, and FMS within the Department of the Treasury to obtain their views regarding the need for central management oversight of the act. We focused part of our review on the extent to which the Inflation Adjustment Act permits agencies to keep their civil penalties in pace with inflation. However, we made no attempt to ascertain whether any individual penalty was set at a sufficient level to deter violations of federal law or regulation. We also did not attempt to determine the extent to which the agencies’ maximum civil penalties are administered. 
Also, because there is no current comprehensive database that identifies each agency with civil penalty authority subject to the provisions of the Inflation Adjustment Act, we cannot be sure that we have identified all of the agencies or penalties covered by the act. We did not attempt to verify whether a penalty adjusted for inflation by an agency appropriately met the definition of a covered penalty in the Inflation Adjustment Act. We conducted our work from March 1, 2002, through September 1, 2002, at the headquarters offices of the above-mentioned agencies in accordance with generally accepted government auditing standards. We provided a draft of this report to OMB, the Department of Justice, and the Department of the Treasury for their review and comment. The comments that we received are reflected in the “Agency Comments and Our Evaluation” section of this report. Our review indicated that lack of compliance with the Inflation Adjustment Act has been widespread. As of June 2002, 16 of 80 federal agencies with civil penalties covered by the act had not adjusted any of their penalties for inflation. Only 9 of the 64 agencies that made initial penalty adjustments did so by the statutory deadline of October 23, 1996, and some of the adjustments were not made until years after the deadline. Also, 19 of the 64 agencies that made initial adjustments had not made required subsequent adjustments for eligible penalties, and several other agencies made the adjustments incorrectly. The act does not give any agency the authority or responsibility to monitor agencies’ compliance or provide guidance on its implementation. Representatives from the six agencies with covered penalties that we contacted all supported giving some federal entity that authority and responsibility. 
As noted previously, the Inflation Adjustment Act required each federal agency with covered civil penalties to publish a regulation in the Federal Register by October 23, 1996, making initial inflation adjustments to its civil penalties (to a maximum of 10 percent). We reviewed OMB’s 1991 report to Congress and other sources and determined that 80 federal agencies had at least one civil penalty that was covered by the act’s requirements. Our review of the Federal Register indicated that, as of June 30, 2002, 16 of the 80 agencies had not published the required penalty adjustment regulations. (See app. I for a list of the 80 agencies and which ones did and did not publish regulations.) We contacted 4 of the 16 agencies that had not published regulations, those that appeared to have multiple civil penalties and/or active civil penalty programs—the Department of Education, the Federal Energy Regulatory Commission, the Food and Drug Administration within the Department of Health and Human Services, and Customs. In separate reports published during this review, we recommended that each of the four agencies publish the required initial penalty adjustment regulations. Each of the agencies agreed to do so, and some have since published the required adjustments. Officials in these agencies said they did not know why their agencies had not adjusted their penalties earlier. Some of the penalty adjustment regulations that were published covered all of the department or agency’s civil penalties, but others covered only a particular subunit within the department or agency. For example, the Department of Agriculture’s initial inflation adjustment regulation covered eight different agencies within the department (e.g., the Agricultural Marketing Service, the Animal and Plant Health Inspection Service, and the Food Safety and Inspection Service). 
In contrast, nine agencies within the Department of Transportation (e.g., FAA, USCG, and NHTSA) each published separate penalty adjustment regulations. Only 9 of the 64 agencies making penalty adjustments published their regulations by the statutory deadline of October 23, 1996. Most of the other 55 agencies published their regulations by the end of 1997, but 7 agencies did not do so until 1998 or later. For example, the Office of the Attorney General within the Department of Justice did not publish its initial Inflation Adjustment Act regulation until August 30, 1999. The Wage and Hour Division within the Department of Labor’s Employment Standards Administration did not publish its initial regulation until December 7, 2001—more than 5 years after the statutory deadline. All six of the agencies that we focused on in this part of our review had published a first round of penalty adjustment regulations by June 2002. However, none of the agencies published their regulations by the October 23, 1996, statutory deadline. For example, MSHA did not publish its initial penalty adjustments until April 22, 1998—nearly 18 months after the deadline. The Inflation Adjustment Act required agencies with covered civil penalties to examine those penalties and, where possible under the act’s procedures, make at least one more round of penalty adjustments within 4 years after the initial adjustments. Therefore, if an agency published its initial penalty adjustments on October 23, 1996, it should have examined those penalties and, where possible, published a second round of adjustments by October 23, 2000. However, as we viewed the act, if the agency did not publish the initial adjustments until 2 years after the deadline (i.e., October 23, 1998), the agency was not required to publish a second round of adjustments for eligible penalties until October 23, 2002. 
As appendix I shows, 29 of the 64 agencies that published initial penalty adjustment regulations under the Inflation Adjustment Act had not published a second round of adjustments by June 30, 2002. However, in some cases, 4 years had not elapsed since the agencies’ initial penalty adjustments. In other cases, the agencies’ penalties were not eligible for a second round of adjustments under the procedures prescribed in the Inflation Adjustment Act. In total, 19 agencies had at least one penalty that was eligible for a second adjustment as of June 30, 2002, but the agencies had not adjusted those penalties. Among the six agencies that we focused on in this part of our review, two agencies—FAA and NHTSA—had published a second round of adjustments for all of their eligible penalties by June 30, 2002. One agency—PWBA— had no penalties that were eligible for adjustment under the Inflation Adjustment Act’s procedures. The three remaining agencies—EPA, USCG, and MSHA—had penalties that were eligible for a second round of adjustments as of June 30, 2002, but had not adjusted those penalties in a manner consistent with the act’s requirements. EPA published a second round of adjustments on June 18, 2002 (nearly 5 ½ years after its first adjustments), but later withdrew the rule after we advised EPA that the adjustments were inconsistent with the Inflation Adjustment Act’s requirements. EPA officials told us that the agency would publish another adjustment regulation in 2003. USCG could have adjusted 56 of its 122 previously adjusted penalties by June 2002. MSHA could have adjusted at least 2 of its 5 previously adjusted penalties by June 2002. In separate reports published during this review, we recommended that USCG and MSHA publish a second round of penalty adjustments, and each agency subsequently agreed to do so. 
Several provisions in the Inflation Adjustment Act are unclear, and agencies raised a number of questions during our review regarding some of the act’s requirements. The act does not clearly indicate whether second-round adjustments should be made within 4 years of the October 23, 1996, deadline, or within 4 years of the initial adjustment—whenever it occurred. Although it is clear that the Inflation Adjustment Act covers penalty maximums and minimums set in statute, it is not clear whether penalties set administratively by the agencies are covered by the act’s requirements. It is not clear whether the term “last set or adjusted” refers to the date an adjustment was published in the Federal Register or the date the adjustment took effect. Officials in several agencies raised questions during our review regarding how the rounding rules in the statute should be interpreted. In January 2002, the Federal Election Commission’s General Counsel developed a memo examining various interpretations of those provisions and indicating that agencies were interpreting the requirements differently. Officials in one agency said it was unclear whether future inflation adjustments should be based on the penalty prior to or after rounding. When the Inflation Adjustment Act was enacted in 1996, Congress did not give any federal agency the authority or responsibility to monitor agencies’ compliance with the act or to provide guidance to agencies on how the act should be implemented. In November 1996, at the request of OMB’s Office of Federal Financial Management (OFFM), FMS developed written guidance on the Inflation Adjustment Act and held a workshop on how the act should be implemented. As noted previously, FMS also reported on agencies’ civil penalty assessments and collections until 1998 at the request of OFFM. However, FMS has not provided any guidance to agencies on the Inflation Adjustment Act since 1996 and has never monitored agencies’ compliance with the act. 
In contrast, other crosscutting regulatory reform statutes make a particular executive branch agency responsible for monitoring compliance and providing guidance to other agencies. For example, the Paperwork Reduction Act gives OMB the authority and responsibility to approve agencies’ proposed information collections and to provide guidance to the agencies on how the act should be implemented. Also, the Regulatory Flexibility Act requires the Small Business Administration’s Chief Counsel for Advocacy to monitor and report at least annually on agencies’ compliance with the act. Representatives from all six of the agencies with covered penalties that we contacted supported giving some federal entity the authority and responsibility to monitor agencies’ compliance with the Inflation Adjustment Act and to provide guidance to the agencies on the act’s implementation. One representative said that FMS had been very helpful during the act’s early implementation, but since then there had been no entity that the agencies could turn to for advice and guidance. Several provisions in the Inflation Adjustment Act have limited agencies’ ability to keep their penalties in pace with inflation. The 10 percent cap on initial adjustments prevented some agencies from fully adjusting for hundreds of percent of inflation that had occurred since certain penalties were last set or adjusted by Congress. The resultant “inflation gap” cannot be corrected under this statutory authority through subsequent adjustments and, in fact, grows with each adjustment. Also, the act’s requirements on how the penalty adjustments should be calculated and rounded prevent agencies from capturing all of the inflation that occurs between adjustments, and can prevent agencies from increasing certain penalties until inflation increases by 45 percent or more. In addition, the act exempted hundreds of penalties from inflation adjustment, some of which have not been adjusted for decades.
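The rounding effect can be seen arithmetically: for a penalty just over $1,000, the act rounds the increase to the nearest $1,000, so any raw increase under $500 rounds to zero. The sketch below is our illustration (not the act's text), showing a hypothetical $1,100 penalty staying frozen until inflation reaches roughly 45 percent ($500 / $1,100 ≈ 0.455).

```python
def rounded_increase(penalty, inflation_rate, multiple=1_000):
    """Increase for a penalty in the over-$1,000, up-to-$10,000 bracket,
    rounded to the nearest $1,000 as the act requires."""
    raw = penalty * inflation_rate
    return round(raw / multiple) * multiple

penalty = 1_100
assert rounded_increase(penalty, 0.40) == 0      # 40 percent inflation: $440 rounds to zero
assert rounded_increase(penalty, 0.46) == 1_000  # ~46 percent: the increase finally registers
```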
The Inflation Adjustment Act limited covered agencies’ first adjustments under the statute to 10 percent of the penalty amount. In the six agencies that we focused on in this portion of our review, all 232 initial penalty adjustments were capped at 10 percent. As table 1 shows, none of these 10 percent adjustments were sufficient to fully account for the amount of inflation that had occurred since the underlying penalties were last set or adjusted. The size of the inflation gap varied by agency and by penalty within agencies. In some cases, the cap did not severely limit the agencies’ ability to account for inflation. For example, the 10 percent adjustment that MSHA made to its five penalties in 1998 (using the June 1997 CPI) accounted for all but 4 percent of the inflation that occurred since those penalties were last adjusted in 1992. However, in other cases the 10 percent cap on agencies’ initial adjustments resulted in sizable inflation gaps. For example, one of the civil penalties that FAA adjusted in 1996 was a maximum $1,000 penalty for, among other things, possession of a firearm discovered at a baggage security checkpoint. The penalty was set in 1958 and, until 1996, had not been changed. As figure 1 illustrates, if adjusted for inflation in 1996 (using the June 1995 CPI), this penalty would have increased by more than 400 percent to $5,277. However, because the Inflation Adjustment Act limited agencies’ first adjustments to 10 percent, FAA was only able to increase this penalty by $100 to $1,100—$4,177 less than it would have been if fully adjusted for the amount of inflation that occurred from 1958 through 1995. The 10 percent cap on initial adjustments also resulted in sizable inflation gaps for several other penalties in the six selected agencies. For example, see the following. 
If fully adjusted for inflation in 1996, a NHTSA penalty last set in 1972 at $800,000 for a series of violations involving the failure to meet bumper standard testing criteria would have increased by 275 percent to more than $3 million. However, the 10 percent cap limited the increase to $80,000, leaving an inflation gap of more than $2.1 million.

An EPA penalty last set at $25,000 in 1976 for violation of the Toxic Substances Control Act would have increased by nearly 170 percent to more than $67,000 if fully adjusted for inflation in 1995. However, the 10 percent cap meant that the penalty could only increase by $2,500, leaving an inflation gap of nearly $40,000.

A PWBA penalty was last set at $100 per day in 1974 for refusal to provide information in a timely manner needed to determine compliance with certain requirements in ERISA. If fully adjusted for inflation in 1996, the penalty would have increased to more than $300 per day. However, with the 10 percent cap, the penalty could only increase by $10, leaving an inflation gap of more than $200.

Because of other provisions in the Inflation Adjustment Act, the inflation gap resulting from the 10 percent cap on initial adjustments cannot be corrected under this statutory authority—and, in fact, grows with each penalty adjustment. The act defines the term “cost of living adjustment” as the percentage by which the CPI for the year preceding the adjustment exceeds the CPI for the year in which the penalty was last set or adjusted. Therefore, agencies’ second adjustments under the statute could only take into consideration the amount of inflation since the first adjustment. As a result, any inflation gap remaining as a result of the 10 percent cap becomes permanent. Furthermore, because the capped penalties are smaller than they would have been without the 10 percent restriction, the subsequent adjustments calculated from that smaller base are also smaller, resulting in a widening of the inflation gaps.
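The cap-and-gap arithmetic described above can be sketched in a few lines of Python. This is an illustrative sketch, not a statement of the statutory formula in full: the dollar figures reproduce the FAA example from the text ($1,000 penalty set in 1958, worth $5,277 in June 1995 dollars), and the 6 percent second-round figure is the one quoted for June 1996 through June 1999.

```python
# Sketch of the permanent "inflation gap" created by the 10 percent cap.
# Figures follow the FAA example in the text; rates are illustrative.

def capped_adjustment(penalty, inflation, cap=0.10):
    """Adjusted penalty when the increase rate is capped (first adjustment)."""
    return penalty * (1 + min(inflation, cap))

original = 1_000              # penalty set in 1958
inflation_1958_1995 = 4.277   # ~427.7 percent: $1,000 grows to $5,277

fully_adjusted = original * (1 + inflation_1958_1995)
capped = capped_adjustment(original, inflation_1958_1995)

print(f"fully adjusted: ${fully_adjusted:,.0f}")          # $5,277
print(f"capped:         ${capped:,.0f}")                  # $1,100
print(f"inflation gap:  ${fully_adjusted - capped:,.0f}")  # $4,177

# The second adjustment may count only inflation since the first one
# (~6 percent, June 1996 to June 1999), so it is computed from the smaller
# capped base and the gap widens rather than closes.
second = capped * 1.06
print(f"after 2nd adjustment: ${second:,.0f}")            # $1,166 unrounded
```

Because each later adjustment compounds from the capped base rather than the fully adjusted one, the gap in the example grows from $4,177 to $4,584 by 1999, matching the figures in the text.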
For example, in the previously mentioned FAA penalty, the 10 percent cap on the agency’s December 1996 adjustment resulted in an adjusted penalty of $1,100 and an inflation gap of $4,177. Under the Inflation Adjustment Act, FAA was required to examine this penalty by December 2000 and to calculate the cost of living adjustment needed to account for inflation from June 1996 through June 1999. Inflation increased by about 6 percent during this period, so the unrounded increase in this penalty would have been $66 ($1,100 times .06), resulting in an unrounded adjusted penalty of $1,166. However, FAA could not go back and recapture any of the $4,177 inflation gap that resulted from the 10 percent cap on the 1996 adjustment. As figure 2 shows, by June 1999, the $1,000 penalty set in 1958 would have been $5,750 if fully adjusted for inflation. Therefore, the inflation gap resulting from the 10 percent cap would have increased from $4,177 to $4,584 ($5,750 minus $1,166). The limited legislative history that exists regarding the 1996 amendment to the Inflation Adjustment Act does not explain why the 10 percent cap was established. Until the 1996 amendment, no earlier executive branch or congressional initiative had called for any cap on the amount of inflation adjustments. In fact, legislation passed by the House of Representatives in 1993 included a provision for an immediate one-time catch-up adjustment. Officials in the six selected agencies said that they did not know why Congress established the 10 percent cap on initial penalty adjustments. In its second inflation adjustment regulation, NHTSA expressed concern that even with two inflation adjustments, some of the agency’s penalty amounts may be inadequate because of the 10 percent cap. Specifically, NHTSA said the following: Upon review, we concluded that application of the formulae permit some of our penalties to be increased at this time. 
We are doing so before the passage of four years in order to enhance the deterrent effect of these penalties because of their importance to our enforcement programs. Even with these increases, these penalties appear less than adequate as a full deterrent to violations of the statutes that we enforce. For example, the maximum penalty for a related series of violations under the National Traffic and Motor Vehicle Safety Act of 1966 as amended in 1974 was $800,000. It would have increased more than threefold, to $2.45 million, in June 1996 if (fully) adjusted for inflation. However, the adjustment was capped at $880,000. Further, under this aggregate penalty ceiling, on a per vehicle basis the maximum penalty amounts to less than one dollar per vehicle where a substantial fleet was in violation of the Safety Act.

We asked representatives from each of the six agencies that we focused on in this part of our review whether their agencies believed the 10 percent cap should be lifted and agencies either required or allowed to make catch-up adjustments. Although the agency representatives generally agreed that the 10 percent cap was a significant limitation on the maximum amount of the civil penalty that could be assessed on the “worst offenders,” they were generally noncommittal with regard to this issue, neither supporting nor opposing the elimination of the cap. One representative said he was not aware of any instance in which his agency had imposed its largest penalty (an $1,100 penalty for each day a violation occurred), so he did not believe a catch-up adjustment to account for lost inflation would have any effect on the agency’s enforcement actions. However, he indicated that the same situation might not be true for the agency’s other civil penalties.
The Department of Labor representative said his department would not support changing the statute to require agencies to make catch-up adjustments, but said it would have no problem changing the statute to allow agencies to do so. When determining whether adjustments to their penalties are permitted, the Inflation Adjustment Act requires agencies to compare the CPI from June of the year preceding the adjustment with the CPI in June of the year in which the penalty was “last set or adjusted pursuant to law.” Therefore, if an agency made its first round of penalty adjustments in October 1996 and examined those penalties in October 2000 to determine if further adjustments were warranted, the agency would have to compare the CPI for June 1996 with the CPI for June 1999—not the most current CPI data available or even the most recent June CPI data. As figure 3 shows, this “CPI lag” feature in the statutory adjustment procedures reduces the amount of inflation that can be accounted for from 10 percent (the amount of inflation from June 1996 through June 2000) to 6.1 percent (the amount of inflation from June 1996 through June 1999). The inflation lost as a result of the CPI lag in the statute cannot be recovered later because the statute requires each subsequent adjustment to be calculated from the CPI for the year in which the penalty was last set or adjusted (i.e., June 2000 in the above example)—not from the CPI used to make the last adjustment (June 1999). Therefore, as figure 4 shows, each time an agency makes an adjustment the agency loses a year of inflation that can never be recovered. Also, the amount of inflation lost as a result of the CPI lag in the Inflation Adjustment Act increases in proportion to the frequency with which the agency makes penalty adjustments. Each time that an agency adjusts its penalties, the agency loses a year of inflation. 
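The CPI-lag mechanics described above can be expressed as a short Python sketch. The June-to-June rates below are illustrative values chosen to be consistent with the percentages quoted in the text (2.3 percent for 1996–97, 2.0 percent for 1998–99, 6.1 percent for 1996–99, and 10 percent for 1996–2000); the function itself simply encodes the rule that each adjustment may count inflation only through June of the year before the adjustment, measured from June of the year the penalty was last adjusted.

```python
# Sketch of the CPI lag: each adjustment loses a year of inflation.

def captured_inflation(set_year, adjustment_years, rates):
    """Total inflation an agency may count under the statute's CPI rule."""
    factor, base = 1.0, set_year
    for adj in adjustment_years:
        for y in range(base, adj - 1):   # counts only through June of adj-1
            factor *= 1 + rates[y]
        base = adj                       # next comparison starts at June(adj)
    return factor - 1

# Illustrative June-to-June rates consistent with the figures in the text
# (1996 means June 1996 -> June 1997, and so on).
rates = {1996: 0.023, 1997: 0.017, 1998: 0.020, 1999: 0.0366}

print(f"{captured_inflation(1996, [2000], rates):.1%}")        # 6.1%
print(f"{captured_inflation(1996, [1998, 2000], rates):.1%}")  # 4.3%
print(f"{(1.023 * 1.017 * 1.020 * 1.0366 - 1):.1%}")           # ~10.0% actual
```

The sketch also reproduces the point that more frequent adjustments lose more inflation: adjusting once in 2000 captures 6.1 percent, while adjusting in both 1998 and 2000 captures only 4.3 percent of the roughly 10 percent of actual inflation.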
As figure 5 illustrates, if the agency in the above example had examined and been able to adjust its penalties twice during the period from 1996 to 2000, once in 1998, and again in 2000, the agency would have only been able to consider the amount of inflation that occurred from June 1996 through June 1997 (2.3 percent) and from June 1998 through June 1999 (2.0 percent)—a total of 4.3 percent—not the full amount of inflation that occurred from June 1996 through June 2000 (10 percent) or even the amount that occurred from June 1996 through June 1999 (6.1 percent). Representatives from the six agencies with covered penalties that we focused on in this part of our review generally said the CPI lag in the Inflation Adjustment Act should be corrected. One official from the Department of Labor said that it “doesn’t make much sense” to have a system in which agencies lose a year of inflation each time they make an adjustment, and supported changing the act in this area.

The rounding rules in the Inflation Adjustment Act can also significantly affect the size and the timing of agencies’ penalty adjustments. As noted previously, the act requires agencies to round penalty increases to certain dollar amounts, depending on the size of the penalty (not the size of the penalty increase). Specifically, the act provides that any increase should be rounded to the nearest multiple of $10 for penalties of $100 or less; $100 for penalties greater than $100 up to $1,000; $1,000 for penalties greater than $1,000 up to $10,000; $5,000 for penalties greater than $10,000 up to $100,000; $10,000 for penalties greater than $100,000 up to $200,000; and $25,000 for penalties greater than $200,000.
For example, if the CPI increased by 10 percent during the relevant period since a $7,500 penalty was last set or adjusted, the resultant penalty increase ($750) would be rounded to the nearest multiple of $1,000—which is $1,000. Therefore, the new rounded penalty would be $8,500 ($7,500 plus $1,000). Our analysis indicated that these requirements can prevent agencies from adjusting certain penalties until inflation increases substantially—sometimes 45 percent or more. At recent rates of inflation, that can mean that agencies cannot make penalty adjustments for 15 years or more. For example, after a first round of adjustments in July 1997, one of PWBA’s seven civil penalty maximums at the time was $11, five were $110, and one was $1,100. Under the statute, any effort by the agency to increase its penalties during calendar year 2001 (4 years after the agency’s last adjustment) could include any increase in inflation that occurred from June 1997 through June 2000. During that period, the CPI increased by about 7.5 percent. However, as table 2 shows, multiplying each of the 1997 penalty amounts by 7.5 percent and applying the rounding rules in the act does not result in a penalty adjustment for any of the agency’s penalties. In fact, PWBA’s penalties are not eligible for an increase under the rounding rules in the Inflation Adjustment Act until the CPI increases by 45.5 percent. Assuming a 2.5 percent annual rate of inflation in the future (about the average rate since the Inflation Adjustment Act was passed in 1996), PWBA would not be able to increase any of its civil penalties for 17 years. Appendix II shows the maximum civil penalty amounts in each of the six selected agencies after the first round of adjustments, the number of penalties at each maximum penalty amount, the inflation trigger points for each penalty amount, and the number of years that would have to elapse before those penalties could be adjusted again (assuming a 2.5 percent rate of inflation).
Of the 232 penalties in the six agencies, 208 (about 90 percent) could not be adjusted within the 4-year period contemplated in the statute. Ninety-eight of the penalties (about 42 percent) could not be adjusted for at least 10 years, and 44 (about 19 percent) could not be adjusted for 17 years or more. For example, after the first round of adjustments (assuming a 2.5 percent inflation rate), see the following:

Six NHTSA penalties at the $1,100 level could not be adjusted under the statute for 17 years. These penalties include statutory violations involving failure to comply with requirements to reduce traffic deaths and injuries, the tracing and recovery of stolen vehicles and component parts, and the provision of information needed to determine the crashworthiness of motor vehicles. One NHTSA penalty at the $5.50 level (assessed for each 0.1 mile per gallon by which automobiles fall short of the applicable fuel economy standard, multiplied by the number of those automobiles) could not be adjusted for 28 years.

Two EPA penalties at the $1,100 level could not be adjusted under the statute for 17 years. These include penalties for certain violations of the Clean Water Act and the Federal Insecticide, Fungicide, and Rodenticide Act.

Twenty-seven USCG penalties at the $1,100 and $110 levels could not be adjusted under the statute for 17 years. The penalties involve violations related to the reporting of marine casualties, hazardous substance discharges, bridge maintenance and operation, and other statutory violations.

In general, penalties that are just over the lower end of the rounding categories (e.g., $110 or $1,100) take longer to adjust than penalties at the upper end of those categories (e.g., $1,000 or $10,000). When the agencies are finally able to adjust their penalties for inflation, the size of the adjustments permitted under the rounding rules in the statute can be significantly larger than the amount of inflation that has occurred.
For example, as illustrated in table 3 for the PWBA penalties discussed above, although the CPI must increase 45.5 percent before the agency can make an adjustment, the adjustment that is ultimately provided will be twice that amount—90.9 percent. Figure 6 illustrates the 17-year period that may be required for an adjustment of the $1,100 penalty and the overcompensation that can occur because of the rounding rules. Assuming a 2.5 percent annual rate of inflation and applying the adjustment formula in the statute, in 2014 (17 years after the agency’s first adjustment) PWBA’s $1,100 penalty could be increased by $1,000 to $2,100. However, if the penalty had just kept pace with inflation (i.e., increased 2.5 percent each year for 17 years) the penalty would have only increased by about $574 to $1,674—about $426 less than the rounded adjustment pursuant to the Inflation Adjustment Act. The figure also shows that in subsequent penalty adjustments under the statute (again assuming a 2.5 percent annual rate of inflation), the size of the rounded penalty is almost always above the penalty amount if it had just kept pace with inflation. For example, applying the rounding rules, the $2,100 rounded penalty would be eligible for another $1,000 increase to $3,100 in the year 2024—10 years after the previous adjustment. However, if the original $1,100 penalty had just kept pace with inflation from 1997 through 2024 it would be $2,143—$957 less than the rounded penalty. By the fifth adjustment in 2038, the rounded civil penalty ($5,100) is projected to be more than $2,000 larger than the penalty if it had simply kept pace with inflation ($3,027). During our review, we determined that several agencies were rounding their penalty adjustments incorrectly. Specifically, the agencies were rounding the increases based on the size of the unrounded penalty increase rather than the size of the penalty. 
Although this method is inconsistent with the requirements of the Inflation Adjustment Act, as figure 7 shows (again using the $1,100 penalty for illustration and assuming a 2.5 percent annual rate of inflation), rounding based on the size of the increase yields more frequent adjustments than the statutory approach (rounding based on the size of the penalty), and the results more closely track the actual changes in inflation over time. The agency could make adjustments every 2 years (as illustrated in the figure), but must do so at least once every 4 years. Although rounding based on the size of the increase produces improved results, the resulting penalty adjustments are less than they would be if the actual rates of inflation were used. For example, as figure 7 shows, by the year 2021, the penalty amount derived by rounding based on the size of the increase would be $1,480—$510 less than if the penalty had just kept pace with the projected rate of inflation ($1,990). However, virtually all of the difference between these two figures is caused by the CPI lag feature discussed earlier (in which only a portion of the amount of inflation occurring during an adjustment period is counted). As figure 8 shows, rounding penalty adjustments based on the size of the increase without the CPI lag allows the agency to make adjustments each year, and the result is a much closer fit to the projected rate of inflation. By the year 2021, the rounded penalty is only $10 more ($2,000 versus $1,990) than if it had simply kept pace with inflation. Representatives from all six of the agencies that we focused on in this part of our review strongly supported changing the rounding rules in the Inflation Adjustment Act. All of them said the rules were problematic because of their complexity and/or their effects on the agencies’ ability to make timely and accurate adjustments.
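The contrast between the two rounding methods can be shown directly in code. This sketch uses the $1,100 penalty from the text with an illustrative 5 percent of accumulated inflation; the tier table is the one quoted from the statute, applied once to the penalty size (the statutory method) and once to the size of the raw increase (the method several agencies were using).

```python
# Contrast of the two rounding methods: keyed to the size of the penalty
# (statutory) versus the size of the raw increase (what agencies did).

def multiple_for(amount):
    """Statutory rounding tiers, keyed to a dollar amount."""
    for limit, mult in [(100, 10), (1_000, 100), (10_000, 1_000),
                        (100_000, 5_000), (200_000, 10_000)]:
        if amount <= limit:
            return mult
    return 25_000

def round_by_penalty(penalty, inflation):
    """Statutory approach: the multiple is keyed to the penalty size."""
    m = multiple_for(penalty)
    return round(penalty * inflation / m) * m

def round_by_increase(penalty, inflation):
    """Agencies' approach: the multiple is keyed to the raw increase."""
    raw = penalty * inflation
    m = multiple_for(raw)
    return round(raw / m) * m

# Raw increase: $1,100 * 0.05 = $55.
print(round_by_penalty(1_100, 0.05))   # 0  -- $55 rounds to nearest $1,000
print(round_by_increase(1_100, 0.05))  # 60 -- $55 rounds to nearest $10
```

Under the statutory method the $55 raw increase vanishes entirely, while rounding by the size of the increase produces a $60 adjustment, which is why the latter tracks inflation more closely even though it is inconsistent with the act.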
Alternatives that they suggested to the current approach included rounding based on the size of the penalty increase (rather than the size of the penalty itself) and elimination of rounding altogether.

The Inflation Adjustment Act requires each agency to adjust its civil penalties for inflation, but explicitly exempts penalties established under certain statutes: (1) the Social Security Act, (2) the Occupational Safety and Health Act of 1970, (3) the Internal Revenue Code of 1986, and (4) the Tariff Act of 1930. As table 4 shows, the exemptions in the act account for at least 238 penalties enforced by five federal agencies: CMS within the Department of Health and Human Services, OSHA within the Department of Labor, Customs and IRS within the Department of the Treasury, and SSA. The legislative history of the act does not indicate why these statutes were exempted from the inflation adjustment requirements. All six of OSHA’s exempted civil penalties were last adjusted by Congress in 1990. Therefore, as of June 2002, all of them were 38 percent less than if they had fully kept pace with inflation since 1990. However, as table 5 illustrates, the dates that the other agencies’ exempted penalties were last set or adjusted vary substantially. As a result, the amount of inflation that has occurred since the agencies’ last adjustments also varies. For example, eight IRS penalties have not been changed since 1954, but three other IRS penalties were set in 1998. As a result, by June 2002 the amount of inflation that had occurred since the agency’s penalties were last set or adjusted ranged from 10 percent (for the 1998 penalties) to 569 percent (for the 1954 penalties). One Customs penalty had not been adjusted since 1879—resulting in an inflation gap of more than 1,700 percent. Overall, 142 (nearly 60 percent) of the 238 exempted penalties would need to be increased by 50 percent or more to be fully adjusted for inflation as of June 2002.
Twenty-six of the penalties (about 11 percent) would need to be adjusted by at least 100 percent. These inflation gaps notwithstanding, officials in four of the five agencies with exempted penalties—CMS, IRS, Customs, and OSHA—said that their penalties did not need to be adjusted for inflation. CMS officials said that, despite their age, some of the maximum penalties in the Social Security Act are still fairly high, thereby giving the agency the flexibility it needs when deciding on the size of the penalty imposed. They also said that some of the penalties could be compounded monthly, weekly, or daily, resulting in even higher penalty maximums if needed. As a result, they said that CMS has the leverage it needs to counteract the effects of inflation on penalty amounts that were set by Congress, in some cases, decades earlier. IRS officials said that the agency’s penalties for fixed dollar amounts can be compounded daily. As a result, they said, the maximum penalty assessed could be substantial even without adjusting for inflation. In addition, they said that IRS penalties sometimes contain formulas (e.g., a percentage of the amount invested or of the amount of tax due) that implicitly account for inflation. Customs officials said that they are satisfied with the adequacy of the fixed amount penalties provided for within the Tariff Act and the deterrent effect that they provide. For example, the most commonly assessed fixed amount penalty—a $5,000 penalty for violation of 19 U.S.C. 1436 assessed against a master of a vessel, operator of a vehicle, or pilot of an aircraft for failing to comply with statutory requirements concerning report of arrival of conveyances and presentation of accurate cargo and passenger manifest information—has proven to be an effective deterrent. OSHA officials said that Congress increased the agency’s penalties seven-fold in 1990—far in excess of the amount of inflation that had occurred since those penalties were previously set in 1970. 
As a result, they said, the 1990 penalty amounts were still sufficient to keep their penalties in line with the amount of inflation that has occurred since 1970. In addition, they said the agency’s policy allows penalties to be assessed on a violation-by-violation basis, which creates a multiplier effect. They indicated that this multiplier effect could raise the penalty to an amount that would exceed the inflation-adjusted levels. In contrast, SSA officials said that inflation adjustments are currently needed for at least some of their penalties because they have been eroded by inflation over time and have become less effective.

Civil monetary penalties are an important element of regulatory enforcement. Suitably severe maximum penalties allow agencies to punish willful and egregious violators appropriately and serve as a deterrent to future violations. However, civil penalties can lose their ability to punish and deter if unadjusted for inflation. Therefore, as we have said previously, we believe that civil penalties should be periodically adjusted for the effects of inflation so that they do not lose their relevance. Doing so can also increase federal receipts from those penalties, perhaps by tens of millions of dollars per year. Our review indicated that the Inflation Adjustment Act limits agencies’ ability to keep their civil penalties in line with inflation. Because of the 10 percent cap on initial penalty adjustments, some civil penalties are hundreds of percent less than they would be if fully adjusted for the amount of inflation since Congress last set or adjusted them. Viewed another way, those penalties currently represent only a fraction of their original value. The inflation gap resulting from the 10 percent cap can never be recovered under the statutory authority and grows each year.
Because of the rounding rules in the statute, agencies can be prevented from making a second round of penalty adjustments until inflation increases 45 percent or more. Therefore, at recent rates of inflation, agencies may not be able to readjust their penalties for 15 years or more after their initial adjustments. Because of the way that the statute requires the agencies to use CPI data to calculate the raw adjustment, agencies will lose a year of inflation each time they make an adjustment. That lost inflation can never be recaptured in subsequent adjustments. Also, the statute requires agencies to use CPI data that are at least 7 months old, and perhaps as much as 18 months old. Because the statute exempted certain penalties from the act’s requirements, the agencies administering those penalties are unable to make even the modest adjustments permitted in light of the 10 percent cap, rounding rules, and CPI lag features discussed above. More than 100 of these exempted penalties have declined in value by 50 percent or more since Congress last set them. Our review also indicated widespread lack of compliance with and confusion about the Inflation Adjustment Act’s requirements. Agencies’ failure to comply with those requirements may have cost the government millions of dollars in lost penalties from individuals and organizations that are the worst violators of health, safety, environmental, and other statutes. We believe that an agency charged with monitoring agencies’ compliance with the Inflation Adjustment Act could have identified the compliance problems earlier in the act’s implementation, and may have been able to prevent them from occurring. For example, an oversight agency could have developed a database that would determine when penalties were due for an adjustment and notified the agencies of their responsibilities under the act. The agency could also suggest ways to make implementation of the act’s requirements better or easier.
For example, the agency could have provided a standard format by which agencies could explain how their penalties were adjusted and list the new penalty amounts, and/or could have provided agencies with computer programs to facilitate the computation of penalty adjustments and revised penalty amounts. In addition, detailed guidance to the agencies regarding the Inflation Adjustment Act’s requirements might have prevented some of the questions and problems that have arisen during its implementation. Finally, an oversight agency could collect information regarding civil penalty assessments and collections that has been unavailable for the past 5 years. That information could help Congress understand which agencies have civil penalty authority, the extent to which certain penalties are being used, and the extent to which agencies are developing alternatives to the exemptions from the Inflation Adjustment Act and the limitations imposed by the act on their penalty adjustments.

If Congress wants federal civil penalties to regain their full impact and deterrent effects, it should consider amending the Inflation Adjustment Act to require agencies to adjust their penalties for the full amount of inflation that has occurred since they were last set or adjusted by Congress. This catch-up adjustment could occur all at once or in a series of adjustments. Alternatively, Congress could amend the act to permit (but not require) agencies to make catch-up adjustments.
If Congress wants federal civil penalties to be adjusted on a more timely and accurate basis, it should consider amending the Inflation Adjustment Act to (1) allow agencies to use more current CPI data to calculate the size of penalty increases, (2) require that changes in the CPI be calculated without losing a year of inflation, and (3) either eliminate the rounding provisions altogether (e.g., adjust penalties for the actual amount of inflation that occurred) or change the way in which penalty increases are rounded (e.g., round based on the size of the increase rather than the size of the penalty itself). If Congress wants penalties currently exempted from the act to be covered, it should consider amending the Inflation Adjustment Act to permit agencies to adjust those penalties for inflation. Finally, Congress should consider giving one or more executive branch agencies the authority and responsibility to monitor the act’s implementation and provide guidance to the agencies. A single agency could be made responsible for both providing guidance to agencies on the implementation of the Inflation Adjustment Act and monitoring compliance with the act. Alternatively, those functions could be given to separate agencies. The agency or agencies could also collect basic information on which agencies have civil penalty authority, the amount of penalty assessments and collections, and the agencies’ use of alternative mechanisms to increase assessments and collections.

On February 11, 2003, we provided a draft of this report to OMB, the Department of Justice, and the Department of the Treasury for their review and comment. We also provided a draft for technical review to the six selected agencies with covered penalties and the five agencies with penalties not covered by the Inflation Adjustment Act. Two of the agencies with covered penalties—NHTSA and PWBA—provided us with technical comments, which we incorporated as appropriate.
For example, in response to a comment from NHTSA, we clarified that both FAA and NHTSA had published a second round of penalty adjustments by June 30, 2002, for all of the agencies’ eligible penalties. On February 26, 2003, we received written comments on the draft report from the Director of the Audit Liaison Office within the Department of Justice. On behalf of the department, she suggested that we change our matter for congressional consideration to state that Congress should provide not only the authority and responsibility to monitor the act’s implementation, but also the “necessary resources.” We did not make this change because we do not believe that these roles will require significant, dedicated resources. The Director did not comment on the other proposed changes to the act’s requirements (e.g., elimination of the inflation gap created by the 10 percent cap or changes to the rounding rules). On February 27, 2003, we received written comments on the draft report from the Commissioner of FMS within the Department of the Treasury. The Commissioner said that FMS is “not the appropriate organization” for monitoring compliance with the Inflation Adjustment Act given the act’s “unique and complex features” and because such monitoring is not directly related to the agency’s responsibility for overseeing the collection of delinquent debt. He said it is FMS’s view that each federal agency is responsible for managing and collecting civil monetary penalty debt. He also said that each federal agency’s inspector general has a responsibility for overseeing agency compliance with the Inflation Adjustment Act. We agree that inspectors general can help oversee the act’s implementation within particular agencies. However, we also believe that some type of central oversight and guidance function is needed to ensure consistency in how the act is interpreted and applied, and to gather information about civil penalty assessments and collections throughout the government.
In addition, several of the departments and agencies with inspectors general did not make the required penalty adjustments—an indication that reliance on inspectors general alone may not result in improved compliance with the act. Also, at least two agencies with penalties covered by the act do not have inspectors general, so it is unclear what entities would oversee implementation in these agencies. Therefore, we did not change our matter for congressional consideration. The Commissioner of FMS also provided comments on specific sections of the draft report, which we incorporated as appropriate. For example, he suggested that we clarify that FMS developed written guidance on the Inflation Adjustment Act and held a workshop on how the act should be implemented at the request of OFFM, not at the agency’s initiative. The Commissioner did not comment on the other proposed changes to the act’s requirements. On March 7, 2003, we received written comments on the draft report from OMB staff in OFFM and the Office of the General Counsel. The OMB staff agreed with the report’s conclusions on the Inflation Adjustment Act’s requirements, namely that Congress directly assigned to each federal agency the responsibility to comply with the act’s requirements and did not assign to any agency the responsibility to provide centralized governmentwide guidance and oversight. As such, the staff said that it is the responsibility of each agency to comply with the act’s requirements, and that oversight of each agency’s compliance with the act resides first with that agency’s inspector general office. The OMB staff also said they did not agree that a centralized role of providing guidance and oversight of governmentwide compliance with the act was necessarily needed. However, they said that if it were concluded that a federal agency should take on this added responsibility, an agency other than OMB would likely be more appropriate for serving this role. 
As we indicated in our response to a similar comment from the Commissioner of FMS, we agree that agency inspectors general can help oversee the act's implementation within particular agencies. However, we also believe that some type of central oversight and guidance function is needed to ensure consistency in how the act is interpreted and applied. Therefore, we did not change our matter for congressional consideration. We are sending copies of this report to the Secretary of the Treasury, the Attorney General, and the Director of OMB. We are also sending copies to each of the six agencies with covered penalties that we focused on in this review, and to each of the five agencies with penalties that are not covered by the act. The report will also be available at no charge on GAO's homepage at http://www.gao.gov. If you have any questions concerning this report, please call Curtis Copeland or me at (202) 512-6806. Major contributors to this report include Andrea Levine, Joe Santiago, John Tavares, and Michael Volpe. Tables 6 and 7 identify the departments and agencies that the Office of Management and Budget's (OMB) 1991 report or other sources indicated have civil penalty authority and that are covered by the requirements of the Federal Civil Penalties Inflation Adjustment Act, as amended (Inflation Adjustment Act). The tables also identify the initial and subsequent penalty adjustment final rules that had been published as of June 30, 2002. Because there is no current comprehensive database that identifies each agency with civil penalty authority subject to the provisions of the Inflation Adjustment Act, we cannot be sure that we have identified all of the covered agencies or penalties. Also, the adjustment regulations listed reflect the results of our search of the Federal Register from 1996 through June 30, 2002. Other penalty adjustment regulations may have been published that we did not discover. 
In some cases, cabinet departments published a single rule that adjusted penalties for all subagencies/offices within the department (e.g., the Department of Agriculture's July 31, 1997, initial adjustment). In other cases, agencies within the departments each made their own adjustments (e.g., the Department of Transportation). The phrase "not made" in a cell indicates that a required initial or subsequent adjustment had not been made as of June 30, 2002, for at least one eligible penalty. The phrase "not required" in a cell in the "subsequent adjustment" column indicates that no adjustment was required as of June 30, 2002, either because 4 years had not elapsed since the initial adjustment or because not enough inflation had occurred to permit an adjustment under the rounding rules in the statute. Table 8 illustrates, for six selected agencies, (1) the size of the agencies' penalty amounts after the first round of adjustments, (2) the number of covered penalties at each amount, (3) the relevant rounding category in the Federal Civil Penalties Inflation Adjustment Act, as amended (Inflation Adjustment Act), for each penalty amount, (4) the amount of inflation needed to trigger a second round of penalty adjustments at that penalty amount, (5) the rounded penalty amount after adjustment, (6) the percentage increase that the rounded penalty represents (when compared to the earlier amount), and (7) the number of years it will take (at 2.5 percent inflation per year) to trigger this adjustment. The amount of inflation needed to trigger an adjustment is calculated by taking half of the rounding multiple and dividing that by the size of the penalty. For example, for the $11 Pension and Welfare Benefits Administration penalty, half of the $10 rounding multiple is $5, which when divided by $11 equals 45.4 percent. 
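The trigger calculation in the paragraph above can be expressed as a short Python sketch. The function name and the assumption of steady 2.5 percent annual compounding inflation are ours, chosen to match the report's worked example; this is an illustration, not part of the statute's text.

```python
import math

def inflation_to_trigger(penalty, rounding_multiple):
    # Under the act's rounding rules, an adjustment rounds to the nearest
    # applicable multiple, so the inflation-based increase must reach at
    # least half of that multiple before any adjustment can take effect.
    return (rounding_multiple / 2) / penalty

# Example from the report: the $11 penalty with a $10 rounding multiple.
needed = inflation_to_trigger(11, 10)  # 5 / 11, roughly 45 percent

# Years of steady 2.5 percent annual inflation before that threshold is met.
years = math.log(1 + needed) / math.log(1.025)  # just over 15 years
```

The small dollar amount relative to the rounding multiple is what stretches the waiting period past 15 years; a larger penalty with the same multiple would cross the half-multiple threshold much sooner.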
As the table shows, some of the agencies' penalties cannot be adjusted for more than 15 years under the rounding rules, and the rounded increases are twice the amount of actual inflation needed to trigger an adjustment. The General Accounting Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO's Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as "Today's Reports," on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select "Subscribe to GAO Mailing Lists" under the "Order GAO Products" heading.
Civil penalties are an important element of regulatory enforcement, allowing agencies to punish violators appropriately and serving as a deterrent to future violations. In 1996, Congress enacted the Inflation Adjustment Act to require agencies to adjust certain penalties for inflation. GAO assessed federal agencies' compliance with the act and whether provisions in the act have prevented agencies from keeping their penalties in line with inflation. As of June 2002, 16 of 80 federal agencies with civil penalties covered by the Inflation Adjustment Act had not made the required initial adjustments to their penalties. Nineteen other agencies had not made required subsequent adjustments, and several other agencies had made incorrect adjustments. The act does not give any agency the authority to monitor compliance or to provide guidance to agencies. More important, several provisions of the act have prevented some agencies from fully adjusting their penalties for inflation. One provision limited the agencies' first adjustments to 10 percent of the penalty amounts, even if the penalties were decades old and hundreds of percent behind inflation. The resultant "inflation gap" can never be corrected under the statute and grows with each subsequent adjustment. Also, the act's calculation and rounding procedures require agencies to lose a year of inflation each time they adjust their penalties, and can prevent some agencies from making adjustments until inflation increases by 45 percent or more (i.e., 15 years or more at recent rates of inflation). Finally, the act exempts penalties under certain statutes from its requirements entirely. Consequently, more than 100 exempted penalties have declined in value by 50 percent or more since Congress last set them.
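The effect of the 10 percent cap described above can be illustrated with a small calculation. The penalty amount and inflation figure below are invented for illustration and are not drawn from any particular agency in the report:

```python
def first_adjustment(penalty, cumulative_inflation, cap=0.10):
    # The act capped the first round of adjustments at 10 percent of the
    # existing penalty, no matter how far behind inflation the penalty was.
    raw_increase = penalty * cumulative_inflation
    capped_increase = min(raw_increase, penalty * cap)
    return penalty + capped_increase

# Hypothetical: a $1,000 penalty that is 200 percent behind inflation.
adjusted = first_adjustment(1000, 2.00)  # $1,100 rather than $3,000
gap = 1000 * (1 + 2.00) - adjusted       # a $1,900 shortfall
```

Because later adjustments start from the capped amount rather than the full inflation-adjusted amount, the shortfall is never recovered and grows over time, which is the "inflation gap" the report describes.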
Contract consolidation generally occurs when a federal agency combines in a solicitation two or more contract requirements that were previously provided to that agency under separate contracts. Agencies may achieve savings and other benefits through contract consolidation, but consolidation may limit small business opportunities to compete for federal contracts. A specific type of contract consolidation, known as bundling, has a more significant effect on small businesses' ability to perform the consolidated contract. Bundling generally takes place when two or more requirements that were previously performed by small businesses are combined into a single solicitation and result in a contract that is likely to be unsuitable for small business award. Table 1 summarizes the definitions of consolidation and bundling. To foster small business participation in federal contracting, Congress has required agencies to take various actions to justify the use of consolidated and bundled contracts. The Small Business Act was amended in 1997 to restrict federal agencies from bundling contracts without first taking certain steps, including conducting market research and demonstrating specific cost savings. Congress first enacted justification requirements for the consolidation of contract requirements in the National Defense Authorization Act for Fiscal Year 2004, but only for DOD, which was required to conduct market research, identify any alternative approaches that would involve a lesser degree of consolidation, and determine that the consolidation of contracts valued over $5 million (later increased in 2010 to $6 million) was necessary and justified. DOD could determine that an acquisition strategy was necessary and justified if the benefits of consolidation substantially exceeded the benefits of alternative approaches. 
Congress enacted the Small Business Jobs Act of 2010 (Jobs Act), which amended the Small Business Act to require that all federal agencies justify their consolidation of contract requirements with expected values greater than $2 million, thereby lowering the dollar threshold for DOD unless it met small business goals and requiring, for the first time, a justification from all federal civilian agencies. The amendments also established new requirements for agencies to identify any negative impacts that consolidation could have on small businesses and to certify that steps would be taken to include small businesses in the acquisition strategy. The 2010 amendments also included additional requirements for bundled contracts, such as requiring agencies to publicly post the rationales for each bundled contract on their websites. The National Defense Authorization Act for Fiscal Year 2013 later repealed a provision that tied the DOD dollar threshold to its achievement of small business goals, and required DOD to review consolidated contracts with expected values of over $2 million. In addition to the requirements for agencies to justify their consolidated and bundled contracts, federal law and regulations outline specific responsibilities for agencies' small business officials and the SBA in addressing small business issues, particularly when bundling is involved. Table 2 below lists agency and SBA responsibilities for consolidated and bundled contracts. Small Business Jobs Act of 2010, Pub. L. No. 111-240, § 1313. The Jobs Act required the over $2 million threshold for DOD only until SBA determined that DOD complied with the Small Business Act's government-wide contracting goals; if it made this determination, then the previous statutory threshold of greater than $6 million applied to DOD. The Small Business Act established an Office of Small and Disadvantaged Business Utilization (OSDBU) in each federal agency with procurement powers. 
The office is referred to as the Office of Small Business Programs in DOD and its components and the Office of Small Business Utilization at GSA. SBA has additional statutory responsibilities for bundled contracts, including maintaining a database on bundled contracts, reviewing savings and benefits of bundled contracts that are re-competed as bundled contracts, and reporting to the small business Congressional committees annually on contract bundling and its impact on small businesses. While various actors are engaged in the contracting process, ultimately, the contracting agencies make the final decisions to consolidate requirements. DOD and GSA accounted for more than 80 percent of the reported consolidated contracts in fiscal years 2011 and 2012, but DOD and GSA overstated their use of consolidated contracts in those two years. In turn, because these agencies account for such a high percentage of all contracts reported as consolidated, government-wide reporting on contract consolidation is not reliable. In our sample of 157 DOD and GSA contracts that were identified as consolidated in FPDS-NG, we found and agency officials confirmed that approximately 34 percent of the DOD contracts and all of the GSA contracts were miscoded and in fact were not consolidated. We also identified four consolidated DOD contracts, including one that was also bundled, that were not reported as such in FPDS-NG. In fiscal years 2011 and 2012, federal agencies reported that they had awarded 358 consolidated contracts and orders under contracts government-wide, with total obligations of approximately $3.58 billion for goods and services ranging from information technology to construction and base support services. We reviewed a sample of 157 contracts and orders from DOD and GSA and found that 48 DOD contracts and all 16 GSA contracts were miscoded as consolidated in FPDS-NG. Figure 1 shows the details of this analysis. 
DOD and GSA officials generally attributed the data entry errors to miscoding that was discovered when we requested contract documentation related to consolidated contracts. In most cases, the contracts should not have been reported as either consolidated or bundled in FPDS-NG. After being made aware of the errors, most DOD and GSA officials submitted corrected data to FPDS-NG. In addition to contracts that were over-reported in FPDS-NG, we identified four consolidated contracts, including one that was also bundled, that were not reported as such in FPDS-NG. The contracts were identified through various sources, including the Federal Business Opportunities (FedBizOpps) website, task orders associated with contracts identified as consolidated in FPDS-NG, and our review of eight DOD contracts that were not identified as consolidated in FPDS-NG. Agency officials confirmed that the four contracts were consolidated, but were generally unsure why they had been miscoded in FPDS-NG. For one contract, Army officials said the contract had been transferred from another contracting office and suggested it could have been a data migration issue in transferring the contract between systems. Most of the 100 DOD contracts from fiscal years 2011 and 2012 that we identified as consolidated complied with existing acquisition regulations by justifying the need to consolidate contract requirements over $6 million. Most of the contracts that did not comply were justified, but the determinations were not made by an official at a level senior enough to meet defense regulation requirements. In fiscal years 2011 and 2012, existing DOD regulations also did not fully reflect the 2010 changes in the law, including those that lowered the dollar amount at which DOD consolidations must be justified from over $6 million to over $2 million if DOD failed to meet small business goals and required the agencies to identify small business impacts. 
In addition, GSA had not amended its regulations for identifying and justifying consolidated contract requirements because it was waiting for SBA regulations implementing the consolidation provisions to be finalized. SBA issued the regulations in October 2013. In our review of 100 DOD consolidated contracts from fiscal years 2011 and 2012, which include the 96 contracts reported in FPDS-NG and 4 contracts identified from other sources, we found that most—82 percent—complied with requirements in the DFARS for justifying the consolidation. These provisions require defense agencies to, among other things, conduct market research, identify alternative contracting approaches that involve less consolidation, and include a determination by the senior procurement executive that the consolidation is necessary and justified. In determining if consolidation is necessary and justified for estimated contract requirements above a certain threshold, the DFARS provides that market research may indicate that the benefits of consolidation substantially exceed the benefits of the alternatives. Table 3 summarizes these requirements and the consolidated contracts we reviewed. Almost all of the DOD consolidated contracts that we reviewed, including the four that had not been reported in FPDS-NG, were supported by a memorandum stating that the consolidation was necessary and justified. These memorandums included a statement that expected benefits, including savings or other benefits, exceeded the benefits of alternative approaches, and the determination that the consolidations were necessary and justified. In 17 of the contracts we reviewed, however, the consolidation decision was authorized by DOD officials, but not at the level specified in regulations. Some DOD consolidated contracts also did not fully comply with DFARS consolidation requirements because they did not identify alternative contracting approaches that involve less consolidation. 
The consolidated contracts we reviewed largely addressed expected savings and benefits in quantitative terms. According to federal law and the DFARS, consolidation may be necessary and justified if the benefits of consolidation substantially exceed the benefits of alternative approaches, but the phrase "substantially exceed" is not further defined in statute or regulation. DOD guidance provides that the benefit analysis must prove that the acquisition strategy's benefits are much greater than the benefits of the alternative approaches. In more than half of the contracts we reviewed, officials quantified cost savings in either dollar amounts or as percentages; in other cases, justifications cited cost savings without providing metrics or data. For example, Air Force contracting officials justified consolidating requirements in a $5.9 million contract for electronic parts repair by stating that consolidation would allow the federal government to receive more favorable unit prices and permit the contractor to address obsolescence issues more efficiently. Table 4 shows savings and benefits described in the contracts we reviewed. Consolidated contracts that are also bundled must demonstrate specific cost savings, depending on the estimated value of the consolidated requirements, to justify the approach. Further, bundled contracts at DOD that are expected to exceed $8 million must include additional analysis, such as assessing the specific impediments to small business participation. The two bundled contracts we reviewed—a $288 million Navy contract identified as bundled in FPDS-NG and a $23 million Army contract identified through the FedBizOpps website—showed that both agencies complied with these additional requirements. Specifically, Navy officials conducted a cost benefit analysis demonstrating savings of $28 million—or 10 percent—over 5 years for an aircraft maintenance, modification, and support services contract. 
Army officials estimated savings of $5.5 million—more than the 5 percent required—to justify a bundled construction contract. While most DOD consolidated contracts were justified in accordance with existing DFARS, these regulations have not been fully updated to reflect new provisions on consolidating contract requirements in the 2010 Jobs Act. These new provisions include requiring agencies to identify any negative impact by the consolidation strategy on contracting with small businesses and ensure that steps are taken to include small businesses in acquisition strategies. The act also included a provision for DOD to follow the $2 million threshold for consolidating contract requirements if it failed to meet small business goals. Since DOD failed to meet these goals in 2011 and 2012, the act required that DOD demonstrate in its acquisition strategy that consolidation is necessary and justified for contracts with a total value of more than $2 million. However, during the time in which the contracts we reviewed were awarded, the DFARS threshold remained at the over $6 million level, and DOD officials followed this guidance. Since agency officials were relying on consolidation requirements in the DFARS and accompanying guidance that DOD provided them at that time, we determined the extent to which they complied with those requirements. For example, a $5.9 million Navy contract that combined requirements did not have a justification because it had an estimated value below the $6 million threshold specified in DFARS. In October 2013, DOD issued instructions to lower the consolidated threshold to $2 million. DOD officials also explained that they defer to SBA, the responsible regulatory agency, to issue final rules on other changes before updating their acquisition regulations. SBA’s final rule implementing the 2010 Jobs Act was issued in October 2013 and will take effect no later than December 31, 2013. 
Like DOD, GSA officials noted that they were waiting for SBA to issue final regulations before implementing the consolidation requirements in the Small Business Act, as amended. However, unlike DOD, GSA was not required to justify its consolidated contracts before the Jobs Act, which enacted contract consolidation requirements for the first time for all federal agencies. Thus, GSA does not currently have any agency-specific guidance providing details on its review process for identifying contracts that have consolidated requirements or a process to oversee and approve their consolidation. GSA officials explained that although GSA currently does not have its own consolidation guidance, when GSA processes consolidated contracts for DOD, a process called "interagency contracting," it complies with requirements in DFARS. In anticipation of SBA rulemaking, GSA officials said they are considering DOD consolidation guidance and other information to help prepare for creating their own consolidation procedures. Our review of 100 consolidated and bundled contracts and orders issued by DOD found that slightly more than half—or 52—were awarded to small businesses. Of the 48 contracts and orders awarded to large businesses, DOD and SBA officials often addressed small business impacts through measures such as small business subcontracting plans. The Small Business Act, with regard to consolidation, requires the head of each federal agency to ensure that the agency's decisions on consolidating contract requirements are made with a view to providing small businesses with appropriate opportunities to participate as prime contractors and subcontractors in the procurements of the federal agency. 
Similarly, the Small Business Act, with regard to bundling, provides that to the maximum extent practicable, agencies' procurement strategies must facilitate the maximum participation of small businesses as prime contractors and subcontractors, and must provide opportunities for small business participation during acquisition planning and in acquisition plans. Officials told us that contracting offices work closely with agency small business representatives to address small business concerns. In our review of 100 DOD consolidated and bundled contracts, we found that 52 contracts and orders from fiscal years 2011 and 2012 were awarded to small businesses. We found that most of the contracts awarded to small businesses had been reserved for small business participation through initiatives such as small business set-asides. For the 48 remaining consolidated contracts and orders that were awarded to large businesses, 30 contracts included requirements that were previously performed by small businesses. Almost all of these 30 contracts included measures to address small business participation, either by including small business set-asides for related orders or by subcontracting part of the requirements to small businesses. For example, one Air Force contract for environmental remediation included two options for issuing orders under the contract. One option was to allow only small businesses to compete for orders considered suitable for small business performance. The other option was to open competition for orders to both large and small businesses. We also identified consolidated contracts that used other means to address small business impacts. For example, small business officials raised concerns that a consolidated Air Force contract might include requirements previously performed by small businesses. To address these concerns, the contract that was issued specifically excluded any requirements that small businesses previously had performed. 
Contracting agency small business specialists have responsibilities for maximizing small business participation in federal procurement. For example, no later than 30 days before issuing a solicitation or placing an order, agencies are required to coordinate with their small business specialists when an acquisition strategy contemplates substantial bundling, unless the contract or order is set aside for small businesses. Further, the small business specialist must notify DOD’s Office of Small Business Programs if the strategy includes bundled requirements that the agency has not identified as bundled, or includes unnecessary bundling. If the strategy involves substantial bundling, the small business specialist must assist in identifying alternative strategies that would reduce or minimize the scope of bundling. In addition, DOD’s Office of Small Business Programs encourages program staff to include small business specialists in the early stages of acquisition planning. In the two bundled contracts we reviewed, the DOD and SBA small business representatives were consulted and involved in the contracting agencies’ efforts to identify small businesses capable of performing the requirements. Both contracts used subcontracting plans as the primary means to support small businesses affected by bundling. In one case, officials reported that small business participation through subcontracting was expected to be greater than it had been prior to bundling the requirements and the SBA representative said that appropriate steps were taken to protect small business interests. The Small Business Act requires SBA to track information on bundled contracts and annually report to Congress on these contracts. 
Specifically, SBA is required to (1) maintain a database containing data and information regarding each bundled contract awarded by a federal agency and each small business concern displaced as a prime contractor as a result of such bundling; (2) for bundled contracts that are recompeted as a bundled contract, determine the amount of savings and benefits achieved through the bundling of contract requirements, whether they would continue to be realized if the contract remains bundled, and whether the savings would be greater if the procurement requirements were divided into separate solicitations suitable for award to small business concerns; and (3) provide an annual report each March to the Committees on Small Business of the House and Senate on the number of small business concerns displaced as prime contractors as a result of bundled contracts awarded by federal agencies, along with related information, such as the cost savings realized by bundling over the life of the contract and the extent to which the contracts complied with the contracting agency's small business subcontracting plan. SBA has not submitted an annual report to Congress on bundling since fiscal year 2010, which officials attribute to an oversight. Officials also said that SBA is in the process of preparing reports for fiscal years 2011 and 2012, but did not estimate a timeline for completion. In the 2010 report, SBA provided data detailing the number of consolidated and bundled contracts awarded by federal agencies during the time period covered by the report. However, the report stated that SBA's ability to gather and analyze contract bundling data to the extent required in the Small Business Act was limited. 
Officials explained that agencies and SBA primarily use FPDS-NG as their information database to identify bundled contracts, but the system does not collect the information needed to meet other statutory reporting requirements, such as the number of small businesses displaced by bundled contracts. Similarly, SBA officials said that bundled contracts are rarely recompeted as bundled contracts. Further, SBA also noted that because the requirements of a bundled contract can change over the term of the contract, it is difficult to determine the level of savings achieved. Consolidating contract requirements can help agencies achieve cost savings and other efficiencies, but these decisions must be weighed against the potential impact on small businesses. In recent years, Congress has enacted provisions of law to help address concerns that small businesses might be negatively affected by contract consolidations, including identifying small business impacts and reducing the threshold for consolidated contract justifications to $2 million. Congress has enacted requirements for consolidated contracts that apply to all civilian agencies, including GSA, and it is important that agency officials have clear and complete guidance to help navigate what can be complex decisions about whether to consolidate requirements and how to report the resulting contracts. Although the full extent of agency miscoding of consolidated and bundled contracts is unknown, having such guidance could help improve agency reporting. DOD and GSA have been awaiting SBA’s final rules on consolidated contracts to update or create corresponding guidance. Now that SBA has issued its final rule, these agencies can take the actions needed as soon as practicable. Also, by lowering the dollar threshold for consolidated contracts from $6 million to $2 million to reflect recent legislative changes, DOD ensures that its components review and justify consolidated contracts at the levels Congress has required. 
SBA and agency small business officials play vital roles to help agencies take steps to determine and mitigate impacts on small business as required. But SBA has not fulfilled its responsibilities to track and report to Congress on the number of bundled contracts awarded or their impacts on small business. Until SBA carries out these reporting responsibilities, Congressional oversight intended to protect small businesses may not function as intended by lawmakers. To ensure that DOD reviews and justifies consolidated contracts at the dollar thresholds established in law, we recommend that the Secretary of Defense: Update existing defense acquisition regulations and related guidance to reflect recent legislative changes that lower the dollar threshold for consolidated contracts from over $6 million to over $2 million. To make guidance for contract consolidation consistent with current law, we recommend that the Secretary of Defense and the Administrator of General Services: Act expeditiously to update or establish agency guidance for consolidated contracts after the Small Business Administration rulemaking is completed. To promote agencies’ compliance with existing law, we recommend that the Administrator of the Small Business Administration: Submit required bundling reports to Congress. We provided a draft of this report to DOD, GSA, and SBA. In their written comments, the three agencies concurred with our recommendations and provided information on actions taken or underway to address them. DOD issued a Class Deviation to its acquisition regulation in October 2013 that implements our recommendation by lowering the dollar threshold for review and justification of consolidated contracts to $2 million. DOD also plans to update its acquisition regulations for consolidated contracts after the FAR changes resulting from the SBA’s final rulemaking are complete. DOD’s letter is reprinted in appendix II. 
DOD also provided technical comments which we considered and incorporated into the report as appropriate. GSA agreed with our recommendation that it act expeditiously to establish agency guidance for consolidated contracts after the SBA’s rulemaking is complete. SBA recently published the applicable final rule, which takes effect no later than December 31, 2013. Through its role on the Federal Acquisition Regulatory Council, GSA is working to update the FAR to reflect the new regulation and will establish agency-specific guidance after the FAR rule takes effect, if necessary. GSA’s letter is reprinted in appendix III. In response to our recommendation that SBA submit required bundling reports to Congress, SBA reported that it is preparing the required reports for fiscal years 2011 and 2012. SBA’s letter is reprinted in appendix IV. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Administrator of General Services, and the Administrator of the Small Business Administration. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have questions about this report or need additional information, please contact me at (202) 512-4841 or woodsw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff making key contributions to the report are listed in appendix V. The National Defense Authorization Act (NDAA) for Fiscal Year 2013 mandated that we review data and information regarding consolidated contracts awarded by federal agencies. In addition, Conference Report 112–339 on the NDAA for Fiscal Year 2012 mandated us to review Department of Defense (DOD) compliance with laws and regulations addressing contract bundling and consolidation for construction and base support services. 
According to the Federal Procurement Data System-Next Generation (FPDS-NG), DOD and the General Services Administration (GSA) accounted for more than 80 percent of the reported use of consolidated and bundled contracts and orders awarded in fiscal years 2011 and 2012. For this report, we assessed the extent to which (1) DOD and GSA have consolidated contracts; (2) DOD and GSA justifications for contract consolidation complied with relevant laws and regulations; (3) DOD, GSA, and the Small Business Administration (SBA) addressed small business impacts of consolidation, including bundling; and (4) SBA collected and reported information on consolidated contracts that are considered bundled. To assess the extent to which DOD and GSA have consolidated contracts, we used FPDS-NG to compile data on contracts and orders awarded by federal agencies in fiscal years 2011 and 2012 that were identified as consolidated and bundled in the data system. We selected this timeframe to capture the two most recent years of data available in FPDS-NG at the time of our review. Using these data, we determined the number of contracts and orders awarded and the total dollars obligated, and identified the two agencies with the greatest share of consolidated contracts. Of the 358 consolidated and bundled contracts awarded government-wide, we identified 290 reported by DOD and GSA. DOD had the largest number, with 266 consolidated contracts and 8 contracts that were also bundled. GSA reported 16 consolidated contracts with obligations, the second largest number. From the 290 DOD and GSA consolidated contracts and orders identified, we selected 157 for review through two different processes. For DOD, we selected a systematic random sample of 133 of the consolidated contracts identified in FPDS-NG.
For this sample, we ensured that we had a representative proportion of contracts for base support services and construction—the two categories specified in our mandate—by reviewing the North American Industry Classification System (NAICS) code for facilities operation support to identify base support service contracts and the product service code to identify construction contracts. We also selected all 8 DOD contracts identified as bundled, for a total of 141 DOD contracts. For GSA, we selected all 16 contracts that FPDS-NG identified as consolidated. We contacted DOD and GSA contracting officials to confirm that the 157 selected contracts and orders identified as consolidated or bundled in FPDS-NG were correctly coded. For most of the miscoded contracts, contracting officials verified that they had corrected the coding, for example by providing contract action reports showing updates to FPDS-NG. Of the 141 DOD contracts reviewed, DOD identified 48 as incorrectly coded as consolidated in FPDS-NG. GSA officials reported that all 16 of their contracts had been miscoded as consolidated. Overall, we confirmed 100 consolidated contracts, including two that were also bundled. In addition, to supplement our random sample of consolidated contracts, we spot-checked the accuracy of contract categorization in FPDS-NG for a nonprobability selection of contracts coded as some other category of contract. First, we selected five contracts that DOD and GSA stated were incorrectly coded as consolidated or bundled and reviewed contract documentation to confirm that there was no indication of potential consolidation or bundling. For DOD, we judgmentally selected four miscoded contracts from the defense agencies, each with a base and all options value above $6 million, the consolidation threshold in defense acquisition regulations.
We also judgmentally selected and reviewed one of GSA’s 16 miscoded contracts that met the dollar threshold of $2 million set by the Small Business Jobs Act of 2010. Second, we drew from FPDS-NG a population consisting of all DOD contracts awarded in fiscal years 2011 and 2012 that were not coded as consolidated or bundled and that had the NAICS code for base support services. To draw a reasonable judgmental sample, we selected contracts awarded by the Air Force, Army, and Navy command units from which we had previously obtained consolidated contract documentation. We included contracts that met conditions that would affect whether they were considered consolidated or bundled: contracts with a base and all options value of more than $6 million, the consolidation threshold in DOD guidance, and contracts that might be bundled because they were performed in the United States, included no foreign funds, and were not awarded to a small business. Of the eight contracts we reviewed, one was a consolidated contract that had not been identified as such in FPDS-NG. We also reviewed contracts identified as consolidated or bundled through sources outside of FPDS-NG. We examined the Federal Business Opportunities (FedBizOpps) archives for records of bundled contracts posted between fiscal years 2010 and 2013. Through this review we identified one bundled contract that had not been identified as such in FPDS-NG. Additionally, DOD officials confirmed two other contracts as consolidated that were not identified in our initial FPDS-NG data. Because we found substantial incorrect coding in FPDS-NG, we concluded that the systematic random sample of 133 consolidated contracts could not be generalized to the universe of consolidated contracts identified in FPDS-NG for fiscal years 2011 and 2012. Therefore, our findings describe only the verified consolidated contracts that we reviewed.
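The screening conditions used to build the judgmental sample above can be expressed as a simple filter. The sketch below is purely illustrative: the field names, the NAICS code constant, and the sample records are hypothetical stand-ins, not actual FPDS-NG data elements.

```python
# Illustrative screen for the sampling conditions described above:
# base support NAICS code, base and all options value above the $6 million
# DOD consolidation threshold, performed in the United States, no foreign
# funds, and not awarded to a small business. All field names are
# hypothetical; real FPDS-NG contract action reports use different fields.

BASE_SUPPORT_NAICS = "561210"          # facilities support services (illustrative)
CONSOLIDATION_THRESHOLD = 6_000_000    # DOD guidance threshold (base + all options)

def meets_review_criteria(contract: dict) -> bool:
    """Return True if a contract record matches all screening conditions."""
    return (
        contract["naics"] == BASE_SUPPORT_NAICS
        and contract["base_and_options_value"] > CONSOLIDATION_THRESHOLD
        and contract["place_of_performance"] == "USA"
        and not contract["foreign_funds"]
        and not contract["small_business_awardee"]
    )

# Hypothetical records: one passes every condition, one falls below the threshold.
sample = [
    {"naics": "561210", "base_and_options_value": 8_500_000,
     "place_of_performance": "USA", "foreign_funds": False,
     "small_business_awardee": False},
    {"naics": "561210", "base_and_options_value": 4_000_000,
     "place_of_performance": "USA", "foreign_funds": False,
     "small_business_awardee": False},
]

selected = [c for c in sample if meets_review_criteria(c)]
print(len(selected))  # 1
```

This mirrors the report's logic of conjoining all conditions: a contract enters the review pool only if every criterion holds.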
To assess the extent to which federal agencies’ justifications for consolidation complied with relevant laws and regulations, we compared contracts to the provisions of the Small Business Act (prior to the 2010 amendments), as reflected in the Defense Federal Acquisition Regulation Supplement, which require agencies to demonstrate that consolidation and bundling are necessary and justified before issuing such contracts. We also reviewed regulations issued in the Federal Acquisition Regulation and guidance issued by the military departments related to consolidated and bundled contracts. In addition, we interviewed DOD and GSA contracting officials, including senior officials at DOD’s Office of Small Business Programs and GSA’s Office of Small Business Utilization, and requested documents on additional guidance or training procedures that assist staff in processing consolidated and bundled contracts per the stipulations outlined in acquisition regulations. We obtained documentation of the verified consolidated and bundled contracts to check for compliance. For both consolidated and bundled contracts, we reviewed the acquisition plan, market research, the SBA small business coordination record, the justification letter with the required signature, and subcontracting plans. For bundled contracts, we additionally assessed the benefit analysis and reviewed announcements on the FedBizOpps website to confirm that market research occurred 30 days prior to the solicitation date. For cases in which discrepancies existed between contract documentation from the agencies and what the provisions required, we followed up with contracting offices to request further documentation, an explanation, or both. We also spoke with senior SBA officials, procurement center representatives (PCR), and officials from the agencies’ small business programs to confirm whether contracts were considered consolidated or bundled.
We assessed the extent to which DOD and SBA addressed small business impacts from consolidated and bundled contracts by reviewing FPDS-NG data for the DOD contracts to determine whether they were awarded to a small business. We examined contract documentation, including acquisition strategies, consolidation memos, and coordination records, to identify consolidated contracts with requirements that were previously performed by small businesses and the steps taken to address the impacts. We also interviewed responsible contracting officials, agency small business specialists, and SBA officials for selected contracts to discuss their coordination processes and actions to address small business participation. We assessed the extent to which SBA collects and reports information on bundled contracts by interviewing senior SBA officials and PCRs about their coordination with contracting officials and agency small business specialists. We also collected and reviewed SBA documents, such as bundling alert forms, used to track and report consolidated and bundled contracts within the agency. We conducted this performance audit from December 2012 to November 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, W. William Russell, Assistant Director; Jennifer Dougherty; Jenny Shinn; Cheryl M. Harris; Julia Kennon; Sylvia Schatz; William Shear; Paige Smith; Danielle Green; and Roxanna Sun made key contributions to this report.
Federal agencies sometimes can achieve savings by consolidating requirements from separate, smaller contracts into fewer, larger contracts. However, consolidation may negatively impact small businesses. Generally, when consolidation makes a contract unsuitable for small businesses, the contract is considered bundled, which is a subset of consolidation. Agencies must justify their actions for both consolidated and bundled requirements. Recent National Defense Authorization Acts and a related committee report mandated that GAO review federal agency use of consolidated contracts. According to federal procurement data, DOD and GSA accounted for the vast majority of all contracts reported as consolidated in fiscal years 2011 and 2012. This report examines the extent to which (1) DOD and GSA have consolidated contracts; (2) DOD and GSA justifications complied with relevant laws and regulations; (3) DOD, GSA, and SBA addressed small business impacts as required; and (4) SBA collected and reported information on bundled contracts. GAO identified relevant laws and regulations; analyzed federal procurement data from fiscal years 2011 and 2012; reviewed consolidated, bundled, and other contracts; and interviewed DOD, GSA, and SBA officials. The Department of Defense (DOD) and the General Services Administration (GSA)--which accounted for more than 80 percent of the consolidated contracts reported by all federal agencies in fiscal years 2011 and 2012--do not know the full extent to which they are awarding consolidated contracts. This is the result of contracts being misreported in the federal procurement data system. GAO reviewed 157 contracts--more than half of all DOD and GSA contracts that were reported as consolidated--and found that 34 percent of the DOD contracts and all of the GSA contracts in fact were not consolidated. GAO also identified four DOD contracts with consolidated requirements that were not reported as such. 
DOD generally justified contracts with consolidated requirements in accordance with existing regulations, but DOD and GSA have not yet implemented 2010 changes in the law. GAO found that 82 percent of the 100 DOD contracts confirmed as consolidated followed existing regulations pertaining to conducting market research, identifying alternatives, and justifying decisions. Most of the remaining contracts were justified, but the determinations were not made by an official at the senior level required by defense regulations. In addition, DOD regulations and guidance did not reflect the reduction in the value at which consolidated contracts must be justified--from over $6 million to over $2 million--as called for in the law. In October 2013, DOD lowered the dollar threshold. DOD and GSA are waiting for the Small Business Administration (SBA) to issue a final rule implementing all of the statutory changes before updating their regulations. SBA issued a final rule on October 2, 2013, which takes effect no later than December 31, 2013. DOD and SBA officials took a range of actions to address the impact of consolidation on small business. Federal law requires contracting agencies to facilitate the participation of small businesses on consolidated contracts. GAO found that half of the 100 DOD consolidated contracts reviewed were awarded to small businesses, most of them through small business set-asides. Additionally, many of the consolidated contracts awarded to large businesses included measures, such as small business subcontracting plans, to address small businesses that were potentially affected by the consolidation. For the consolidated contracts considered to be bundled--for which agencies and SBA officials are specifically required to maximize small business contracting opportunities--DOD required subcontracting plans as well.
SBA does not collect complete information on bundled contracts and has not reported to congressional committees as required. Federal law requires SBA to take several actions for bundled contracts, including annual reporting to the small business committees on the extent of bundling, maintaining a database to track small business impacts, and determining if benefits were achieved through bundling. SBA officials said they have not sent reports to the committees since 2010 due to an administrative oversight. Further, SBA has not collected all required information, such as the number of small businesses affected by bundled contracts. SBA officials explained that they cannot fulfill some requirements because of limitations in existing data sources, such as the federal procurement data system, which do not collect the information needed to meet reporting requirements. GAO recommends that DOD update and GSA establish guidance after SBA rulemaking is complete to reflect changes in the law and that SBA comply with congressional reporting requirements for bundled contracts. DOD, GSA, and SBA concurred with the recommendations.
SBA’s organizational structure comprises headquarters and both regional and district field offices. At the headquarters level, SBA is divided into several key functional areas that manage and set policy for the agency’s programs. As shown in figure 1, 17 headquarters offices report to the Office of the Administrator. In fiscal year 2014, the agency employed 2,137 regular funded full-time equivalent (FTE) staff (excluding staff in the Office of Advocacy, the Office of Inspector General, and the Office of Disaster Assistance) to carry out its mission of supporting small businesses. Four program offices manage the agency’s programs that provide capital, contracting, counseling, and disaster assistance services to small businesses: The Office of Capital Access administers, among other things, the 7(a) loan program. The 7(a) program is SBA’s largest loan program and guarantees a portion of loans for establishing new businesses, operating or expanding existing businesses, or acquiring businesses. The Office of Capital Access also administers the development company (504) loan program, which provides businesses with long- term, fixed-rate financing for major assets such as real estate and equipment. In fiscal year 2014, the office had 569 FTEs. The Office of Government Contracting and Business Development promotes small business participation in federal contracting through a variety of programs such as the 8(a) business development, Historically Underutilized Business Zone (HUBZone), and women- owned small business (WOSB) programs. In fiscal year 2014, the office had 180 FTEs. The Office of Entrepreneurial Development, which had 50 FTEs in fiscal year 2014, oversees a nationwide network of public and private “resource partners” that offer small business counseling and technical assistance. These include small business development centers, women’s business centers, and SCORE chapters. 
The Office of Disaster Assistance, which had 991 FTEs in fiscal year 2014, makes loans to businesses and families to rebuild and recover after a disaster. SBA provides its services through a network of 10 regional offices and 68 district offices that are led by the Office of Field Operations. In fiscal year 2014, there were 802 FTEs in the regional and district offices. Regional offices, whose administrators are political appointees, oversee the district offices and promote the President’s and SBA Administrator’s messages throughout the region. Considered by officials as SBA’s “boots on the ground,” district offices serve as the point of delivery for most SBA programs and services. Some district office staff work directly with SBA clients, including business opportunity, lender relations, and economic development specialists. These employees provide counseling and training services that aid in the formation, management, financing, or operation of a small business enterprise. They also provide information on and promote SBA products to lenders, the small business community, and groups such as chambers of commerce and trade associations. Additionally, district offices are charged with completing statutorily mandated reviews, such as program participant reviews that ensure participants continue to qualify and are meeting program requirements. SBA’s field structure has been revised over the years. In response to budget reductions, SBA streamlined its field structure during the 1990s by downsizing regional and district offices and shifting oversight responsibilities to headquarters. Since the early 2000s, SBA has further restructured and centralized some key agency functions. For example, from 2003 through 2006, SBA completed the centralization of its 7(a) loan processing, servicing, and liquidation functions from 68 district offices to 1 loan processing center, 2 commercial loan servicing centers, and 1 loan liquidation and guarantee purchase center. 
Since fiscal year 2013, SBA’s annual budget appropriation has declined. For fiscal year 2014, SBA’s appropriation was $928,975,000, approximately $116 million less than in fiscal year 2013 (see fig. 2). SBA’s fiscal year 2016 congressional budget request of $860,130,000 was approximately $27 million less than the amount enacted for fiscal year 2015. (See app. II for more information on recent trends in SBA’s budget obligations, outlays, and authority.) SBA’s spending on IT fluctuated from 2005 through 2008, as shown in figure 3. The sharp rise in spending in 2006 is attributed to increased amounts for IT investments for disaster credit management, loan and lender monitoring, and IT infrastructure. Since fiscal year 2008, the agency’s yearly IT spending has remained fairly stable at about $100 million. For fiscal year 2015, SBA estimates that it will spend approximately $109 million for IT. Of that amount, $83 million (77 percent) is to be spent on mission-critical systems to, among other things, support agency financial operations and the disaster loan assistance program. In addition, it plans to spend $21 million (19 percent) on developing, modernizing, and enhancing IT investments, while the rest is to be spent on operations and maintenance of existing systems. Government-wide, federal agencies spend more than $80 billion annually to meet their increasing demands for IT. In a 2014 report, we found that duplicative, wasteful, and low-value investments had proliferated over the years, highlighting the need for agencies to avoid such investments whenever possible. OMB has made a similar observation in its guidance. To help agencies manage federal IT dollars, OMB has implemented a series of initiatives to, among other things, consolidate the growing number of federal data centers, shift agencies to increased use of cloud computing, and promote the use of shared services.
In a June 2014 report, we found that OMB’s IT reform initiatives could help to improve the efficiency and effectiveness of federal agencies and save billions of dollars. In the last 15 years, we and the SBA OIG have identified management challenges at SBA, many of which are related to specific programs. In accordance with the Reports Consolidation Act of 2000, the SBA OIG issues annual reports in which it identifies SBA’s most serious management challenges—programs or activities that it has determined pose significant risks. These annual reports represent the SBA OIG’s current assessment of SBA programs and activities that pose significant risks, including those that are particularly vulnerable to fraud, waste, error, mismanagement, or inefficiencies. The OIG’s most recent report for fiscal year 2015 identified 11 such challenges, 7 of which are issues that have persisted for 10 years or longer. These 7 long-standing challenge areas are loan guarantee purchases, the 8(a) business development program, IT security, loan agent fraud, human capital, lender oversight, and small business contracting (see fig. 4). The other challenge areas are: improper payments, the Loan Management and Accounting System, and acquisition management. Our past reports have identified some of the same long-standing management challenges that SBA needs to address, particularly in the areas of contracting, lender oversight, and the Loan Management and Accounting System (and other IT management issues). We and the SBA OIG have also identified problems in SBA’s disaster loan processing that, while not long-standing, also pose risks to the agency. To address SBA’s challenges, we and the SBA OIG have made various recommendations. While SBA has made some progress in addressing these challenge areas, many of our recommendations remain unimplemented. 
As of July 2015, 53 percent of the recommendations (32 of 60) we made to SBA across all subject areas in fiscal years 2010 through 2013 had not been fully addressed. See appendix III for a list of all 69 recommendations we have made to SBA since fiscal year 2000 that remain open. We maintain that these recommendations continue to have merit and should be fully implemented. Loan guarantee purchases. Loan guarantee purchases occur when lenders request that SBA purchase the guarantee following loan liquidation or delinquency. The SBA OIG has cited issues related to loan guarantee purchases as a serious management challenge since fiscal year 2000. For example, in its fiscal year 2011 management challenges report, the OIG stated that its audits of defaulted loans and SBA’s guarantee purchase and liquidation processes showed that reviews performed by the agency’s loan centers did not consistently detect lenders’ failures to administer loans in full compliance with SBA requirements and prudent lending practices, resulting in improper payments. In its fiscal year 2012 management challenges report, the OIG stated that in the last decade, the agency had made significant progress in improving deficiencies identified in SBA loan liquidation and guarantee purchase processes but that a significant deficiency continued to exist in the area of quality assurance. The report noted that while SBA had developed a quality assurance program, additional work remained before the agency could demonstrate that all elements of the program had been completed and followed. In its fiscal year 2015 report on management challenges, the SBA OIG found that SBA had made significant progress in developing and implementing a quality control program for all of its loan centers to verify and document compliance with the loan process, from origination to close- out, and to identify where material deficiencies might exist so that remedial action could be taken. 
Further, the OIG stated that SBA had (1) developed and documented quality program manuals and review checklists for each center; (2) assessed center functions by risk to prioritize required quality control reviews; (3) refined feedback, training, and reporting processes; and (4) developed new systems to improve the tracking of quality control deficiencies and corrective actions. However, the SBA OIG noted that SBA would need to continue monitoring the quality control program during fiscal year 2015 to verify that (1) required quality control reviews were being completed, (2) quality control activities provided adequate coverage over loan center operations, and (3) quality control reviews were effective at identifying and correcting material deficiencies. 8(a) business development program. The 8(a) business development program is a business assistance program for small disadvantaged businesses. The SBA OIG has identified issues related to this program as a serious management challenge since fiscal year 2000. For example, in its fiscal year 2003 report on management challenges (and in every report since), the OIG noted that SBA needed to modify the 8(a) business development program so more firms received business development assistance, standards for determining economic disadvantage were justifiable, and SBA ensured that firms followed 8(a) regulations when completing contracts. In a 2003 report on management challenges, we found related problems with the 8(a) program. For instance, SBA had begun to implement short- and long-term strategies to address problems in the 8(a) program, but data suggested that only a few firms continued to receive the bulk of 8(a) funding and that the volume of federal procurement funding awarded to 8(a) firms had not increased. 
Further, in a March 2010 report on the 8(a) program, we found that although SBA had implemented new procedures for the program, there were inconsistencies and weaknesses in internal controls that increased the potential for abuse by ineligible firms. Specifically, we found that monitoring of district staff needed to be improved and district staff needed better guidance, training, and criteria to follow the required annual review procedures for determining continued eligibility. We made six recommendations that individually and collectively could improve procedures used in assessing and monitoring the continuing eligibility of firms to participate in and benefit from the 8(a) program. SBA agreed with the six recommendations when the report was issued. As of July 2015, SBA had taken actions responsive to four of the recommendations. Specifically, it had assessed the workload of business development specialists, updated its 8(a) regulations to include more specificity on the criteria for the continuing eligibility reviews, developed a centralized process to collect and maintain data on 8(a) firms participating in the Mentor- Protégé Program, and implemented a standard process for documenting and analyzing complaint data. The two remaining recommendations yet to be fully implemented as of July 2015 focus on (1) procedures to ensure that appropriate actions are taken for firms subject to early graduation from the program and (2) taking actions against firms that fail to submit required documentation. In its fiscal year 2015 report on management challenges, the SBA OIG found that SBA had made progress towards addressing issues that hindered its ability to deliver an effective 8(a) business development program. For example, it found that SBA had expanded its ability to provide assistance to program participants through its resource partners. 
In addition, it noted that SBA had taken steps to ensure that business opportunity specialists assessed program participants’ business development needs during site visits. The OIG also found that SBA had revised its regulations, effective March 2011, to ensure that companies deemed “business successes” graduated from the program. However, for the second consecutive year the SBA OIG noted that SBA had not finished updating the SOP for the 8(a) business development program to reflect the March 2011 regulatory changes. In addition, the OIG continued to maintain that SBA’s standards for determining economic disadvantage were not justified or objective based on the absence of economic analysis. According to a senior SBA official, improving the 8(a) program is a priority for the new Administrator. For example, he stated that the agency was considering how to expand SBA One—an initiative designed to create a single application for most SBA loans and allow borrowers and lenders to populate forms from secure information storage—to include the 8(a) program. He noted that the goal would be to make it easier and less costly for small businesses to participate in the program. In addition, he stated that the agency was considering focusing its oversight on those 8(a) businesses that receive federal contracts. IT security. The SBA OIG has identified weaknesses in information systems security controls as a serious management challenge since fiscal year 2000. In its fiscal year 2015 management challenges report, it noted that SBA’s computer security program operates in a dynamic and highly decentralized environment and requires management attention and resources as weaknesses are identified. 
The OIG stated that SBA had shown progress in establishing an entity-wide incident management and response program and had improved network port security access controls, but found that SBA still needed to address long-standing security weaknesses identified in 35 open IT audit recommendations. In addition, the SBA OIG noted that SBA’s Office of the Chief Information Officer, in conjunction with SBA’s program offices, needed to implement tools and capabilities to provide effective oversight and continuously monitor computer security controls. According to a senior SBA official, the agency has recently placed an even greater emphasis on improving its IT security at the direction of the White House. He stated that one focus was increasing the use of personal identification verification cards, which are smart cards that govern access to federally controlled facilities and information systems. Loan agent fraud. A prospective borrower or a lender sometimes pays a loan agent (e.g., a loan broker or packager) to prepare documentation for an SBA loan application or find a lender. The SBA OIG has identified loan agent fraud as a serious management challenge since fiscal year 2000. Its fiscal year 2015 management challenges report stated that for years its investigations had revealed a pattern of fraud by loan packagers and other for-fee agents in the 7(a) loan program involving hundreds of millions of dollars. The report noted that SBA’s oversight of loan agents had been limited, putting taxpayer dollars at risk. It added that SBA could reduce this risk by developing a database or equivalent means to track loan agent activity, updating regulations on loan agent enforcement, issuing new guidance for lenders on not doing business with loan agents subject to enforcement actions, and implementing a loan agent registration system (including the issuance of a unique identifying number for each agent). 
Finally, the report noted that SBA had made substantial progress on tracking loan agent data, limited progress on updating its regulations and issuing new guidance to lenders, and had not yet started on its recent recommendation to implement a registration system. Human capital. The SBA OIG has included human capital as one of the most serious management challenges at SBA since fiscal year 2001, noting that SBA needs effective human capital strategies to carry out its mission successfully and become a high-performing organization. Problems that the SBA OIG cited in its management challenges reports over the years included the lack of a comprehensive human capital strategy that identified SBA’s current and future human capital needs, including workforce capacity and skill gaps; failure to clarify the role of or appropriately staff district offices when key program functions were transferred to service centers; and failure to adequately analyze priorities and allocate resources consistent with them. In our 2003 report on SBA’s management challenges, we also found that SBA needed to strengthen its human capital management. For example, we stated that SBA’s organizational structure had weaknesses that contributed to the challenges it faced in delivering services to the small business community. We discuss the status of SBA’s human capital management and additional challenges we identified as part of our current review later in this report. Lender oversight. SBA’s major loan programs (7(a) and 504) require effective lender oversight because the agency generally relies on the lenders that make 7(a) loans and certified development companies (CDC) that make 504 loans to process and service the loans and to ensure that borrowers meet the programs’ eligibility requirements. The SBA OIG has identified lender oversight as a serious management challenge for SBA since fiscal year 2001.
We also identified SBA’s lender oversight as a challenge in our 2003 report on management challenges. More recently, we and the OIG found the following weaknesses in SBA’s lender oversight: In a July 2013 report, the SBA OIG found that SBA had not implemented procedures and policies to monitor risk across its loan portfolio and that SBA had not developed a process for ensuring that identified risks were addressed. The SBA OIG recommended that SBA implement a portfolio risk management system, use data from that system to support risk-based decisions in its loan programs, and implement additional controls to mitigate identified risks where necessary. SBA agreed with the recommendations. According to the SBA OIG, while SBA has implemented a portfolio risk-management program in accordance with its recommendations, it has not yet used data from the program to support risk-based decisions in loan programs or develop additional internal controls to manage identified risks. In a September 2013 report, we found that internal controls over lenders participating in the Patriot Express pilot program may not have provided the agency with reasonable assurance that loans were made only to eligible borrowers. For example, we found that SBA had not developed procedures for lenders to provide reasonable assurance that borrowers maintained their eligibility after the loans were disbursed. We recommended, among other things, that SBA enhance internal controls over borrower eligibility requirements. SBA subsequently decided to allow the program to expire in December 2013. In a March 2014 report, we found that although SBA had initiated actions to improve its reviews under the 504 loan program, its guidance for conducting risk-based reviews of the CDCs that make 504 loans did not require SBA staff to review supporting documentation on the number of jobs created or retained, a key requirement of the program. 
Among other things, we recommended that SBA require examiners to review such documentation. SBA generally agreed with our recommendation and subsequently revised its review procedures. Specifically, the agency incorporated steps in the worksheet examiners use to review loan files during risk-based reviews that require the review of documentation supporting the number of jobs created or retained. SBA has recently revised its procedures for conducting risk-based reviews of lenders and CDCs. A senior SBA official stated that he expected these changes to greatly improve SBA’s lender oversight. Specifically, SBA developed a new risk measurement methodology to assign risk ratings to both 7(a) lenders and CDCs and new lender risk-based review protocols for using these ratings to determine the scope of lender reviews. Those lenders with composite risk ratings above an established threshold are to undergo a targeted review of specific identified risks or a full review if risks are more pervasive. In its fiscal year 2015 report on management challenges, the SBA OIG noted that SBA also had improved its monitoring and verification of corrective actions by lenders by (1) developing corrective action assessment procedures, (2) finalizing a system to facilitate the corrective action process, and (3) populating the system with lender oversight results requiring corrective action. However, the OIG stated that in order for SBA to fully resolve this management challenge, the agency would need to demonstrate the effectiveness of the process for monitoring and verifying lenders’ implementation of corrective actions. Small business contracting. SBA’s contracting programs help eligible socially and economically disadvantaged small businesses obtain federal contracts on a set-aside basis. The agency has several such programs, including the HUBZone and WOSB contracting programs. The SBA OIG has identified small business contracting as a serious management challenge since fiscal year 2005. 
Its fiscal year 2015 management challenges report stated that SBA’s procurement flaws allowed large firms to obtain small business awards and agencies to count contracts performed by large firms toward their small business goals. In a September 2014 report, the OIG identified over $400 million in contract actions that were awarded to ineligible 8(a) and HUBZone firms, which may have contributed to the overstatement of small business goaling dollars reported to Congress in fiscal year 2013. In addition, we and the SBA OIG have identified internal control weaknesses in individual contracting programs, as the following examples illustrate. In a June 2008 report, we found that many firms were in the HUBZone program for more than 3 years without being recertified as required, resulting in potentially ineligible firms participating in the program. We recommended, among other things, that SBA eliminate the backlog using either SBA or contract staff and take the necessary steps to ensure that recertifications were completed in a more timely fashion in the future. SBA agreed, and in a 2009 testimony we found that the agency had eliminated the backlog of recertifications by hiring additional contract staff. However, we found in a February 2015 report that SBA did not replace the contract staff with FTE staff because, according to SBA, part of its funding authority was rescinded in 2013. As a result, we found that SBA once again had a backlog in recertifying firms. Therefore, we recommended that SBA assess the recertification process and implement additional controls, such as ensuring that sufficient staff were dedicated to the effort so that a significant backlog in recertifications did not recur. SBA agreed with the recommendation. According to SBA, it has since completed an evaluation of the current recertification procedures and plans to implement improved processes by September 30, 2015. 
In an October 2014 report on SBA’s WOSB program, we found that SBA performed minimal oversight of third-party certifiers and had not developed procedures to provide reasonable assurance that only eligible businesses obtained WOSB set-aside contracts. We recommended that SBA develop and implement comprehensive procedures to monitor and assess the performance of certifiers and enhance the examination of businesses that registered to participate in the WOSB program. SBA generally agreed with the recommendations and stated that the agency was in the process of implementing many of them. In June 2015, SBA officials told us that the agency was in the process of publishing a proposed rulemaking on how to proceed with certification in light of recent legislative changes to the program and planned to update its procedures for examining firms by the end of fiscal year 2015. In a May 2015 report, the SBA OIG found that federal agencies’ contracting officers awarded 15 of 34 set-aside awards without meeting the WOSB program’s set-aside requirements. These firms received approximately $7.1 million of fiscal year 2014 set-aside awards that may have been improper. It also found that of the 34 awards reviewed, only 25 had documentation of program eligibility in the WOSB program repository. Of those, 13 did not provide all of the required documentation, and 12 did not provide sufficient documentation to prove that the firm was controlled by women. According to a senior SBA official, SBA plans to improve small business contracting by extending SBA One to the 8(a) and HUBZone programs and creating a more dynamic small business search engine so that agencies can more readily identify small businesses that are eligible for contracts. Improper payments. 
The Improper Payments Information Act of 2002 requires agencies to review and identify those programs susceptible to significant improper payments, report on the amount and causes of improper payments, and develop plans for reducing improper payments. The SBA OIG has identified improper payments in SBA’s 7(a) and disaster loan programs as serious management challenges since fiscal year 2010 and fiscal year 2012, respectively. Its fiscal year 2015 report on management challenges stated that previous OIG audits had determined that reported improper payment rates for 7(a) loan approvals and purchases and disaster loans were significantly understated because SBA had not adequately reviewed loans, had used flawed sampling methodologies, and did not accurately project review findings. The OIG noted that SBA had taken actions to correct many of these deficiencies for the 7(a) program, including formalizing its improper payment sampling and its process for reviewing disputed cases and developing appropriate corrective action plans for the program. However, the OIG stated that SBA still needed to demonstrate that its process over disputed cases was ensuring adequate and timely resolution, that it was adhering to recovery time standards, and that corrective action plans for the 7(a) loan program were effective in reducing improper payments. The OIG also noted that SBA had implemented an improved corrective action plan for the disaster loan program that, if properly implemented, should effectively reduce the improper payment rate in future years. In a February 2015 report on improper payments related to Disaster Relief Appropriations Act, 2013 funding, we found that SBA did not have policies and procedures for estimating improper payments for the Office of Entrepreneurial Development Grants program, one of its two programs that received funding under the act. 
We also found that SBA’s policies and procedures for its Disaster Assistance Loans program, the other program that received funding, did not cover many of the key requirements for estimating improper payments. For example, they did not define improper payments consistent with OMB guidance. SBA’s internal guidance defines an improper payment as a loan approval that does not meet the eligibility requirements in its SOP for the program. However, OMB guidance clarifies that improper payments can include certain payments to eligible recipients, such as payments that are for the incorrect amount and duplicate payments. Such types of improper payments were not captured or addressed in SBA’s policies and procedures. We recommended that SBA take eight actions to develop policies and procedures for the Office of Entrepreneurial Development Grants program and six actions to revise its Disaster Assistance Loans program policies and procedures for estimating improper payments. SBA did not explicitly concur with our recommendations but stated that it would address them by including a chapter on improper payments as it updates its SOP for internal controls. In June 2015, SBA officials told us that they planned to issue a policy notice containing additional guidance until the SOP could be updated. In a May 2015 report on SBA’s progress in complying with the Improper Payments Elimination and Recovery Act of 2010, the SBA OIG found that SBA continued to make progress in its efforts to prevent and reduce improper payments. For example, the reported improper payment rate for disaster assistance loan disbursements had decreased from 18.4 percent in fiscal year 2013 to 12 percent in fiscal year 2014, exceeding SBA’s goal of 15 percent. However, the OIG also found that SBA needed to make some improvements to effectively develop its improper payment controls and processes for Hurricane Sandy disaster relief grants and 7(a) loan guarantee purchases. 
Specifically, it found that the reported improper payment rate of 3 percent for Hurricane Sandy disaster relief grants might have been understated because reviewing personnel did not identify payment errors and related opportunities for correcting those errors. The OIG also noted that the reported improper payment rate of 1.33 percent for 7(a) loan guarantee purchases was slightly understated. The OIG made six recommendations to improve the effectiveness of improper payment controls over Hurricane Sandy technical assistance grants and Section 7(a) loan guarantee purchases. A senior SBA official noted that SBA’s improper payment rates were decreasing and that SBA One would help decrease them further because it included editing features designed to reduce the number of technical errors. Loan Management and Accounting System (and other aspects of IT management). SBA’s Loan Management and Accounting System (LMAS) is one in a series of attempts by SBA during the past several years to upgrade existing financial software and application modules and remove them from the mainframe environment. The SBA OIG has identified this project as a serious management challenge since fiscal year 2010 on the basis of reviews that we and the OIG conducted, as the following examples show. In a 2012 report, we found inconsistencies in SBA’s application of IT management practices (including IT risk management) resulting in part from inadequate executive oversight of the LMAS modernization project. For example, we found that SBA had not fully prioritized risks related to one project or developed plans to mitigate them. Consequently, this modernization effort was severely delayed, and costs were well above initial estimates. We recommended that SBA ensure that appropriate IT management practices were applied to LMAS projects and that the executive bodies responsible for project oversight provide appropriate and ongoing reviews. 
SBA generally concurred with the recommendations and has taken some actions to address them as discussed below. In a series of reports in 2010, 2013, and 2014 on the LMAS project, the SBA OIG found that SBA did not follow federal regulations and internal guidance on IT acquisitions, leading to delays and weaknesses in project oversight. For example, in 2014 the OIG found that SBA did not identify a plan for full user acceptance testing according to the requirements outlined in its own SOPs for IT system development. The OIG also noted that SBA had made some progress but recommended that the agency adhere to internal guidelines in performing project oversight, approve and revise project baselines, and affirm the viability of project milestones. SBA agreed with the findings and recommendations and has taken actions to address them. SBA has recently completed its LMAS projects. In response to our 2012 recommendation, SBA told us that it had instituted changes to provide consistent project management, including appropriate oversight of requirements, the engagement of an independent contractor, and the establishment of risk management processes. As of March 2015, SBA reported that development activities for these projects had been successfully completed and all projects had entered the operations and maintenance phase. The SBA OIG has identified problems with other aspects of SBA’s IT management. In a February 2014 report, the OIG found that SBA had not followed federal regulations and guidance in the acquisition of the OneTrack system, a system that the Office of Government Contracting and Business Development planned to use to track 8(a) and HUBZone program participants. The SBA OIG reported that SBA had failed to perform market research or use a modular contracting strategy intended to reduce acquisition risks. As a result, SBA did not receive a system with the full capabilities originally designed. 
The OIG recommended, among other things, that SBA conduct a requirements analysis and cost assessment of the system and ensure that all appropriate provisions of internal guidance on IT system development were met prior to placing OneTrack into production. SBA concurred with the recommendations, but according to the SBA OIG, the agency did not deploy the OneTrack system. Instead, as noted previously, a senior SBA official told us that the agency planned to use SBA One for the 8(a) and HUBZone programs as well as its loan programs. He further noted that improving IT was a priority for the Administrator and that she was devoting additional resources to IT to meet deferred needs. Acquisition management. The SBA OIG has identified SBA’s acquisition management as a serious management challenge since fiscal year 2013. Although SBA had taken steps to improve its acquisition process—such as realigning its acquisition program within the organization, hiring new staff, and providing additional training to its acquisition personnel—the SBA OIG noted in its 2015 report on SBA’s management challenges that challenges remained. These challenges included (1) poorly defined requirements, (2) internal control deficiencies, (3) inadequate oversight of contractor performance, and (4) an incomplete acquisition SOP. We discuss SBA’s efforts to address these challenges and the results of a contractor review that found additional areas in need of improvement later in this report. Disaster loan processing. Both we and the OIG have reported that SBA needs to further strengthen planning for and controls over its disaster loan program to improve its ability to respond effectively to future disasters. SBA provides funding and assistance to individuals and businesses after disasters declared by either the President or SBA. 
SBA’s disaster loan program is the primary federal program for funding long-range recovery for nonfarm businesses that are victims of disasters and is the only form of SBA assistance not limited to small businesses. After the 2005 Gulf Coast hurricanes (Katrina, Rita, and Wilma), SBA faced an unprecedented demand for disaster loans, while also being confronted with a significant backlog of applications. As a result, hundreds of thousands of loans were not disbursed in a timely way. In June 2008, Congress enacted the Small Business Disaster Response and Loan Improvements Act of 2008 (2008 Act), which placed new requirements on SBA to help ensure that it is prepared to respond to catastrophic disasters. Since Hurricane Katrina, SBA has implemented reforms intended to improve disaster loan processing by increasing the capacity of the electronic loan processing platform and addressing requirements in the 2008 Act. However, continued attention to efforts to strengthen internal controls in the disaster loan program is needed, as the following examples illustrate. In a September 2014 report, we found that SBA did not adequately respond to the higher volume of physical business disaster loans and economic injury loans early in its response to Hurricane Sandy and as a result did not meet its timeliness goal for processing applications. We further found that the agency did not revise its disaster planning documents—the Disaster Preparedness and Recovery Plan and the Disaster Playbook—to reflect the effects that loan application volume and timing could have on staffing, resources, and forecasting models for future disasters. We concluded that without accounting for its recent experience in its planning documents, SBA may be unprepared for future disasters. We recommended that SBA revise its disaster planning documents. 
SBA generally agreed with the recommendation and provided us in June 2015 with an updated Disaster Playbook—one of its two key disaster planning documents—that includes explicit recognition of the effects that high volumes of loan applications early in the response period could have on staffing and loan processing. Also in our September 2014 report on disaster assistance, we found that SBA had not developed an implementation plan for addressing the 2008 Act’s requirements, as we recommended in 2009. This plan was to include, among other things, challenges the agency faces in implementing the 2008 Act’s requirements, including those to implement three new guaranteed disaster programs using private sector lenders. SBA decided to focus first on implementing a pilot program for one of the requirements, the Immediate Disaster Assistance Program (IDAP). We found that SBA had not conducted a formal documented evaluation of lenders’ feedback or taken other actions needed to establish the basis for proposed changes to requirements for Congress to consider. In order to provide Congress with reliable information on challenges SBA has faced in implementing IDAP, we recommended that SBA (1) conduct a formal documented evaluation of lenders’ feedback that can inform SBA and Congress about statutory changes that may be necessary to encourage lenders’ participation in IDAP and (2) report to Congress on the challenges SBA has faced in implementing IDAP and on statutory changes that may be necessary to facilitate SBA’s implementation of the program. SBA generally agreed with the recommendations. While SBA has solicited some lender feedback, it has not adopted a plan for the steps the agency will take to implement IDAP (and by implication, the other two loan programs) or to reach a determination on whether IDAP or the other loan programs should be implemented. 
In a February 2015 report, the SBA OIG found that loan officers did not have guidance for performing the financial analysis to determine whether Hurricane Sandy business loan applicants had repayment ability. Consequently, loan officers used inconsistent methodologies when evaluating these loans for repayment ability. The OIG estimated that SBA approved at least 537 Hurricane Sandy disaster business loans, totaling at least $17.9 million, without sufficiently considering principals’ living expenses when determining repayment ability and that, as a result, these loans were at a higher risk of default. As this discussion of SBA’s management challenges indicates, we and the SBA OIG have identified a number of internal control weaknesses that have contributed to programmatic challenges. We have made a number of related recommendations, many of which SBA has begun to address. SBA also has a process to help ensure that internal controls are in place for financial reporting, an effort that is overseen by a Senior Assessment Team. Among other things, this team is responsible for determining the scope of the assessment; monitoring the assessment to confirm that it is carried out in a thorough, effective, and timely manner; and analyzing the results of the assessment. SBA’s Office of Internal Controls within the Office of the Chief Financial Officer carries out each year’s assessment of internal controls over financial reporting. These efforts are guided by federal internal control standards, which we updated in September 2014. The new standards are effective beginning with fiscal year 2016. To prepare for the implementation of these new standards, the Office of Internal Controls presented the new guidelines at a Senior Assessment Team meeting in fiscal year 2013 and at fiscal year 2014 FMFIA training provided to senior managers. 
SBA officials said that the agency had also begun to update its SOP on internal control and plans additional revisions after OMB has updated its Circular A-123, which is expected to include guidance on implementing the new standards. SBA has not resolved many of these long-standing management challenges due to a lack of sustained priority attention over time. In a September 2008 report, we noted that frequent turnover of political leadership in the federal government, including at SBA, often made it difficult to sustain and inspire attention to needed changes. SBA has undergone turnover in its leadership positions (see fig. 5). Senior SBA leaders have not prioritized long-term organizational transformation in management challenge areas such as human capital and information technology. For example, since 2008 SBA has published three strategic plans, each signed by a different Administrator. The overview from the Administrator in SBA’s 2008-2013 strategic plan acknowledged some of the internal management challenges the agency faced and noted that the plan reflected SBA’s efforts to address them. However, the overviews from the Administrators in the two subsequent plans did not do so. Furthermore, in an April 2013 House committee hearing on SBA’s proposed fiscal year 2014 budget, the committee Chairman stated that SBA’s proposed budget focused on the agency’s priorities but ignored some long-standing management deficits. For example, the Chairman noted that SBA included initiatives in the budget to increase the availability of loans to small businesses, but reduced resources that would be devoted to LMAS. These examples raise questions about SBA’s sustained commitment to addressing management challenges that could keep it from effectively assisting small businesses. As well as examining long-standing management challenges, we reviewed SBA’s strategic plan for fiscal years 2014 to 2018 to determine whether it met federal requirements. 
We found that the plan met all the requirements for planning and all but one of the content requirements. It partly met the content requirement that it describe how program evaluations were used in developing the plan and that it include a schedule of evaluations planned for the next 4 years (see table 1). Strategic planning at federal agencies, including SBA, is subject to a variety of statutory requirements. First, in 1993 Congress passed GPRA, which established strategic planning, performance planning, and performance reporting as the components of a framework for agencies to communicate progress in achieving their missions. Next, GPRAMA made some important changes to existing requirements by placing a heightened emphasis on priority setting, cross-organizational collaboration to achieve shared goals, and the use and analysis of goals and measures to improve outcomes. GPRAMA enhanced agency-level planning and reporting requirements and required agencies to increase leadership involvement and accountability. OMB has published guidance for federal agencies on the implementation of GPRAMA, including guidance on strategic planning. The statutes and guidance describe the elements agencies must include in their strategic plans and the planning process they must follow when developing them. Planning process. SBA’s process for developing its fiscal years 2014-2018 strategic plan met all federal requirements, such as gathering input from stakeholders. For example, officials from SBA’s Office of Performance Management, which took the lead in developing the strategic plan, told us that they had met with the Associate Administrator and Deputy Associate Administrator from each program office as well as with senior staff to discuss the strategic objectives, the strategies the offices planned to use to achieve the objectives, and metrics for the objectives. 
This office also met with the Associate Administrator of the Office of Field Operations to obtain input and published a notice in the SBA Daily, a daily e-mail communication sent to all SBA employees, informing employees that the draft strategic plan had been posted on SBA’s website and encouraging them to review the plan and provide comments. Externally, SBA officials met with relevant authorizing and oversight committees early on in the planning process to discuss a high-level outline of the plan. According to SBA officials, these meetings allowed SBA to obtain some congressional input before it committed to specific strategic goals and objectives. SBA also consulted with OMB, which provided comments on SBA’s priority goals and the structure and framing of its strategic plan; requested input from numerous stakeholder groups; and sought public comments by posting a notice in the Federal Register and making a draft strategic plan publicly available on its website. Contents of strategic plan. As shown in table 1, SBA’s fiscal years 2014-2018 strategic plan met all but one of the content requirements, partly meeting the requirement on program evaluations. For example, the plan has a comprehensive mission statement that covers the agency’s major functions and includes outcome-oriented, long-term strategic goals and objectives that reflect the results SBA is trying to achieve. It also contains specific strategies to achieve the agency’s goals and objectives. In addition, SBA’s strategic plan includes a list of external parties that have evaluated SBA’s programs, describes SBA’s efforts to build its capacity to conduct more program evaluations, and discusses two evaluations it plans to conduct in 2015. However, it does not describe how program evaluations were used in developing the strategic goals and objectives or include a schedule of evaluations planned for the full 4 years. 
According to OMB, program evaluation is among the most important analytical tools for agency decision making. It can help agency managers, such as those at SBA, determine how best to spend taxpayer dollars effectively and efficiently, identify appropriate goals, and address questions about the effectiveness of strategies. Per OMB guidance, a strategic plan should describe how information from program evaluations and research was used to develop the strategic plan, including in establishing or revising the agency’s strategic objectives and identifying evidence-based approaches to meeting objectives. The plan should also (1) describe efforts to support high-quality evaluations, (2) discuss efforts to increase capacity for conducting them and using their findings, and (3) include a schedule of evaluations planned for the next 4 years. SBA officials stated that SBA did not use program evaluations to develop or revise its strategic objectives because the agency had conducted a limited number of them. SBA officials also told us that the agency had limited financial resources available for independent program evaluations but was working on three and considering some in other areas. SBA officials said that they thought the information included in the plan would be sufficient to meet GPRAMA’s requirements on program evaluations. Because SBA has not routinely conducted program evaluations, we have questions about whether the agency will have program evaluations on which to rely when developing its next strategic plan. GPRAMA aims to ensure that agencies use performance information in decision making and holds them accountable for achieving results and improving government performance. OMB has also encouraged agencies to improve government effectiveness by increasing their use of evidence and rigorous program evaluation in making budget, management, and policy decisions. 
In addition, in a June 2013 report we concluded that evaluations helped in assessing program effectiveness or value, explaining program results, implementing changes to improve program management or performance, developing or revising performance goals, designing or supporting program reforms, and sharing what works with others. We and the SBA OIG have in the past found instances in which SBA did not evaluate the effectiveness of new or existing programs, and the agency has not yet fully addressed our recommendations in this area, as the following examples show. In an August 2010 report, the SBA OIG found that SBA had not assessed the Community Express pilot loan program’s effectiveness. The OIG recommended, among other things, that SBA not extend the program after its expiration in December 2010 but instead evaluate the need for it. Rather than evaluate the program, SBA ended it in April 2011 and replaced it with the new Small Loan Advantage and Community Advantage programs. SBA officials told us that while the agency annually publishes loan program measures that incorporate initiatives such as these, it had not evaluated the initiatives. In an August 2012 report, we found that SBA lacked program evaluations for 10 of the 19 entrepreneurial assistance programs we reviewed. We recommended that SBA conduct more program evaluations to better understand the reasons the programs were not meeting their performance goals and to determine the programs’ overall effectiveness. SBA did not say whether it agreed or disagreed with the recommendation but did take some steps to begin to address it. For example, SBA has begun meeting monthly with other agencies as part of an OMB-led interagency working group that shares best practices in program evaluation and has started pilot programs to test evaluation methods. 
However, we are unsure of SBA’s commitment to conducting more program evaluations because as of May 2015, the agency had not completed evaluations of any of the programs covered by our recommendation. Therefore, we continue to maintain that our recommendation has merit and should be fully implemented. In a September 2013 report, we found that SBA lacked an evaluation plan to assess the performance of the Patriot Express pilot loan program. As a result, we concluded that SBA was unable to determine if the program was achieving its intended goals. We recommended, among other things, that SBA design an evaluation plan for pilot programs prior to implementation and consider the results of such an evaluation before extending any pilot. SBA responded to the report by stating that the agency would consider our findings as it determined whether to extend the program. Subsequently, SBA did not conduct a performance evaluation of the program and instead decided to terminate the program by allowing it to expire in December 2013. SBA replaced the Patriot Express program with the Veterans Advantage initiative in 2014. Although SBA officials told us the agency tracks performance measures for its loan programs and keeps raw data for each of the loan types (including veterans fee relief), the agency had not evaluated the initiative as of May 2015 and had no plans to do so. In April 2014, the House Committee on Small Business held a hearing on initiatives that SBA had created. The hearing explored the committee’s concerns that SBA had requested funding for potentially duplicative new programs while lacking adequate performance metrics to measure their success or failure. While they did not specifically address the lack of performance metrics for all the programs discussed during the hearing, two SBA witnesses cited the agency’s legislative authority for the new initiatives. 
Our August 2012 report on entrepreneurial assistance raised similar concerns about overlap in programs and stressed the importance of program evaluations. A senior SBA official acknowledged the agency’s challenges in conducting program evaluations and stated that it was developing a more systematic approach to conducting them, including determining how to collect needed data. While he identified two ongoing studies, he did not provide detailed information on the systematic approach, an expected completion date, or whether the agency would prioritize additional resources to conduct evaluations. Without prioritizing resources to conduct more evaluations of its programs and incorporating the results into its strategic planning process, SBA lacks a critical source of information for ensuring the validity and effectiveness of its goals, objectives, and strategies. In addition, SBA lacks pertinent information that would help in determining the effectiveness of both new and existing programs. SBA needs better planning and oversight in several key management areas. We reviewed SBA’s management of its (1) human capital, (2) organizational structure, (3) enterprise risk, (4) acquisition, and (5) procedural guidance. We found that SBA continued to face long-standing human capital challenges and had not completed development of a workforce plan or training goals to help address them. We also found that SBA faced a skill gap resulting from a 2004 reorganization, and as of June 2015 SBA officials told us the agency had not completed an assessment of its organizational structure. Further, SBA initiated in 2009 efforts to implement enterprise risk management but had only recently begun assessing agency-wide risks and lacked adequate documentation of its progress and future plans. 
In the area of acquisition management, SBA hired a contractor to assess its acquisition operations and, as of May 2015, was in the process of finalizing its action plan in response to the contractor’s findings. Finally, we found that SBA had not updated many of its guidance documents that it had identified as outdated. As previously noted, SBA’s OIG has identified SBA’s need for effective human capital strategies—the programs, policies, and processes that agencies use to build and manage their workforces—as one of the most significant management challenges facing the agency since 2001. We have also identified challenges to SBA’s human capital management. For example, in our January 2003 report on SBA’s management challenges, we found that SBA needed to strengthen its human capital management by, among other things, getting properly trained people into the right places, identifying the knowledge and skills requirements of its employees, and providing professional development opportunities as needed. According to SBA documents, the agency faces programmatic, demographic, and budgetary challenges that have had an effect on its workforce. To begin to address these challenges, SBA requested Voluntary Early Retirement Authority and Voluntary Separation Incentive Payments (VERA/VSIP) programs for fiscal years 2012 and 2014. Agency officials said that the VERA/VSIP programs were intended to allow SBA to begin reshaping its workforce to meet its ongoing needs in light of its evolving mission. In its applications for these programs, SBA identified the following challenges it faced. Programmatic. SBA stated that its workforce faced an ongoing skill gap resulting from the 2004 centralization of its loan processing function. SBA noted that this organizational change resulted in a gap between the competency mix of employees who had been hired for one mission and the competency mix needed to accomplish a new mission. 
Specifically, after loan processing was moved from the district offices to loan processing centers, the district offices were given new responsibilities, including business development and outreach. These new responsibilities created a skill gap because employees who were originally required to have a financial background for loan processing were now required to have different skills, such as a marketing background and interpersonal skills. SBA stated that the skill gap was particularly pronounced among 885 employees in two job series—GS-1101 and GS-1102. These employees include business opportunity specialists, economic development specialists, and procurement staff. According to SBA, despite its efforts during the last several years to address this skills imbalance through training and the fiscal year 2012 VERA/VSIP, among other things, the competency gap remains. SBA also noted that the skill gap had been compounded by recent changes in job requirements and new initiatives that required new skill sets for its employees. Demographic. SBA has stated that it has a high number of employees who will retire or will become eligible to retire in the next 5 years. As of June 24, 2014, about 25 percent of SBA employees were eligible to retire, and 50 percent will be eligible in 2019. According to SBA, its aging workforce presents two issues. First, 43 percent of those fully eligible to retire work in two mission-critical job series—GS-1101 and GS-1102—that have a significant competency gap, but SBA noted that using attrition to obtain a better competency mix in the GS-1101 job series would be slow. SBA thought that offering VERA/VSIP programs would give it an opportunity to more quickly reshape its workforce to obtain the needed competencies and skill set. Second, SBA stated that the high number of retirement-eligible employees meant that the agency needed a pipeline of new leaders. 
Creating this pipeline could mean increasing the number of employees who were at the early stages of their careers. Budgetary. SBA has also stated that current economic challenges mean that the agency cannot afford to retain staff with skills that do not support its mission. SBA noted that the fiscal years 2012 and 2014 VERA/VSIP programs kept the agency from having to implement a reduction in force in order to meet budgetary constraints, which would have exacerbated the skills imbalance. Specifically, SBA stated that under reduction-in-force procedures, the agency would lose the types of employees it had recently been recruiting—those with the needed mix of competencies to better ensure SBA’s mission success. SBA has recently developed goals and objectives for its strategic human capital plan and has developed an accountability policy, steps that should help improve its human capital management. It also obtained authority for the two VERA/VSIP programs that it believed would help reshape its workforce. Strategic human capital plan. OPM requires agencies to have documented evidence of a current agency human capital plan that includes human capital goals, objectives, and performance measures. Additionally, in our 2003 report on strategic workforce planning we concluded that periodic measurement of an agency’s progress toward human capital goals provided information for effective oversight by identifying performance shortfalls and appropriate corrective actions. SBA’s fiscal years 2013-2016 strategic human capital plan incorporates these practices. For example, the plan identifies human capital goals and objectives, such as building strategic partnerships and incorporating human capital flexibilities, and is designed to support SBA’s agency-wide strategic plan, particularly SBA’s strategic objective to invest in its employees. The plan also includes action items and performance measures that SBA tracks annually and demographic information about SBA’s workforce. 
Human capital accountability policy. OPM requires agencies to have documented evidence of a human capital accountability system that provides for an annual assessment of agency human capital management progress and results. SBA has developed a human capital accountability policy that outlines SBA’s processes for evaluating its human capital systems and a multiyear schedule for conducting human resource program assessments. VERA/VSIP programs. As discussed previously, SBA requested VERA/VSIP authority for fiscal years 2012 and 2014 in hopes that these initiatives would enable the agency to begin reshaping its workforce to meet its ongoing needs. In both years, OPM authorized SBA to offer this authority to 300 employees. SBA has taken several other steps to begin addressing its human capital management challenges, including working toward a workforce plan and identifying mission-critical competencies. According to federal internal control standards, workforce planning is a key internal control that allows agency management to ensure that skill needs are continually assessed and that the organization is able to obtain and maintain a workforce with the skills necessary to achieve organizational goals. Although agencies may take various approaches to workforce planning, in a December 2003 report we identified key principles they should address. SBA has taken some recent actions to incorporate these principles but has not completed a formal workforce plan that fully incorporates them (see table 2). Workforce plan. SBA indicated in its fiscal years 2013-2016 strategic human capital plan that it was working on a separate workforce plan. SBA officials said that they had taken steps to develop the plan but as of May 2015 had not completed it. SBA officials told us that they had been unable to complete the workforce plan because a current agency-wide competency and skill gap assessment was necessary to develop it. 
As discussed in more detail later, they have not yet completed this assessment because of a delay in deploying the system needed to conduct it. As stated earlier, SBA has faced a long-standing skills imbalance resulting from organizational changes dating to 2004. Critical skills and competencies. SBA has taken steps to identify competencies for its mission-critical occupations. For example, in 2011 SBA conducted competency assessments for its human resources staff and its managers and supervisors. In addition, in fiscal year 2013 SBA’s Office of Human Resources Solutions reviewed 31 of SBA’s mission-critical position descriptions, developed competency lists, and requested that program offices review and make adjustments to those lists. SBA’s Office of Disaster Assistance has also taken steps to identify competencies for its employees by developing a baseline competency framework based on OPM-recommended competencies, current position descriptions, performance goals, and organizational strategic goals. However, SBA has not completed an agency-wide competency and skill gap assessment since 2006, and an up-to-date assessment is critical for determining whether there are additional skill gaps in its current workforce. In 2012, SBA began using an electronic system called the Talent Management Center to manage employee training and performance. The system consists of two components—a learning management module and a performance management module. The learning management module is a software application that provides SBA employees with access to online training courses and allows employees to track their training. The performance management module allows employees to track their performance goals and evaluations. According to SBA officials, the contractor that implemented the Talent Management Center was also going to implement a tool as part of the learning management module that would allow SBA to conduct a competency and skill gap assessment. 
However, SBA was unable to conduct the assessment because the contractor did not deploy the tool on time as planned. SBA officials told us the agency had contracted with another vendor to conduct the assessment during fiscal year 2015 and had held several meetings with the vendor to discuss the methodology and process for the assessment as of June 2015. Gap-closure strategies. As discussed earlier, SBA has taken initial steps aimed at addressing the skills imbalance in its workforce that resulted from its 2004 organizational change. SBA’s VERA/VSIP programs in fiscal years 2012 and 2014 were intended to provide the agency needed flexibility to address this skills imbalance by creating vacancies that would allow it to strategically recruit new employees who have the needed competencies and skills. Under the VERA/VSIP programs, a total of 327 employees left the agency, but this number was lower than SBA expected. In addition, SBA has developed a Leadership Succession Plan. The purpose of the plan is to strengthen current and future agency leadership capacity by creating leadership readiness programs and adopting a succession planning model to develop pools of potential leaders. The plan also outlines other succession strategies, such as job rotations to broaden employees’ understanding across different functional areas of the agency and a mentoring program to help employees clarify career goals and analyze strengths and developmental needs. In addition, a senior SBA official stated that improving human capital management was a priority for the Administrator and that SBA was focused on developing different ways to recruit a younger and more diverse workforce. For example, SBA has revised its Presidential Management Fellows program with the goal of improving retention and is working with the Peace Corps to identify returning volunteers who may be interested in a career in public service. 
Despite these steps, SBA does not have a current agency-wide competency and skill gap assessment and as a result cannot develop and document an effective long-term strategy to fully address its previously identified skill gaps and any additional skill gaps that may exist. SBA officials told us they had not developed a long-term plan because they were relying in part on their VERA/VSIP programs to help reshape SBA’s workforce to address its long-standing skills imbalance. SBA developed guidance outlining how vacancies were to be filled after the fiscal year 2014 VERA/VSIP program, and SBA officials stated that options for restructuring and related hiring following this program were still being considered as of May 2015. However, because both the fiscal year 2012 and fiscal year 2014 VERA/VSIP programs resulted in a smaller-than-expected number of retirees, whether these efforts will ultimately allow SBA to reshape its workforce to achieve its needed skill mix is unclear. Support capacity. Although SBA has policies in place that enable the use of human capital flexibilities to support its workforce, such as a recruitment and retention incentives policy, the use of these flexibilities is not directly tied to a workforce plan. Evaluation. SBA has taken steps to monitor and evaluate its progress toward its human capital goals through its human capital accountability policy and tracking progress of the measures in its strategic human capital plan. However, because SBA has not established a strategic workforce plan, the agency has not monitored and evaluated the results of its workforce planning efforts. Without a workforce plan that fully addresses key principles, including a current agency-wide competency and skill gap assessment and a long- term strategy to close skill gaps, SBA cannot provide reasonable assurance that its workforce has the skills needed to meet the agency’s mission. 
For example, having a current assessment and completed workforce plan prior to its early retirement programs would have helped SBA target its hiring and retention efforts. Without having first taken these steps, SBA risked compromising its efforts to reshape the agency. In a 2004 report, we concluded that effective training and development programs are an integral part of a learning environment that can enhance an agency’s ability to attract and retain employees with the skills and competencies needed to achieve results. We also noted that training and development programs help an agency achieve its mission and meet its goals by improving individual and ultimately organizational performance. In the same 2004 report, we identified key principles that could help federal agencies produce a strategic approach to their training and development efforts. SBA has taken steps to incorporate these principles but has done so only in part (see table 3). Planning. SBA officials told us that SBA had conducted a training needs assessment in the summer of 2014, which helped the agency identify a list of top training courses for its employees. SBA also developed a fiscal years 2014-2015 training plan that outlines SBA’s major training programs and activities. However, the plan does not fully establish a strategic approach to training that would help achieve agency results. First, it does not establish training goals and related performance measures to help SBA determine whether its training and development programs are achieving their intended results. SBA officials told us that they had not developed these goals and measures because the employees developing the plan had left the agency, but that they planned to develop them for the next iteration of the training plan. However, as of June 2015, SBA did not have an expected completion date for the revised plan. Second, as previously discussed, SBA has not conducted a competency and skill gap assessment since 2006. 
Third, SBA officials told us that the training plan incorporated input from supervisors but did not directly incorporate employee development goals because the agency was not required to have individual development plans for its staff. However, while SBA is not required to have individual development plans, it could choose to require them or to obtain employee development goals through other means. Design and development. SBA has also taken steps to identify specific training and development initiatives. For example, as discussed earlier, SBA recently began using an electronic system called the Talent Management Center, which, among other things, allows employees to take certain training courses online. SBA officials stated that SBA was developing the curriculum for its online courses in consultation with program office management, supervisors, and hiring managers. SBA officials told us that they decided to use this system due to its cost-effectiveness and flexibility. The training plan also identifies a number of other training initiatives, such as a leadership development program. However, whether these training and development initiatives are directly connected to improving individual and agency performance is unclear. For example, although SBA launched an electronic learning module and is developing a curriculum for it, the lack of a completed competency assessment makes it difficult to specifically design the curriculum to improve individual performance. Furthermore, while SBA offers a mix of centralized and decentralized training programs, a recent training assessment it conducted found that the decentralized training that program offices provide receives no review or oversight to detect duplicative offerings or identify opportunities to provide training more effectively and efficiently. According to SBA officials, its new electronic learning module will enable the agency to track and monitor both decentralized and centralized training. 
In addition, a senior SBA official stated that SBA was considering ways to offer more systematic training and mentorship programs. Implementation. SBA officials told us that they had taken steps to communicate information about training efforts to employees by, for example, publishing notices about upcoming training opportunities in the SBA Daily. SBA officials also told us that they had provided employees with training on how to use SBA’s new electronic learning module and training for each staff member on their positions as part of their professional development. However, whether SBA has taken actions to foster an environment conducive to effective training and development is unclear. For example, results from SBA’s 2014 Federal Employee Viewpoint Survey (FEVS) showed that just over one-third of employees felt that their training needs had been assessed. Specifically, in response to a question asking whether their training needs were assessed, 39.22 percent (537 employees) agreed or strongly agreed, 24.82 percent (341 employees) neither agreed nor disagreed, and 35.96 percent (489 employees) disagreed or strongly disagreed. In addition, about 40 percent indicated that they were satisfied with the training they had received. Specifically, in response to a question asking whether they were satisfied with the training they had received for their present job, 39.93 percent (533 employees) were satisfied or very satisfied, 25.48 percent (343 employees) were neither satisfied nor dissatisfied, and 34.59 percent (457 employees) were dissatisfied or very dissatisfied. The SBA district office employees we interviewed also expressed mixed views about the training provided by SBA. For example, three employees stated that they received helpful training related to their positions. However, 15 employees described difficulties obtaining the training they needed. 
For example, 9 of these employees—including 4 lender relations specialists, 2 economic development specialists, and 1 business opportunity specialist—stated that they did not receive any formal training related to their positions. Two employees stated that they had multiple job responsibilities but did not receive the training needed to perform them all. The other 4 employees stated that the training they did receive was not helpful or relevant to meeting their job responsibilities. Evaluation. In fiscal year 2012, SBA’s Office of Human Resources Solutions established an accountability function in its Strategy, Policy, and Accountability Division with responsibility for conducting internal assessments. The program assessment schedule calls for reviewing SBA’s training programs annually, and SBA conducted its first assessment of the centralized training program under this schedule in 2013. SBA found significant weaknesses and areas of noncompliance with regulatory requirements, including not having evaluated its training programs on a regular basis, not maintaining records of training and expenditures, and not addressing federally mandated training requirements in written policies. The assessment contained 14 required actions designed to strengthen SBA’s training and development programs and ensure regulatory compliance. In April 2015, SBA completed its fiscal year 2014 assessment of its training program to determine the agency’s progress on correcting these deficiencies and found that 10 of the 14 required actions remained incomplete. SBA stated that it planned to assess its training program again at the end of fiscal year 2015. Without a more strategic approach to its training and development programs, including incorporating training goals and measures and input on employee development goals in its training plan, it will be difficult for SBA to effectively establish priorities in its training initiatives or address skill gaps. 
In our 2003 report on results-oriented cultures, we identified key principles in employee performance management. We concluded that an effective employee performance management system can be a strategic tool to drive internal change and achieve desired results. Specifically, we concluded that employee performance management systems must show how team, unit, and individual performance can contribute to overall organizational results and that the system serves as the basis for setting employee expectations and for evaluating individual performance. In 2011, SBA updated critical elements and performance standards for its employees in the field, and in 2012 began using a new electronic system to manage performance (the Talent Management Center). But it did not update its March 15, 2000, SOP before the new system and standards were implemented. According to a recent SBA assessment of its performance appraisal program, the existing SOP does not reflect the agency’s current practices. SBA officials stated that as of August 2015 they had revised the SOP, it had been signed by the Administrator, and it was in the process of being published. Prior to finalizing its SOP, SBA officials stated that the agency had provided employees with guidance on using the new electronic system and on new performance standards for the field. Our review of a set of critical elements and performance standards that we received from SBA indicated that field office employees are primarily evaluated on the basis of quantitative measures. For example, business opportunity specialists are evaluated in part on their participation in outreach events and procurement visits to assigned entities. In order to receive the highest rating of five for these activities, these specialists must conduct or participate in more than 65 events annually. According to the elements and standards, supervisors are responsible for monitoring the quality of employee activities. 
The elements and standards do not describe what criteria supervisors should apply to make that determination, but SBA officials stated that SBA provides program offices with guidance on incorporating qualitative measures. Some of the 58 SBA managers and employees we spoke with in SBA’s regional and district offices expressed mixed views about SBA’s new employee performance management system. For example, 11 (10 managers and 1 nonmanager) said the system clearly laid out employee performance expectations. However, some managers and employees criticized certain aspects of SBA’s employee performance management system. Specifically, 5 (2 managers and 3 nonmanagers) said the performance appraisal elements and standards focused primarily on quantitative measures and did not account for quality. As discussed earlier, our review of a set of critical elements and performance standards also indicated a lack of qualitative measures. In addition, 6 (3 managers and 3 nonmanagers) stated that there were problems with technical aspects of the electronic performance system. For example, 4 (1 manager and 3 nonmanagers) said that they had to track their performance activities in separate systems that did not communicate with one another. Employees responding to SBA’s 2014 FEVS also expressed mixed views about SBA’s employee performance management system. For example, in response to a question asking whether SBA employees understood what they had to do to be rated at different performance levels, 71.87 percent (986 employees) agreed or strongly agreed, 11.20 percent (158 employees) neither agreed nor disagreed, and 16.92 percent (229 employees) either disagreed or strongly disagreed. 
In response to another question asking employees whether they believed their performance appraisal was a fair reflection of their performance, 68.06 percent (938 employees) agreed or strongly agreed, 13.03 percent (180 employees) neither agreed nor disagreed, and 18.91 percent (254 employees) either disagreed or strongly disagreed. However, in response to a question asking whether differences in performance were recognized in a meaningful way, 36.09 percent (472 employees) agreed or strongly agreed, 24.90 percent (324 employees) neither agreed nor disagreed, and 39.01 percent (509 employees) disagreed or strongly disagreed. SBA officials told us that a committee of district directors was working with the Office of Human Resources Solutions to bring qualitative components back into the performance appraisal elements and standards for the field offices and to resolve technical problems with the system. SBA officials stated that these efforts were ongoing, but the agency does not have an expected completion date. Despite long-standing organizational challenges affecting program oversight and human capital management, SBA officials told us that as of June 2015, SBA had not completed an assessment of its structure to determine how to address the challenges, nor had it made any needed changes. In a January 2003 report on SBA’s management challenges, we found that the agency’s organizational structure created complex overlapping relationships among offices that contributed to challenges in delivering services to small businesses. In 2004, SBA centralized its loan functions by moving responsibilities from district offices to loan processing centers. However, some of the complex overlapping relationships we identified in 2003 still exist (see fig. 6). Specifically, SBA’s organizational structure often results in working relationships between headquarters and field offices that differ from reporting relationships, potentially posing programmatic challenges. 
District officials work with program offices at SBA’s headquarters to implement the agency’s programs, but these officials report to regional administrators, who themselves report to the Office of Field Operations. For example, the lender relations specialists in the district offices work with the Office of Capital Access at SBA headquarters to deliver programs but report to district office management. Similarly, the business opportunity specialists in the district offices work with the Office of Government Contracting and Business Development at SBA headquarters to assist small businesses with securing government contracts but report to district office management. Further, some officials have the same duties. The public affairs specialists at the district offices and the regional communications directors both handle media relations. In addition, district directors and regional administrators both are to conduct outreach to maintain partnerships with small business stakeholders such as chambers of commerce; lending institutions; economic development organizations; and federal, state, regional, and local governments. They also participate in media activities and speak at public events. In later reports, we and others—including SBA itself—identified organizational challenges that affected SBA’s program oversight and human capital management. In a March 2010 report on the 8(a) business development program, we found a breakdown in communication between SBA district offices and headquarters (due in part to the agency’s organizational structure) that resulted in inconsistencies in the way district offices delivered the program. For example, in about half of the 8(a) files we reviewed, we found that district staff did not follow the required annual review procedures for determining continued eligibility for the program. This was due in part to the lack of clear guidance from headquarters. 
In addition, we found that confusion over roles and responsibilities led to district staff being unaware of the types and frequency of complaints across the agency on the eligibility of firms participating in the 8(a) program. As a result, district staff lacked information that could be used to help identify issues relating to program integrity. As discussed earlier, we made six recommendations, including that SBA provide more guidance to help ensure staff more consistently follow procedures, and SBA agreed with them. As of July 2015, SBA had taken actions responsive to four of the recommendations. In addition, in 2013 the SBA OIG found that communication from a headquarters program office to field offices about conducting examinations for a specific program had been limited. The report noted that this lack of communication could have not only inhibited the sharing of crucial information but also caused inconsistencies in the examinations across field offices. It concluded that these weaknesses in the examination process had diminished the agency’s ability to identify regulatory violations and other noncompliance issues in the operation of the program. The OIG recommended that SBA create and execute a plan to improve the internal operations of the examination function, including a plan for better communication. Although SBA disagreed with the recommendation, the agency issued examination guidelines that the OIG in 2015 deemed satisfactory to close the recommendation. In documentation requesting its fiscal year 2012 and 2014 VERA/VSIP programs, SBA said that long-standing skill gaps, primarily in field offices, which had resulted from the 2004 reorganization and centralization of the loan processing function, still existed. SBA determined that its organizational changes had resulted in a programmatic challenge because employees hired for a former mission did not have the skills to meet the new mission.
Specifically, before the centralization effort field offices had primarily needed staff with a financial background to process individual loans. But the new mission required staff who could develop socially and economically disadvantaged businesses and conduct annual financial reviews of them, engage with lenders, and conduct outreach to small businesses. Despite the organizational and managerial challenges it has faced, SBA has made only incremental changes to its organizational structure since fiscal year 2005, as the following examples illustrate: In 2007, SBA reorganized five program offices and four administrative support functions in order to clearly delineate reporting levels, among other things. The agency also eliminated the Chief Operating Officer as a separate office and integrated its functions into the Office of the Administrator. In 2008, the Office of Equal Employment Opportunity and Civil Rights Compliance began reporting directly to the Associate Administrator for Management and Administration to facilitate better oversight, planning, coordination, and budgeting for all of the agency’s administrative management operations. In 2010, SBA consolidated financial management by moving its procurement function to the Office of the Chief Financial Officer and transferring day-to-day procurement operations from headquarters to the agency’s Denver Finance Center. This change was intended to improve the efficiency and effectiveness of SBA’s acquisition programs. In 2011, SBA restructured the Office of Human Capital Management in response to significant turnover that had a serious effect on the level and scope of services. The reorganization streamlined the office, which was renamed the Office of Human Resources Solutions, by reducing the number of branches and divisions.
In 2012, new offices were created in the Office of Capital Access to respond to, among other things, growth in small business lending programs and increased servicing and oversight responsibilities following the 2007-2009 financial crisis. The changes sought to help the agency become a better partner with lending institutions and nonprofit financial organizations to increase access to capital for small businesses. In 2012, SBA established a headquarters unit within the Office of Government Contracting and Business Development and made it responsible for processing the continued eligibility portion of the annual review required for participants in the 8(a) program. Prior to this change, district officials, who are also responsible for providing business development assistance to 8(a) firms, were tasked with conducting exams of continued eligibility. While district officials have continued to perform other components of the annual review, shifting the responsibility for processing continued eligibility to headquarters was designed to eliminate the conflict of interest for district officials associated with performing both assistance and oversight roles. In 2012, the Office of Field Operations revamped field office operations following a 2010 review of all position descriptions to ensure that they aligned with SBA’s strategic plan and district office strategic plans. Many position descriptions were rewritten, although there were no changes in grade or series. Before the review, district offices had two principal program delivery positions—lender relations specialist and business development specialist. As a result of the review, descriptions for both positions were rewritten, and the business development specialist position became two—economic development specialist and business opportunity specialist. 
The skills and competencies for the new position descriptions focused on the change in the district offices’ function from loan processing to compliance and community outreach in an effort to address skill gaps. As a result, staff were retrained for the rewritten positions. In 2013, SBA reestablished the Office of the Chief Operating Officer (formerly the Office of Management and Administration) to improve operating efficiency. Among other things, this change transferred Office of Management and Administration staff to the reestablished office, along with the Office of the Chief Information Officer and the Office of Disaster Planning, which saw its mission expanded to include enterprise risk management. While SBA has made incremental changes, SBA officials told us that as of June 2015, the agency had not completed an evaluation of its organizational structure and made changes as necessary in response to changing conditions. According to federal internal control standards, organizational structure affects the agency’s control environment by providing management’s framework for planning, directing, and controlling operations to achieve agency objectives. A good internal control environment requires that the agency’s organizational structure clearly define key areas of authority and responsibility and establish appropriate lines of reporting. Further, internal control guidance suggests that management periodically evaluate the organizational structure and make changes as necessary in response to changing conditions. Since its last major reorganization in 2004, SBA has seen significant changes, including decreases in budget and an increase in the number of employees eligible to retire. In 2012, the agency committed to assessing and revising its organizational structure to meet current and future SBA mission objectives. However, the contractor that SBA hired to assess its organizational structure did not begin its assessment until November 2014. 
SBA officials told us that the effort was delayed because in February 2013 SBA’s Administrator announced she was leaving the agency and the position was vacant from August 2013 until April 2014. In August 2015, SBA told us that after the new administrator reviewed business delivery models and became acclimated to the agency, the agency procured a contractor and work began on the organizational assessment in November 2014. According to the statement of work, the contractor was to assist the chief human capital officer by making recommendations regarding an agency-wide realignment to improve service delivery models, modernize systems and processes, and realign personnel, among other things. During the course of our review, SBA officials told us the contractor completed its assessment in March 2015, but SBA had not finished analyzing the results and determining what organizational changes, if any, to make. In its August 2015 comments on a draft of this report, SBA noted that the agency had recently completed that review and determined that major restructuring was not warranted at the time. However, SBA did not provide us with any documentation that shows when the assessment was completed or that supports its conclusions that major changes were not warranted. Until SBA documents its assessment, it will not have an institutional record of its actions. Instead of conducting its planned assessment and subsequent reorganization when initially scheduled, SBA used two VERA/VSIP programs to attempt to address workforce challenges resulting from the 2004 reorganization. SBA’s plans in the aftermath of the fiscal year 2014 VERA/VSIP program include restructuring. Specifically, an October 2014 guidance memorandum on staffing the agency-wide vacancies after the fiscal year 2014 VERA/VSIP stated that an Administrator’s Executive Steering Committee for SBA’s Restructuring would make decisions about restructuring. 
The memorandum also stated that the Chief Human Capital Officer had been tasked with identifying vacant FTEs for new positions that would support any new functions or initiatives envisioned by the Administrator’s restructuring efforts. For example, the memorandum noted that 82 of the 147 new vacancies from the VERA/VSIP would be used to support the Administrator’s restructuring. The memorandum added that the remaining 65 vacancies would remain in their respective program offices and that the position descriptions would be modified or positions relocated to meet internal needs. According to SBA, options for restructuring and related hiring were still being considered as of May 2015. Although SBA told us that it had recently completed an assessment of its organizational structure, it had not documented this effort as of August 2015. Until it documents its efforts to examine its structure and any findings, it will be difficult for SBA to provide reasonable assurance, or for a third party to validate, that SBA’s current organizational structure is contributing effectively to its mission objectives and programmatic goals. Given the range of programs SBA manages and oversees, having a robust enterprise risk management system is critical to effectively managing risks. SBA initiated efforts to implement enterprise risk management in 2009, noting the importance of managing the range of cross-agency risks it faces. However, it could not provide us with adequate documentation on the progress of these efforts or on any future plans and had only recently begun assessing agency-wide risks. SBA began its enterprise risk management efforts in 2009 with the designation of an unofficial chief risk officer but considers itself to be in the “early stages” of implementation. In 2013, SBA established an Office of Enterprise Risk Management under the chief operating officer and developed a process to guide its approach.
SBA officials told us that they had developed this process based in part on recommendations made by Deloitte & Touche as part of that organization’s review of the risk management practices within SBA’s Office of Capital Access. In the course of our work, SBA provided a graphic depicting the five phases of its enterprise risk management process: (1) identify risk; (2) assess risk; (3) strategize response; (4) implement; and (5) monitor and report (see fig. 7). The agency also provided a brief summary of the progress it had made in implementing these phases. However, the agency could not elaborate on its process and did not provide any other documentation of it, including the goals it hoped to achieve or the specific actions it planned to take during each phase. Federal internal control standards require that significant actions be clearly documented in, for example, management directives, administrative policies, or operating manuals. Although SBA’s enterprise risk management plans and efforts to date are not fully documented, we used available information to compare SBA’s process to our risk management framework (see table 4). GAO’s risk management framework calls for the following five phases and lays out key elements for each: (1) defining strategic goals, objectives, and constraints; (2) assessing risk; (3) evaluating alternatives; (4) selecting responses; and (5) implementing and monitoring. As table 4 shows, SBA has partially implemented two phases of its risk management process and those phases partially align with one phase of our framework. Specifically, SBA has begun to identify and assess risks, which partially aligns with our risk assessment phase. However, we rated SBA as not following the other four phases of our framework, as it had not yet made progress on implementing the remaining phases of its process. Strategic goals, objectives, and constraints.
Our risk management framework calls for fully documenting strategic goals and objectives, which should be clearly articulated and measurable, as well as any limitations or constraints that may limit effective risk management. In October 2014, the SBA Administrator approved the formation of an Enterprise Risk Management Board to ensure that the greatest risks to the agency are regularly identified, assessed, and monitored. The Administrator approved the membership of the board on April 24, 2015. According to officials, the board plans to develop a charter by September 2015. However, because the board met for the first time on April 30, 2015, SBA had not yet documented the strategic goals and objectives it is attempting to achieve, the steps needed to attain these results, or the constraints under which the agency operates, as of May 2015. Risk assessment. Our framework calls for identifying potential events that can adversely affect the agency and evaluating them based on the likelihood of occurrence and impact. SBA officials told us that as of November 2014, officials in SBA’s risk management office had interviewed program office leaders and reviewed agency processes to draft an inventory of risks and complete an initial assessment. The officials explained that they needed to further refine this assessment and that the Enterprise Risk Management Board would determine additional steps. However, because the board was still reviewing the draft risks as of May 2015, SBA had not completed the risk inventory or developed procedures to identify and evaluate the potential risks to the agency’s ability to achieve its goals and objectives. Alternatives evaluation. Our framework calls for identifying alternative ways the agency can prevent or manage an identified risk while taking into consideration the costs and benefits of the alternatives. However, to date, SBA has not been able to consider managing risks because it has not completed the risk assessment process.
According to SBA, risk responses will be guided by the Enterprise Risk Management Board. Management selection. Our framework requires management to select and document responses to potential risks and provide a rationale for its selections. To date, SBA has not proceeded to this step. As noted earlier, SBA told us that the Enterprise Risk Management Board would guide risk responses. Implementation and monitoring. Our framework includes implementing management’s selected alternatives to address risks and periodically assessing the efficiency and effectiveness of the entire risk management program. Because it has not identified risks and possible responses to them, SBA cannot proceed to this step. According to SBA officials, the Office of Risk Management will maintain records of the Enterprise Risk Management Board’s decisions and follow up with the chief risk officer to ensure that the steps the board decides on are implemented. According to a senior SBA official, the Enterprise Risk Management Board will be assessing SBA’s risks in the near future, and he plans to ask the board to consider our risk management framework at that time. Given the long-standing management challenges related to specific SBA programs discussed earlier, it may be difficult for SBA to establish an agency-wide system. However, until SBA identifies and fully documents the steps that it plans to take to implement its enterprise risk management process and incorporates the elements of our risk management framework, it will not be able to provide reasonable assurance that its enterprise risk management efforts effectively identify, assess, and manage risks before they can adversely affect SBA’s ability to achieve its mission. SBA has met its small business contracting goals and taken steps to address acquisition management challenges. Contracting represents a small portion of SBA’s obligations.
SBA’s total contract obligations for fiscal year 2014 were approximately $116.6 million, compared with total agency obligations of about $845.8 million for that year. However, in recent years the agency has exceeded its primary goals for awarding contracts to small businesses. By statute, federal agencies are to award 23 percent of their prime contract funds to small businesses. To help meet this government-wide goal, each agency has its own individual goal. In fiscal year 2013, of approximately $106.7 million in total small business-eligible contracting dollars, SBA awarded $76.8 million (about 72 percent) to small businesses, exceeding the agency’s goal of 67 percent (see table 5). Despite these successes, over the last several years the SBA OIG has identified deficiencies in several areas of SBA’s acquisition management, including lack of compliance with laws and regulations, such as the Improper Payments Information Act requirements for planning, execution, and reporting of improper payments for its contracting activities; inadequate application of funding principles, including obligating funds by issuing contract modifications without identifying specific requirements for IT hardware and software; and turnover in key contracting staff, resulting in a workforce insufficient to effectively award, administer, and oversee contracts. SBA has taken steps to address some of these deficiencies. In October 2010, SBA realigned its acquisition program by transferring its procurement function and operations to the Office of the Chief Financial Officer’s Denver Finance Center and rebranding it the Acquisition Division. This redesign was intended to improve the efficiency and effectiveness of the acquisition program by integrating acquisition and procurement activities. Since this realignment, SBA has taken further steps to improve the acquisition process, including hiring new staff and providing training to its acquisition personnel.
Specifically, according to its fiscal year 2014 Acquisition Human Capital Plan, in fiscal year 2011 SBA hired 17 new employees in the acquisition office. Each employee was given a plan with measurable outcomes and over 50 hours of individual job-specific training. In addition, in fiscal year 2011 SBA reinstituted its Contract Review Board, providing greater oversight of high-risk contracts. Members of the board include the chief acquisition officer, senior procurement executive, and director of procurement law. However, in its most recent report on SBA’s management challenges, the SBA OIG noted that while SBA had made some progress in its acquisition program, the program continued to face challenges, including (1) poorly defined requirements, (2) internal control deficiencies, (3) inadequate oversight of contractor performance, and (4) an incomplete acquisition SOP. Specifically, the fiscal year 2015 report noted that SBA, among other things, had inadequately monitored contract performance and did not provide assurance that products and services were delivered according to contract requirements; updated its acquisition SOP but did not include elements such as the use of interagency acquisitions or define postaward contract administration requirements; and did not complete the acquisition assessment required in OMB’s Memorandum for Chief Acquisition Officers: Conducting Acquisition Assessments under OMB Circular A-123. The OIG made recommendations to SBA to address these issues, including (1) completing an assessment of the agency’s acquisition activities using OMB guidance; and (2) creating and implementing a comprehensive improvement plan—based on the results of the acquisition function assessment—that has measurable goals, objectives, prioritized actions, and time frames to address any identified deficiencies. In response to these recommendations, SBA awarded a contract for an assessment of its acquisition function.
This assessment—completed at the end of March 2015—included a functional assessment of SBA acquisition operations using OMB Circular A-123, Appendix 1, “Guidelines for Assessing the Acquisition Function,” which describes the four cornerstones of acquisition management: organizational alignment and leadership, policies and processes, human capital, and information management and stewardship. In its final report, the contractor noted that SBA’s decision to realign acquisition and procurement functions under the direction of the chief financial officer was consistent with the practices of many of the Chief Financial Officers Act agencies and other small independent agencies. However, the contractor found several shortcomings with the agency’s internal controls in each of the four cornerstones. For example, the contractor noted a lack of clarity on the roles and responsibilities of the various stakeholders in the acquisition process. While most people understood their general role in the acquisition lifecycle, their specific responsibilities were not clearly defined. The contractor noted that this lack of clarity led to gaps and inconsistencies in the acquisition process. Similarly, while the contractor found that SBA had SOP documents for its acquisition process, for the most part they restated the Federal Acquisition Regulation guidelines and did not describe the processes as they exist or should exist at SBA. As such, these SOPs had limited value in creating operational consistency and did not serve as useful training tools for new members of the acquisition team. Further, the contractor noted that performance standards, while incorporated into all procurements, were generic and lacked the specificity to guide desired outcomes. The contractor made several recommendations to SBA to support the establishment, assessment, and correction of internal controls in these four areas.
Using the findings and recommendations identified from the functional assessment, the contractor was to develop a formal plan that addressed each cornerstone and would serve as an action plan to implement the contractor’s recommendations for SBA’s acquisition operations. According to SBA officials, the contractor had delivered this plan to SBA as of May 2015, but the Office of the Chief Financial Officer had not finalized its plan and had no time frame for doing so. We found that SBA’s inventory of SOPs—guidance for its staff and external parties—included outdated SOPs that did not align with current program requirements as well as SOPs that the agency had previously canceled. Federal internal control standards state that documentation—which helps managers control their processes and is essential for evaluating and analyzing operations—must be properly managed and maintained to ensure proper stewardship of and accountability for government resources and effective and efficient program results. Based on our review of an inventory of 153 internal and external SOPs, we found that nearly half (71) had not been updated in the last 10 years. Furthermore, 36 of these 71 had not been updated since the 1990s, and 19 had not been updated since the 1980s. These SOPs covered a number of programs and organizational processes. While not all SOPs may need to be updated on a regular basis, federal internal control standards state that internal controls and all transactions and other significant events need to be clearly documented and that all documentation and records should be properly managed and maintained. We and the SBA OIG have found that some of the SOPs that SBA updated in the last few years did not align with current program requirements. For example, we found in a February 2015 report that SBA had updated its SOP for the HUBZone program in 2007 but that the program subsequently underwent significant changes that the updated document did not reflect.
Similarly, as noted earlier, SBA overhauled its employee performance standards in 2011 but did not update its Performance Management and Appraisal System SOP from 2000. Finally, in a March 2011 report the SBA OIG found that the disaster loan servicing centers lacked a clearly defined records management and documentation process and therefore did not consistently make and preserve records containing adequate and proper documentation. Records that should have been preserved because they contained evidence of agency activities or information of value to the agency were not systematically maintained. As a result, the SBA OIG recommended that SBA make and preserve records containing adequate and proper documentation of procedures for its oversight of its loan servicing programs, including incorporating these procedures into the relevant SOPs. As of February 2015, SBA had not implemented the SBA OIG’s 2011 recommendations on updating the SOPs for its loan servicing procedures. In fiscal year 2014, SBA’s Office of the Chief Operating Officer, Office of Administrative Services began a review of the status of all SOPs, working with the program offices to determine whether any updates were needed. Specifically, SBA issued a notice requiring all office heads to certify in writing the status of their SOPs. For each SOP, the cognizant office was to note whether (1) it did not require any revision, (2) it was under review, (3) it was being revised, or (4) it was being canceled. If an SOP was deemed to fall within one of the last three categories, the office was to provide the date by which the action would be completed. As a result of that review, SBA created a spreadsheet that flagged some SOPs as outdated and some for cancelation. Of the 165 SOPs reviewed as of March 2015, SBA determined that 74 needed to be revised, 31 needed to be canceled, and 60 required no revision. SBA also determined that it needed to issue an additional 9 new SOPs (see app. 
IV for a list of all SOPs and their status). However, many of these outdated and canceled SOPs were still on SBA’s internal website in March 2015 when we requested an updated list, raising questions as to whether SBA staff and partners may be using outdated or canceled SOPs. Further, in most cases SBA’s spreadsheet did not include projected completion dates for revising or canceling old SOPs or creating new ones. SBA officials told us that they lacked resources to update their SOP inventory but were in the process of revising those that had been flagged for revision. They also told us that they planned to revise the SOP for the entire Records Management Program and send it to the administrator for approval in fiscal year 2015. In addition, a senior SBA official noted that several SOPs had been updated and were undergoing final review. Without setting time frames to help ensure that SOPs are properly maintained and periodically updated, it will be difficult for SBA to hold staff accountable for updating the SOPs as intended and to illustrate its progress in doing so. Moreover, without updated SOPs, agency staff and their partners may not have clear guidance on how to most effectively deliver program services in accordance with laws and regulations. In a February 2015 report, we identified the management of information technology acquisitions and operations as an area that presents a high risk to the federal government. We reported that agency IT efforts too frequently failed, incurred cost overruns and schedule slippages, or did not meet mission goals because of challenges in managing the investments. The federal government has undertaken several initiatives to better manage its IT investments, with the goal of increasing efficiency and reducing costs. However, agencies continue to have poorly performing projects, such as SBA’s LMAS project, which we discussed earlier.
While SBA has taken steps to implement six IT initiatives set out by OMB, it has not fully completed all of them. TechStat reviews. As part of the Federal Chief Information Officer’s (CIO) 25 Point Implementation Plan, in December 2010, OMB empowered agency CIOs to hold agency-level TechStat accountability reviews for investments that are at risk. According to OMB’s instructions, agency CIOs are to report the status of risk of their investments on the Federal IT Dashboard. Using the ratings (high, medium, or low risk), agencies determine whether or not to hold a TechStat review of an investment and decide, based on the evidence presented, whether to intervene to turn around, halt, or terminate a project. OMB also required federal agencies to hold at least one TechStat review by March 2011. Based on the success of the TechStat initiative, OMB issued a requirement in August 2011 that agency CIOs continue holding such reviews. From December 2010 through February 2015, SBA reported three investments on the Federal IT Dashboard that exhibited moderately high risk. In March 2011, SBA held a TechStat session for one of those investments—the Homeland Security Presidential Directive (HSPD)-12, a system to implement a new government-wide standard for secure and reliable forms of identification for employees and contractors who access government-controlled facilities and information systems. As of February 2015, SBA’s HSPD-12 system was still considered to be at moderately high risk. SBA officials stated that this investment was rated moderately high risk because they had not yet implemented the requirements that would allow the identification card readers to work properly in the field. SBA did not hold a TechStat for the second investment—the Loan Accounting System—which was rated moderately high risk from July 2011 through February 2012. SBA changed the investment’s rating several times, from medium to low risk, and it has been rated low risk since June 2014.
SBA officials attributed the low risk rating to the completion of associated projects in November 2014 that previously had not performed as expected. They added that the investments were being continually reviewed by SBA’s CIO and the Executive Steering Committee. The third investment rated moderately high risk from September 2014 through February 2015 was the Office of the CIO IT Infrastructure investment. SBA officials said that the moderately high-risk rating of this investment was due to a delay in data center consolidation efforts that resulted from a lack of funding. SBA officials stated that they had not conducted any additional TechStat reviews since the one held in 2011, explaining that under the previous CIO’s management, it had been decided that the agency would leverage other oversight and governance efforts for all of its IT investments. These efforts included discussing the overall performance of investments and prescribing action items when warranted to help ensure investments were on schedule, within budget, and meeting defined metrics. SBA said that as warranted, it would hold formal TechStat sessions in the future for underperforming IT investments. Operational analysis of IT investments. OMB guidance calls for agencies to develop a policy for examining the ongoing performance of existing IT investments to measure, among other things, whether the investment is continuing to meet business and customer needs and is contributing to meeting the agency’s strategic goals. The policy is to require annual operational analyses of the agency’s investments that address costs, schedules, customer satisfaction, strategic and business results, financial goals, and innovation. SBA officials said that operational analysis reports had not been prepared but that the CIO reviewed the investments in SBA’s portfolio every month. 
They added that the agency had focused on ensuring that investments were up-to-date and providing accurate risk status, schedule, cost, and performance metrics for each. Although SBA officials told us that the agency’s operational investments were reviewed periodically, the 2014 reviews were not documented according to OMB guidance. As a result, we could not evaluate the extent to which the reviews followed OMB guidance for operational analyses. Agency officials also said that they planned to conduct an analysis of each investment in fiscal year 2015. However, as of July 2015, no such analyses had been documented. Until SBA ensures that all its existing investments are fully assessed and that all reviews are appropriately documented, it will not be able to determine whether its IT investments are meeting their intended objectives, increasing the risk of inefficient spending.

Federal data center consolidation. In February 2010, the Federal CIO established the Federal Data Center Consolidation Initiative to address the growing number of federal data centers. This initiative’s four high-level goals were to reduce the overall energy consumption and real estate footprint of government data centers; reduce the cost of data center hardware, software, and operations; increase the overall IT security posture of the government; and shift IT investments to more efficient computing platforms and technologies. OMB guidance requires SBA and other federal agencies to submit an updated data center inventory that includes 4 elements and a consolidation plan with 13 elements. In July 2011 and July 2012 reports, we found that SBA had developed plans to consolidate its large data centers from four to two by December 2015 but that its inventory and plans for its data centers did not address all the elements required by OMB guidance. Furthermore, OMB guidance required agencies to describe year-by-year investments and cost savings in their 2010 and 2011 consolidation plans.
Beginning in August 2013, agencies were to identify and report all cost savings and avoidances related to data center consolidation, among other areas, as part of a quarterly data collection process. In the 2012 report, we found that SBA’s June 2011 update to its inventory and plans included some inventory elements that had been completed and others that had not. Specifically, the update included 2 completed inventory elements (of 4) and 6 completed plan elements (of 13); 2 partially completed inventory elements and 2 partially completed plan elements; and 5 plan elements with no information. SBA officials stated that several missing elements, such as performance metrics, a schedule, and a risk management strategy, had been developed after the plan’s completion. We emphasized the importance of fully implementing the recommendation from our 2011 report that SBA complete the missing elements from its inventories and plans. SBA neither agreed nor disagreed with our recommendation, but has since taken the steps necessary to implement it.

PortfolioStat. In March 2012, OMB launched the PortfolioStat initiative, which requires SBA and other federal agencies to conduct an annual agency-wide portfolio review of all their IT investments to, among other things, reduce commodity IT spending and demonstrate how IT investments align with agency missions and business functions. PortfolioStat is designed to assist agencies in assessing the current maturity of their IT portfolio management process, making decisions on eliminating duplication, and moving to shared services in order to maximize the return on IT investments across the portfolio. OMB established several requirements for agencies implementing PortfolioStat, including reporting estimated savings and cost avoidances associated with consolidation and shared services initiatives through fiscal year 2015 and completing a final action plan that addressed additional elements.
In a November 2013 report on the progress SBA and other agencies had made in conducting PortfolioStat reviews, we found that SBA had held a PortfolioStat review for some IT investments but had completed only some of the OMB requirements for conducting the reviews. Specifically, SBA had designated a PortfolioStat lead, completed an IT portfolio survey, held a PortfolioStat meeting, completed two migration efforts, and reported lessons learned. The agency reported that it had identified six PortfolioStat initiatives, of which four had resulted in cost savings of about $800,000. However, SBA had not completed its commodity IT baseline because it had not identified a process for ensuring the completeness of the baseline information. Additionally, SBA had not completed an action plan with all the required elements. Specifically, while its plan fully addressed four elements, it partially addressed three and did not address one. We recommended that the agency take two actions: (1) develop a complete commodity IT baseline and (2) fully describe the required PortfolioStat action plan elements in future reporting to OMB. SBA neither agreed nor disagreed with our recommendations, but SBA officials told us that as of May 2015, they had begun to take steps to address them. For example, SBA officials told us that the agency had procured tools that would help it develop its commodity IT baseline and that it was reporting to OMB quarterly on the status of its implementation of action plan elements.

Cloud computing strategy. In order to accelerate the adoption of cloud computing services across the government, OMB’s 25 Point Plan included a “Cloud First” policy. This policy requires each agency CIO to fully migrate three services by June 2012 to an Internet-based cloud service providing computing services and resources, and to implement cloud services whenever a secure, reliable, and cost-effective cloud option is available.
Building on this requirement, in February 2011, OMB issued the Federal Cloud Computing Strategy, which provided definitions of cloud computing services; benefits of the services, such as accelerating data center consolidations; case studies to support agencies’ migration to cloud computing; and roles and responsibilities for federal agencies. In a July 2012 report, we found that SBA had implemented one cloud-based computing service by OMB’s deadline of June 2012, and an additional two by the end of 2012, but that the agency’s plans for doing so lacked estimated costs and performance goals. In addition, the agency had not developed any plans for additional cloud-based services. We recommended that SBA establish estimated cost and performance goals for additional cloud-based services and that it develop for them, at a minimum, estimated costs, milestones, performance goals, and plans for retiring legacy systems, as applicable. SBA officials responded that the agency would work to implement the recommendations. In May 2015, SBA officials told us that they were continuing to implement the agency’s cloud computing strategy and that they were in the early stages of implementing a cloud e-mail solution to replace the agency’s legacy e-mail system. In addition, in September 2014, we evaluated SBA’s progress in implementing cloud services and the extent to which it had experienced cost savings, in light of OMB guidance that called for agencies to assess all IT services for migration to a cloud service irrespective of the investment’s age. We found that SBA had not assessed a majority of its IT investments for cloud computing services. We recommended that SBA direct its CIO to ensure that all IT investments be assessed for suitability for migration to a cloud computing service and establish evaluation dates for each investment. SBA officials concurred.
In June 2015, SBA officials told us that they had revised their SOP in response to the recommendation and that they discussed cloud computing options at each investment review. However, SBA did not document these discussions, so the extent to which they addressed the factors required by OMB is not clear.

Software licensing management. Two executive orders address the effective management of agency software licenses. Executive Order 13103 requires that federal agencies adopt policies and procedures to ensure that only computer software not in violation of copyright laws is being used. In addition, Executive Order 13589 promotes efficient spending at federal agencies, to include assessments of inventories of current devices and their usage and the establishment of controls to ensure that agencies are not paying for unused or underused IT equipment, installed software, or services. In our May 2014 report on the management of software licenses across the federal government, including the extent to which SBA and other federal agencies had developed appropriate software license management policies and were adequately managing licenses for their software, we found that SBA had not addressed all seven elements of a comprehensive license policy and had not implemented all five of the leading initiatives for managing software licenses. Specifically, SBA did not have any SOPs or a general policy to manage all software licenses agency-wide and had only partially addressed two of the five leading initiatives for managing software licenses. We recommended that SBA implement six actions to ensure the effective management of its software licenses. SBA neither agreed nor disagreed with our recommendations, but SBA officials told us that as of June 2015, they had started taking actions to address some of them. For example, SBA officials told us that they had drafted a software licensing policy that was under review and were working on a software license inventory.
While these are positive initial steps, until SBA fully implements our recommendations on software management, it lacks assurance that it can cost-effectively manage its software.

SBA is a relatively small agency with a large mission—to help Americans start, build, and grow small businesses by overseeing programs that provide tens of billions of dollars in support to these enterprises. To fulfill this important role, it is essential that SBA better plan and oversee several of its key management areas across the agency. For example, SBA has developed an agency-wide strategic plan that meets all GPRAMA requirements on the process used to develop the plan and all but one requirement on the contents of the plan. SBA’s plan only partially meets the requirement that the agency describe how it used program evaluations in setting strategic goals and include a schedule of planned evaluations. In addition, although GPRAMA aims to ensure that agencies use performance information in decision making and OMB has encouraged agencies to improve government effectiveness by increasing their use of evidence and rigorous program evaluation in making budget, management, and policy decisions, SBA has a poor track record of conducting program evaluations and told us it has limited resources to conduct them. However, conducting more program evaluations could help SBA assess program effectiveness and learn how to improve program performance. In addition, relying on program evaluations to help set strategic objectives could better enable SBA to assess the appropriateness and reasonableness of its goals and objectives and the effectiveness of the strategies used to meet them. Further, SBA faces ongoing challenges in several key management areas, including workforce planning and training and development.
SBA currently does not have a workforce plan that fully addresses key principles—including conducting and acting upon a competency and skill gap assessment and developing a long-term strategy to address its skills imbalance. As a result, it cannot provide reasonable assurance that its workforce has the skills needed to effectively administer the agency’s programs and meet the agency’s mission and strategic goals. Similarly, SBA lacks a strategic approach to its training and development programs, such as incorporating goals and measures and input on employee development goals in its training plan. Without such an approach, SBA cannot establish priorities in its training initiatives or address skill gaps to help ensure that employees can effectively deliver programs and meet SBA’s strategic objectives. Changes to SBA’s organizational structure have contributed to the agency’s skill gaps and, in some cases, have affected program oversight. Federal internal control standards state that an agency’s organizational structure should clearly define key areas of authority and responsibility and establish appropriate lines of reporting. SBA committed in 2012 to revising its organizational structure and planned some workforce restructuring related to its most recent voluntary retirement program. However, SBA had not documented efforts to assess its organizational structure as of August 2015 or completed its workforce restructuring as of June 2015. Until it documents its examination of its structure and any findings, SBA will not have an institutional record of its actions. Thus, it will be difficult for SBA to provide reasonable assurance, or for a third party to validate, that SBA’s current organizational structure contributes effectively to its mission objectives and programmatic goals. SBA has initiated efforts to identify and manage risks facing the agency at an enterprise level. 
However, SBA lacks the documentation of its progress and future plans that federal internal control standards require, and it has not incorporated elements of our risk management framework, such as goals or specific actions. Without identifying and documenting the steps that it plans to take to implement its risk management process and incorporating the elements of our framework, SBA cannot provide reasonable assurance that its efforts effectively identify, assess, and manage risks before they adversely affect SBA’s ability to achieve its mission. SBA also faces challenges in managing and maintaining adequate records. We found that many of its SOPs were outdated and did not reflect program and operating changes. The lack of up-to-date guidance can affect program delivery. Internal control activities, such as establishing the policies, procedures, and techniques for SBA programs and maintaining them through periodic updates, are essential mechanisms that help ensure that management’s directives are carried out. Without setting time frames to help ensure that SOPs are properly maintained and periodically updated, it will be difficult for SBA to hold managers accountable for completing them as intended and to demonstrate progress in doing so. Moreover, without updated SOPs, agency staff may lack clear guidance and therefore may not effectively deliver program services in accordance with laws and regulations. Finally, SBA has taken steps to implement aspects of several key IT management initiatives. However, the agency has not developed a policy for conducting regular operational analyses of all of its investments. SBA officials told us that all IT investments were reviewed regularly but could not provide documentation of recent assessments. Without these analyses, SBA cannot provide reasonable assurance that it is effectively managing its IT investments so that they meet the agency’s mission and goals, increasing the risk of inefficient spending on IT investments.
In addition, as SBA begins to implement new systems intended to improve agency operations, such as SBA One, fully applying accepted IT management initiatives to these efforts will be important. Otherwise, SBA risks the new systems not meeting cost, schedule, or capability goals.

We make the following eight recommendations to improve management of the Small Business Administration.

1. To ensure that SBA assesses the effectiveness of its programs, we recommend that the SBA Administrator prioritize resources to conduct additional program evaluations.

2. To ensure that SBA fully meets GPRAMA requirements, we recommend that the SBA Administrator use the results of additional evaluations it conducts in its strategic planning process and ensure the agency’s next strategic plan includes required information on program evaluations, including a schedule of future evaluations.

3. To improve SBA’s human capital management, we recommend that the SBA Administrator complete a workforce plan that includes key principles such as a competency and skill gap assessment and long-term strategies to address its skill imbalances.

4. To improve SBA’s human capital management, we recommend that the SBA Administrator incorporate into its next training plan key principles such as goals and measures for its training programs and input on employee development goals.

5. To ensure that SBA’s organizational structure helps the agency meet its mission, we recommend that the SBA Administrator document the assessment of the agency’s organizational structure, including any necessary changes to, for example, better ensure that areas of authority and responsibility and lines of reporting are clear and defined.

6. To ensure that SBA can effectively identify, assess, and manage risks, we recommend that the SBA Administrator develop its enterprise risk management consistent with GAO’s risk management framework and document the specific steps that the agency plans to take to implement its enterprise risk management process.

7.
To improve SBA’s program and management guidance, we recommend that the SBA Administrator set time frames for periodically reviewing and updating its SOPs as appropriate.

8. To help ensure that SBA’s IT operations and maintenance investments are continuing to meet business and customer needs and the agency’s strategic goals, we recommend that the SBA Administrator direct the appropriate officials to perform an annual operational analysis on all SBA investments in accordance with OMB guidance.

We requested comments from SBA on a draft of this report, and the agency provided written comments that are presented in appendix V. SBA stated that overall it generally agreed with our recommendations. In response to our recommendation that the SBA Administrator complete the assessment of the agency's organizational structure and make any necessary changes, SBA concurred but noted that the agency had initiated a full review of its organizational structure shortly after the current Administrator was confirmed in April 2014. SBA noted that the agency had recently completed that review and determined that major restructuring was not warranted at the time. However, SBA has not provided us with any documentation of its assessment, either during the course of our review or when it provided comments on a draft of this report. Therefore, we revised the recommendation to clarify that SBA should document its assessment, including the results and any changes. In response to our two recommendations that SBA conduct additional program evaluations and use the results in its strategic planning process, SBA generally agreed but noted that it would face challenges in implementing them. Specifically, SBA stated that it was currently restricted from collecting data on small businesses from some resource partners. Further, SBA said that it did not have adequate information collection systems for some programs that could house and assess the data needed for evaluations.
SBA also noted that independent evaluation studies consistent with OMB and GAO preferred methodologies would be costly. As a result, SBA said its ability to satisfy the recommendations would depend, at least in part, on the agency's ability to receive funding for them or find other viable evaluation methods. SBA stated that in prior years the agency had requested additional funding to conduct independent evaluations but had not received it. The agency agreed with our two recommendations related to SBA’s human capital management, adding that a workforce plan was under development that would address the need for competency and skills gap analysis and that contained strategies to address skill imbalances through recruitment and training. SBA also noted that its projected analyses would include goals and measures for training and development. SBA added that it had procured a contractor to perform a competency gap analysis and that additional contractor support would allow for a final draft of a workforce plan in early fiscal year 2016. SBA agreed with our recommendation regarding the agency’s enterprise risk management process. SBA said that the agency was working to document its process, which is to include aspects of various available frameworks, including those published by GAO, the Committee of Sponsoring Organizations of the Treadway Commission, and the International Organization for Standardization. Further, SBA agreed with our recommendation on updating SOPs. In its comment letter, SBA noted that the agency was conducting annual reviews of SOPs and working to streamline the clearance process for the publication of new and updated SOPs. However, as we stated in the report, SBA’s most recent effort to review its SOPs began in fiscal year 2014, and in most cases the results did not include projected completion dates for revising or canceling old SOPs or creating new ones. 
SBA also concurred with our recommendation on performing an annual operational analysis on all SBA IT investments in accordance with OMB guidance. SBA also provided several technical comments, which we incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to SBA and appropriate congressional committees. This report also will be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

This report examines the extent to which the U.S. Small Business Administration (SBA) (1) has addressed previously identified management challenges related to specific programs, including those related to internal controls; (2) is following federal requirements for strategic planning; (3) is following key principles or internal controls for human capital management, organizational structure, enterprise risk management, acquisition management, and procedural guidance; and (4) is making progress in implementing the Office of Management and Budget’s (OMB) high-priority management practices for information technology. For the background, we analyzed data on staffing levels at headquarters, regional, and district offices in fiscal year 2014. To assess the reliability of these data, we interviewed SBA officials from the Office of Human Resources Solutions to gather information on the completeness and accuracy of the full-time equivalent database and examined the data for logical inconsistencies and completeness.
We determined that the data were sufficiently reliable for the purposes of reporting on staffing levels. To address our objectives, we reviewed relevant federal laws and regulations and interviewed SBA officials, including headquarters officials, all 10 regional administrators, management and nonmanagement staff at 10 district offices, and union representatives. We interviewed headquarters officials within the following SBA offices: Capital Access, Entrepreneurial Development, Government Contracting and Business Development, Disaster Assistance, Performance Management and Chief Financial Officer, Field Operations, and Human Resources Solutions. To obtain perspectives from SBA district office officials on our objectives, we selected a purposive sample of 10 of the 68 district offices, 1 from each SBA region, to provide national coverage. We randomly selected 7 of the 10 district offices from those located within the continental United States. We selected the Washington, D.C., and Georgia district offices to pre-test our interview questions because of their proximity to GAO offices. We selected the New York district office to include an additional large office, ensuring a variety of offices with both larger and smaller numbers of employees. During our on-site meetings at the 10 district offices, we interviewed managers such as district directors and deputy district directors at each location. For our interviews with nonmanagement staff at the 10 district offices, district office management invited any interested nonmanagement staff to meet with us. However, SBA required the presence of district counsel during these interviews. Participation by nonmanagement staff members in the interviews was limited. Specifically, of the approximately 120 nonmanagement employees in the 10 district offices who were invited to speak with us, a total of 28 attended the interviews.
To allow any nonmanagement staff who did not participate in our on-site interviews an additional opportunity to share their thoughts on our objectives, we sent an e-mail to all nonmanagement staff at those 10 district offices, inviting them to share their thoughts on specific topics with us by sending an e-mail to a specified GAO e-mail address. Nine staff members responded to our e-mail and provided us with information. The results of our interactions with the 10 district offices cannot be generalized to other SBA district offices. The union representatives we interviewed were from headquarters and the field. To assess the status of SBA’s management challenges related to specific programs, we reviewed annual SBA Office of the Inspector General (OIG) reports on SBA management challenges for fiscal years 2000 through 2015 (the years for which reports were available on the SBA OIG’s website). We reviewed the challenges noted in the fiscal year 2015 report, including the fiscal year in which each challenge was first reported by the OIG. We also reviewed GAO reports issued during this time frame to identify those which dealt with SBA management challenges. Finally, we reviewed information from GAO’s system for tracking agency recommendations to determine which recommendations made to SBA in fiscal years 2010 through 2013 remained open. To evaluate SBA’s strategic planning efforts, we reviewed relevant SBA documents such as its Strategic Plan Fiscal Years 2014-2018 and Fiscal Year 2016 Congressional Budget Justification and Fiscal Year 2014 Annual Performance Report. We compared SBA’s strategic planning and reporting practices with requirements in the Government Performance and Results Act of 1993 (GPRA), as updated by the GPRA Modernization Act of 2010 (GPRAMA). 
Specifically, we first identified GPRA and GPRAMA requirements related to the elements that must be included in a federal agency’s strategic plan, such as a mission statement and goals and objectives, and requirements related to the strategic planning process, such as obtaining stakeholder input. We then reviewed SBA’s strategic plan to determine whether it included the required elements and interviewed SBA officials to determine the process SBA used to develop the strategic plan. We assessed the extent to which SBA met each requirement using three categories. “Met” indicates that, in our judgment, SBA met all or mostly all aspects of the requirement. “Partially met” indicates that it met some but not all or mostly all aspects of the requirement. “Not met” indicates that it did not meet the requirement. Specifically, one GAO analyst identified the strategic planning requirements, reviewed SBA’s practices, and made the initial assessment for each requirement reviewed. A second analyst then verified each of these steps to ensure consistent results. To assess SBA’s human capital management practices, we reviewed SBA documents such as its Strategic Human Capital Plan Fiscal Years 2013-2016, Annual Training Plan (Talent Development Initiative) Fiscal Year 2014-2015, two assessments of its training programs, and standard operating procedures (SOP). We also reviewed SBA’s 2014 Federal Employee Viewpoint Survey results on training and employee performance management. To assess the reliability of these data, we reviewed the methodology used to conduct the survey and SBA’s response rate. We determined that the data we used were sufficiently reliable for the purposes of reporting on employees’ perspectives on training offered by SBA and on SBA’s employee performance management system. 
In addition, we reviewed a list of individuals who had served in senior-level positions at SBA from calendar years 2005 through 2015 (the years for which information on all these positions was available) and determined the number of individuals who served in each position during this time frame. We also compared SBA’s human capital practices in the areas of workforce planning and training with key principles identified in our previous work. Specifically, we reviewed SBA’s human capital documents and interviewed SBA officials to determine SBA’s workforce planning and training practices. We then assessed the extent to which SBA followed each key principle, again using three categories. “Follows” indicates that, in our judgment, SBA was following all or mostly all aspects of the principle. “Partially follows” indicates that it was following some but not all or mostly all of the aspects of the principle. “Does not follow” indicates that it was not following any aspects of the principle. Specifically, one GAO analyst reviewed SBA’s policies and practices and made the initial assessment. A second analyst then verified these steps to ensure consistent results. Because SBA was in the process of updating the SOP on its employee performance management system, we were unable to conduct an assessment of SBA’s employee performance management system to determine whether it was consistent with key principles in employee performance management that we identified in a 2003 report. Instead we reviewed a set of critical elements and performance standards used to evaluate employee performance. Specifically, we reviewed critical elements and performance standards for the following managers and employees in the field: regional administrators, district directors, deputy district directors, economic development specialists, business opportunity specialists, and lender relations specialists. 
We requested these because they are the elements and standards for the staff whom we interviewed during our site visits. In addition, we reviewed one set of generic critical elements and performance standards for SBA employees and one set for SBA managers. SBA officials stated that they could not provide data on the total number of critical elements because the elements are tracked on an individual basis and not across the agency. To assess SBA’s organizational structure, we reviewed prior GAO and SBA OIG reports that discussed, among other things, the effect of the agency’s structure on its human capital management and program oversight. We also examined documentation on changes to SBA’s organizational structure from fiscal year 2005 to 2014 (the period after SBA’s last major reorganization in 2004). Specifically, we requested and reviewed all of the forms that SBA used to document organizational changes that were approved during this period. We also reviewed documentation on SBA’s planned efforts to assess its organizational structure—including its Strategic Human Capital Plan Fiscal Years 2013-2016, guidance implementing its fiscal year 2014 Voluntary Early Retirement Authority (VERA) and Voluntary Separation Incentive Payments (VSIP) programs, and the statement of work for a contractor’s assessment of organizational structure—and compared these plans to federal internal control standards and guidance related to organizational structure. To evaluate SBA’s risk management, we compared SBA’s enterprise risk management practices with GAO’s Standards for Internal Control in the Federal Government and risk management criteria on elements of risk management. Specifically, we reviewed SBA’s limited documentation on its enterprise risk management process and implementation plans for each step of the process and then assessed the extent to which SBA’s process was consistent with the stages of our risk management framework, again using three categories.
“Follows” indicates that, in our judgment, SBA was following all or mostly all aspects of the practice. “Partially follows” indicates that it was following some but not all or mostly all of the aspects of the practice. “Does not follow” indicates that it was not following any aspects of the practice. For acquisition management, we reviewed (1) data on SBA contracts awarded to small businesses from Small Business Goaling Reports for fiscal years 2011-2013 to assess whether SBA met contracting goals in the last 3 years and (2) SBA contract obligations for fiscal year 2014 as reported in the Federal Procurement Data System-Next Generation to determine types of spending. To assess the reliability of these data, we reviewed documentation on the data and assessed them for consistency and completeness. We determined that the data were sufficiently reliable for the purposes of reporting on SBA’s contracting efforts. We also reviewed recent SBA OIG reports on SBA’s acquisition management (including whether the agency’s practices align with guidelines developed by OMB). In addition, we examined SBA’s efforts to improve its acquisition management, including reviewing documentation of its 2014 contract with an outside entity to conduct an assessment of SBA’s acquisition function to determine the scope and methodology of that review. We also reviewed documentation of the results of that assessment. For procedural guidance, we asked SBA to provide us with an inventory of all current SOPs, including the dates of the most recent revisions. In July 2014, SBA provided a list of SOPs maintained on the agency’s internal website. We also reviewed the agency’s external website and found additional SOPs, identifying a total inventory of 153 internal and external SOPs. We reviewed this inventory of SBA’s SOPs to determine when they were last updated and compared SBA’s guidance to federal internal control standards to determine if it met the standards for documentation. 
We also reviewed documentation related to SBA’s fiscal year 2014 efforts to conduct an assessment of the status of all SOPs across the agency. In fiscal year 2014, SBA’s Office of the Chief Operating Officer, Office of Administrative Services began a review of the status of all SOPs, working with the program offices to determine whether any updates were needed. SBA issued a notice requiring all office heads to certify in writing the status of their SOPs. For each SOP, the cognizant office was to note whether (1) it did not require any revision, (2) it was under review, (3) it was being revised, or (4) it was being canceled. If an SOP was deemed to fall within one of the last three categories, the office was to provide the date by which the action would be completed. As a result of that review, SBA created a spreadsheet that flagged some SOPs as outdated and some for cancellation. We reviewed SBA’s spreadsheet to determine the number of SOPs that needed to be revised or canceled or that required no revision. To assess SBA’s progress in implementing high-priority management practices for information technology (IT), we evaluated SBA’s progress on six OMB IT initiatives. We used the relevant sections of recent GAO reports to report on SBA’s efforts on the initiatives, interviewed SBA officials about recent steps that they have taken to implement them, and analyzed SBA TechStat documentation to determine when past TechStat sessions were held and to identify the outcomes of the reviews. We reviewed SBA’s ratings on the IT Dashboard to determine if SBA had held a TechStat for the at-risk investments. To corroborate the data reliability of those ratings, we interviewed SBA officials to determine their process for collecting, updating, and maintaining the data and asked them to verify the data’s completeness and accuracy. We determined that the data were sufficiently reliable for the purposes of reporting on TechStat reviews of at-risk investments. 
We analyzed SBA’s operational analyses to determine if, within the past year, the agency had performed such analyses on all of its major IT investments in the operations and maintenance phase. We conducted this performance audit from March 2014 through September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In this appendix, we provide additional information on the Small Business Administration’s (SBA) budget from fiscal year 2004 through fiscal year 2016 (requested). Specifically, table 6 provides information on total obligations, net outlays, and gross budget authority agency-wide. In addition, table 7 breaks down these amounts according to Office of Management and Budget (OMB) accounts. Finally, figure 8 compares the gross budget authority amounts by OMB account to the total gross budget authority for each of these fiscal years. As of July 2015, 69 of the recommendations that we have made to the Small Business Administration (SBA) were open. See table 8 for a list of these open recommendations by subject area. In fiscal year 2014, the Small Business Administration’s (SBA) Office of the Chief Operating Officer, Office of Administrative Services began a review of the status of all standard operating procedures (SOP), working with the program offices to determine whether any updates were needed. Specifically, SBA issued a notice requiring all office heads to certify in writing the status of their SOPs. For each SOP, the cognizant office was to note if it (1) did not require any revision, (2) was under review, (3) was being revised, or (4) was being canceled. 
As a result of this review, SBA determined that 74 SOPs needed to be revised (see table 9); 31 needed to be canceled (see table 10); and 60 required no revision (see table 11). SBA also determined that it needed to issue 9 additional SOPs (see table 12). In addition to the contact named above, David Powner (Director), A. Paige Smith (Assistant Director), James Sweetman, Jr. (Assistant Director), Deena Richart (Analyst-in-Charge), Gerard Aflague, Emily Chalmers, Elizabeth Curda, Pamela Davidson, Nancy Glover, Meredith Graves, Kaelin Kuhn, John McGrail, Marc Molino, Erika Navarro, Meredith Raymond, William Reinsberg, and Gloria Ross made key contributions to this report.
SBA has provided billions of dollars in loans and guarantees to small businesses. As of March 31, 2015, SBA’s total loan portfolio was about $116.9 billion, including $110.3 billion in direct and guaranteed loans and $6.6 billion in disaster loans. GAO has previously reported on management challenges at SBA. GAO was asked to review SBA management, including whether those challenges were ongoing. This report discusses SBA’s efforts to address management challenges related to specific programs and internal controls. It also looks at challenges in strategic planning, human capital, organizational structure, enterprise risk, procedural guidance, and IT. To do this work, GAO reviewed SBA policies and compared them with federal requirements, key principles for human capital management, and internal control standards. GAO also interviewed officials at SBA headquarters, all 10 regional offices, and 10 of 68 district offices selected on the basis of location and size. The Small Business Administration (SBA) has not resolved many of its long-standing management challenges due to a lack of sustained priority attention over time. Frequent turnover of political leadership in the federal government, including at SBA, has often made sustaining attention to needed changes difficult (see figure below). Senior SBA leaders have not prioritized long-term organizational transformation in areas such as human capital and information technology (IT). For example, at a 2013 hearing on SBA's budget, the committee Chairman stated that SBA's proposed budget focused on the agency's priorities but ignored some long-standing management deficits. This raises questions about SBA's sustained commitment to addressing management challenges that could keep it from effectively assisting small businesses. 
Many of the management challenges that GAO and the SBA Office of Inspector General (OIG) have identified over the years remain, including some related to program implementation and oversight, contracting, human capital, and IT (see figure below). SBA has generally agreed with prior GAO recommendations that were designed to address these issues and other challenges related to the lack of program evaluations. The agency has made limited progress in addressing most of these recommendations but has recently begun taking some steps. A senior SBA official told us that improving human capital management, IT, and the 8(a) program (a business development program) were priorities for the new administrator. For example, he stated that SBA was exploring creative ways to recruit staff and plans to expand SBA One—a database currently used to process loan applications—to include the 8(a) program. Also, SBA has begun addressing some internal control weaknesses that GAO and the SBA OIG identified as contributing to the agency's management challenges. SBA officials noted that the agency had begun to update its standard operating procedure (SOP) on internal controls and planned more revisions after the Office of Management and Budget (OMB) updated its Circular A-123, which is expected to include guidance on implementing GAO's 2014 revisions to federal internal control standards. OMB issued a draft of the revised circular in June 2015 and is reviewing comments it received. GAO makes eight new recommendations designed to improve SBA's program evaluations, strategic and workforce planning, training, organizational structure, enterprise risk management, procedural guidance, and oversight of IT investments. SBA generally agreed with these recommendations and provided additional context. In response, GAO clarified one of its recommendations. GAO also maintains that 69 recommendations it made in prior work have merit and should be fully implemented.
The Department of Defense (DOD) spends about $8 billion annually to provide housing for families of active-duty military personnel. Seeking to provide military families with access to adequate, affordable housing, DOD either pays cash allowances for families to live in private sector housing or assigns families to government-owned or government-leased units. The housing benefit is a major component of the military’s compensation package. DOD Policy Manual 4165.63M states that private sector housing in the communities near military installations will be relied on as the primary source of family housing. About 569,000, or two-thirds, of the military families in the United States live in private housing. These families receive assistance in locating private housing from housing referral offices operated at each major installation and are paid housing allowances to help defray the cost of renting or purchasing housing in local communities. Housing allowances, which totaled about $4.3 billion in fiscal year 1997, cover about 80 percent of the typical family’s total housing costs, including utilities. The families pay the remaining portion of their housing costs out of pocket. The remaining 284,000, or one-third, of the military families in the United States live in government-owned or -leased housing. These families forfeit their housing allowances but pay no out-of-pocket costs for housing or utilities. In fiscal year 1997, DOD spent about $3 billion to operate and maintain government-owned and -leased family housing. In addition, about $976 million was authorized to construct and renovate government family housing units in fiscal year 1997. Unaccompanied and single enlisted personnel in lower paygrades normally are required by service policy to live in government-owned barracks when space is available. Single officers and single senior enlisted personnel usually can choose to live in civilian housing and receive housing allowances. 
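The allowance arithmetic described above can be sketched as follows. This is a minimal illustration: the roughly 80 percent coverage figure is from the report, but the monthly cost amount and the function name are hypothetical assumptions.

```python
# Sketch of the out-of-pocket share under the allowance structure
# described above. Coverage of about 80 percent is from the report;
# the $1,000 monthly cost is a hypothetical illustration.

def out_of_pocket(total_monthly_cost, coverage=0.80):
    """Monthly amount a family pays when allowances cover roughly
    `coverage` of total housing costs, including utilities."""
    return total_monthly_cost * (1 - coverage)

# For a hypothetical $1,000/month total housing cost (rent plus
# utilities), the family pays about $200 out of pocket.
family_share = out_of_pocket(1000)
```

At the roughly 80 percent coverage rate the report describes, every $1,000 of monthly housing cost leaves about $200 for the family to absorb, which is the out-of-pocket burden the later allowance reforms were meant to contain.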
According to DOD officials, the military services face three significant housing problems. First, in March 1998, a DOD official testified before the Congress that about 200,000 of the military-owned family housing units were old, had not been adequately maintained and modernized, and needed to be renovated or replaced. Using traditional military construction (Milcon) financing at current funding levels, DOD has estimated that over $20 billion and 30 to 40 years would be required to accomplish this task. Second, according to DOD estimates, about 15 percent of the military families living in private housing are considered unsuitably housed primarily because of the high cost of the housing in relation to their housing allowances. Third, DOD officials have stated that most of DOD’s 400,000 barracks spaces also are old, do not meet current suitability standards, and need major improvements estimated to cost about $9 billion using traditional funding methods. DOD has undertaken several initiatives to address these problems, including requests to the Congress for more housing construction funding and increased housing allowances to make privately owned housing more affordable to military members. The Congress approved DOD’s request for a new housing allowance program starting in January 1998. The new allowance program replaced the Basic Allowance for Quarters and Variable Housing Allowance with a single allowance designed to better match the allowance amount with the cost of housing in each geographic area. Under the new program, housing allowances will be determined on the basis of costs for suitable civilian housing in each geographic area and allowance increases will be tied to growth in housing costs. According to DOD, the new allowance program should result in higher allowances in expensive housing areas and could result in lower allowances in some low-cost housing areas. 
The higher allowances in some areas could result in increasing the quantity of housing that is considered affordable to military families. Under the old program, housing allowances often did not keep up with changes in housing costs and in many cases servicemembers paid higher out-of-pocket costs than originally intended. The new allowance program is being phased in over a 6-year period because of budget considerations and the desire to keep any allowance reductions gradual. To improve its existing family housing and barracks inventory more economically and at a faster rate, DOD concluded that a new initiative was needed. The new initiative, known as the Military Housing Privatization Initiative, called for new authorities to allow and encourage private sector financing, ownership, operation, and maintenance of military housing. In May 1995, DOD requested the Congress to approve a variety of new authorities that, among other things, would allow DOD to (1) provide direct loans and loan guarantees to private entities to acquire or construct housing suitable for military use, (2) convey or lease existing property and facilities to private entities, and (3) pay differential rent amounts in addition to the rent payments made by military tenants. The new authorities would also allow DOD to make investments, both limited partnership interests and stock and bond ownership, to acquire or construct housing suitable for military use and permit developers to build military housing using room patterns and floor areas comparable to housing in the local communities. The authorities could be used individually or in combination. Appendix I contains a complete list and description of the authorities. The Congress approved the new authorities, and the initiative was signed into law on February 10, 1996. However, the Congress limited the new authorities to a 5-year test period to allow DOD to assess their usefulness and effectiveness in improving the military housing situation. 
Based on the results of the test, the Congress will consider whether the authorities should be made permanent. The basic premise behind the initiative is for the military to take advantage of the private sector’s investment capital and housing construction expertise. DOD has noted that the private sector has a huge pool of housing investment capital. By providing incentives, such as loan guarantees or co-investments of land or cash, the military can encourage the private sector to use private investment funds to build or renovate military housing. Use of private sector capital can reduce the government’s near-term outlays for housing revitalization by spreading costs, specifically increased amounts for housing allowances, over a longer term. DOD’s goal is to have the private sector invest at least $3 in military housing development for each dollar that the government invests. By leveraging government funds by a minimum of 3 to 1, the military can stretch its available construction funds so that significantly more housing can be revitalized in comparison with traditional Milcon financing. DOD officials stated that, with leveraging, the housing problem could be solved with current funding levels in only 10 years. DOD also noted that privatization can reduce the average cost of military housing through the use of commercial specifications and standards and local building codes and practices. A DOD housing official stated that the military’s cost for a house built with Milcon funding—about $135,000, excluding land—is substantially higher than private industry averages, primarily due to government procurement practices and overly detailed specifications. Under Milcon financing, contractors normally are faced with specifications, standards, and housing sizes different from industry or local practices. As a result, some contractors do not compete for these jobs and those that do often raise their prices to cover the higher costs associated with the requirements. 
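The leveraging arithmetic behind DOD's goal can be sketched as follows. The minimum 3-to-1 ratio is from the report; the dollar amounts and function name are hypothetical illustrations.

```python
# Sketch of DOD's minimum 3-to-1 leveraging goal. The ratio is from
# the report; the dollar figures are hypothetical assumptions.

def total_investment(government_dollars, leverage_ratio=3.0):
    """Total housing investment when the private sector contributes
    at least `leverage_ratio` dollars per government dollar."""
    return government_dollars * (1 + leverage_ratio)

# Each government dollar supports at least $4 of total development,
# so a hypothetical $100 million in government funds could support
# roughly $400 million in construction and revitalization.
total = total_investment(100e6)
```

This quadrupling of buying power is the mechanism behind DOD's claim that the same funding levels could solve the housing problem in 10 years rather than 30 to 40.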
According to DOD, use of commercial building standards and practices can also reduce costs by increasing competition and by reducing developer risk because the homes are more marketable to nonmilitary families, if not used by servicemembers and their families. In September 1995, in anticipation of the enactment of the new authorities, DOD established the Housing Revitalization Support Office (HRSO) to facilitate implementation of the initiative. With a staff of 16 full-time personnel and support from consultants, HRSO is responsible for overseeing and assisting the services in using the new initiative. The individual services are responsible for nominating potential privatization projects; working with HRSO in reviewing projects and recommending which authorities should be used; preparing requests for proposals; and managing the contract competition, award, and implementation processes. Under the privatization initiative, the Office of Management and Budget (OMB) and DOD have agreed on guidance regarding the amount that should be recognized and recorded as an obligation of DOD at the time a privatization agreement is signed. The guidance refers to this process as scoring. In this report, we use the word “scoring” to refer to the application of this guidance to agreements made under the privatization initiative. Funding for the privatization initiative is accomplished through two funds established by the authorizing legislation—the DOD Family Housing Improvement Fund and the DOD Military Unaccompanied Housing Improvement Fund. The funds receive sums by direct appropriations and transfers from approved Milcon projects and from proceeds from the conveyance or lease of property or facilities. The two funds are used to implement the initiative, including the planning, execution, and administration of privatization agreements. The two funds must be managed separately and amounts in the two funds cannot be commingled. 
Table 1.1 shows the sources and uses of funds in the DOD Family Housing Improvement Fund for fiscal years 1996 and 1997. No appropriations were made to the fund for fiscal year 1998. In fiscal year 1997, $5 million was appropriated for the DOD Military Unaccompanied Housing Improvement Fund. About $100,000 from this fund was used to pay for an Air Force study on developing privatized unaccompanied housing projects. Because it represents a new approach to improving military housing, we reviewed the implementation of the Military Housing Privatization Initiative to (1) measure progress to date, (2) assess issues associated with privatizing military housing, and (3) determine whether the initiative is being integrated with other elements of DOD’s housing program. We performed work at HRSO and the DOD offices responsible for housing management and housing allowances. We also performed work at the Air Force, the Army, the Navy, and the Marine Corps headquarters offices responsible for implementing the initiative and at the OMB office responsible for reviewing privatization agreements. At each location, we interviewed responsible agency personnel and reviewed applicable policies, procedures, and documents. To measure implementation progress and assess issues associated with privatizing military housing, we reviewed DOD’s and the services’ implementation plans, compared the plans to progress made, and explored reasons for differences. We discussed potential barriers and concerns about the privatization initiative with DOD and service officials to obtain their views and to determine how they were dealing with the concerns. We also reviewed estimated cost savings from the initiative and examined the assumptions and estimates the services used in preparing life-cycle cost analyses for proposed privatization projects at Fort Carson, Colorado, and Lackland Air Force Base, Texas. 
In addition, we visited Navy privatization projects at Corpus Christi, Texas, and Everett, Washington, that were implemented under a previous initiative to test the use of limited partnerships to improve housing in the Navy. At each site, we toured the new housing units, reviewed occupancy statistics and rental costs, and discussed with local service officials their views of the initiative. To determine whether the new initiative is being integrated with other elements of DOD’s housing program, we reviewed DOD’s and the services’ housing policies, programs, initiatives, and plans. We also examined previous reports and studies related to military housing issues, reviewed DOD and service housing organization and management structures, and discussed the need for well-integrated housing plans with DOD, service, and OMB officials. We conducted our review between June 1997 and March 1998 in accordance with generally accepted government auditing standards. Initially optimistic about how quickly the new privatization authorities could solve the housing problem, DOD officials now recognize that implementation will be slower than expected. For a variety of reasons, final privatization agreements have not been signed for any proposed housing projects initiated since the authorities were signed into law in February 1996. DOD officials believe that progress may speed up after the first few projects are approved; however, each project is unique and will require individualized planning and negotiation. In 1997, DOD revised its initial goal for solving the DOD housing problem in 10 years by delaying the target 4 years, to fiscal year 2010. Other issues, such as potential savings from privatization, risks associated with long-term privatization agreements, and use of the authorities to improve barracks, are also of concern and will require continued monitoring and attention from DOD management. 
In May 1995, DOD first announced its proposal to use private sector financing and expertise to improve military housing. In a May 8, 1995, press release, DOD stated that the quality of military housing had declined for many years because of a lack of priority and because earlier attempts at solutions ran into regulatory or legislative roadblocks. However, with congressional approval of new authorities to acquire help from the private sector, DOD stated that its 30-year housing problem could be solved in 10 years. DOD officials repeated this claim in subsequent testimony before several congressional committees. Anticipating congressional approval of the initiative, the Secretary of Defense established a fiscal year 1996 goal to use the new authorities to execute projects affecting at least 2,000 family housing units and 2,000 barracks spaces. During congressional hearings in March 1996 and 1997, DOD officials stated that about 8 to 10 projects with up to 2,000 family housing units should be awarded within the next year and that the goal was to increase the number of units planned for construction and revitalization to 8,000 in fiscal year 1997 and to 16,000 units in fiscal year 1998. This planned ramp-up would have to actually occur and continue for DOD to solve its housing problem within the initial 10-year time frame, by fiscal year 2006. Although the initial goals were aggressive and DOD actively pursued implementation of the new initiative, progress has been slow. Since the authorizing legislation was signed in February 1996 through the end of February 1998, no new agreements were finalized to build or renovate military housing. In January 1998, DOD was actively considering more than a dozen projects for privatization and many others were in the early planning stages. However, only one, Lackland Air Force Base, apparently is close to contract signing, which is the beginning point for implementing housing improvements. 
Appendix II shows details of the projects being considered in January 1998. DOD officials often point to two Navy projects as the first examples of improvements under the initiative. The projects—404 new units at Corpus Christi, Texas, and 185 new units at Everett, Washington—were constructed off base on private property under limited partnership agreements between the Navy and private developers. However, the authority for these projects was not the legislation that established the initiative, but was legislation approved in October 1994. This legislation gave only the Navy authority to test the use of limited partnerships in order to meet the housing requirements of naval personnel and their dependents. Appendix III provides details on the Navy’s limited partnership agreements at Corpus Christi and Everett. The proposed Fort Carson privatization project illustrates the slow progress in implementing an agreement under the initiative. In October 1993, the Army requested $16.5 million in Milcon funds to replace 142 family housing units at Fort Carson in fiscal year 1995. The project was approved, but construction did not begin because the Army became interested in leveraging the funds through privatization to finance a much larger housing improvement effort at the installation. A HRSO team visited Fort Carson in December 1995, and after study and analysis, concluded in June 1996 that government housing at Fort Carson was a good candidate for privatization. The proposed privatization project—to construct 840 new family housing units, revitalize 1,824 existing units, and operate and maintain all of the units for a 50-year term—was approved in August 1996, and the request for proposal was issued in December 1996 for offers from the private sector to accomplish the requirements of the project. A contractor was selected in July 1997, and final negotiations began prior to contract award. 
On February 10, 1998, DOD notified the Congress of the Army’s intent to transfer Fort Carson’s 1995 Milcon appropriation into the DOD Family Housing Improvement Fund and to award the contract. However, in April 1998, as the result of litigation, the Army decided to cancel the proposed award, reexamine the acquisition process for the Fort Carson project, and study corrective action alternatives. Although DOD officials did not estimate when these steps will be completed or when the project will again be ready for contract award, DOD estimated an additional 4 to 5 years will be needed after the award to finish construction and revitalization of the housing itself. In July 1997, DOD revised its target date for solving its housing problem when it issued planning guidance for fiscal years 1999 through 2003. The guidance directed the military services to plan to revitalize, divest through privatization, or demolish inadequate family housing by or before fiscal year 2010, 4 years later than the original target. According to DOD officials, privatization implementation has been slower than expected primarily because the initiative represents a new way of doing business for both the military and the private sector. Initially, HRSO had to develop protocols for site visits and new tools and models to assess the financial feasibility of using the various authorities to help solve the housing problem at an installation. Then, as detailed work began on developing potential projects, many legal issues had to be addressed relating to the applicability of the Federal Acquisition Regulations and the Federal Property Regulations to the projects. Also, new financial and contractual issues had to be resolved such as establishing loan guarantee procedures to insure lenders against the risk of base closure, downsizing, and deployment; developing a process to provide direct loans to real estate developers; and creating documents for conveying existing DOD assets to developers. 
HRSO officials stated that obtaining concurrence on the details of the proposed Fort Carson and Lackland project agreements from lawyers representing the government, the developers, and the potential lenders has been a slow process. In addition, the officials noted that because the initiative has had high visibility both within and outside of DOD, much care and attention were devoted to ensuring that no mistakes were made as the initial agreements were developed. However, once the first one or two deals are completed, the officials believe that subsequent deals should proceed much faster. Another factor that slowed implementation was initial disagreement between DOD and OMB on how projects that used the various authorities should be scored. Discussions between the agencies continued for several months until a written agreement was adopted on June 25, 1997, which provided detailed scoring guidance applicable to the first 20 privatization projects. After these projects are completed, the agreement will be reviewed to determine whether any changes are needed. Privatization allows DOD to address its military housing problem more quickly by securing private sector financing of housing improvements. However, whether privatization also saves the government money in the long term and, if so, how much, are questions that have not been answered. Under traditional Milcon financing of military housing, the military pays the initial housing construction or renovation costs and then pays the annual costs to operate, maintain, and manage the units. The military does not pay monthly housing allowances since occupants of the units forfeit their allowances when living in government-owned housing. Under most proposed privatization projects, the military initially uses some funds to secure an agreement with a private developer and then pays monthly housing allowances to the servicemembers who occupy the housing, since the housing is not government-owned. 
The servicemembers use their housing allowances to pay rent to the developer. In addition, under most privatization options, the military continues to pay some housing management costs for servicemember referral services and for contract oversight. Thus, although the exact budgetary consequences from use of the various privatization authorities are not known, it appears that privatization largely results in a shift in funding from military housing construction, operations, and maintenance accounts to military personnel accounts to pay for additional housing allowances. Performing accurate cost comparisons between privatization and Milcon alternatives is difficult because the comparisons involve many variables and assumptions. However, one key issue in the comparisons is whether the housing under each alternative is the same. To illustrate, developers of projects under the initiative might use local building practices and standards to construct or revitalize housing that may be different in size and amenities from that constructed under Milcon building standards and specifications. For example, HRSO officials stated that privatized housing for married junior enlisted personnel based on local standards may result in garden-style apartments with no carports or garages. Normally, Milcon housing for married junior enlisted personnel results in larger townhouse type units with a carport or a garage. Because of such differences, a cost comparison between Milcon and privatization alternatives may not always result in an analysis of comparable housing. HRSO and service analyses of potential privatization projects have primarily focused on the financial feasibility of the deals. In other words, the analyses attempt to determine whether deals can be made that are attractive to developers while still meeting DOD’s leveraging goals. 
HRSO did not initially focus on comparing the long-term or life-cycle costs of a potential privatization project with the costs to perform the same project using traditional Milcon financing. Nevertheless, prior to finalizing a privatization agreement, the services perform a life-cycle analysis comparing project costs using both alternatives. HRSO, however, has not provided guidance for how these analyses should be completed, including what costs to consider and what assumptions to use. As a result, the services’ life-cycle analyses may not be prepared consistently and may use assumptions and estimates that do not result in reliable cost comparisons. HRSO officials stated that this is a concern and that they have tasked a consultant to develop a standard methodology for performing the analyses. Although milestones and a specific implementation date have not been established, HRSO officials stated that the services will be required to use the standardized methodology when it is completed and approved by HRSO. We reviewed the services’ life-cycle cost analyses for two proposed privatization projects at Fort Carson and Lackland Air Force Base to compare estimates of the government’s long-term costs for housing financed with Milcon funds and through the privatization initiative. The Fort Carson analysis, which was included in the February 10 congressional notification of DOD’s intent to enter into the Fort Carson agreement, estimated that over the 50-year term of the agreement privatization will cost about $197 million, or 24 percent, less than Milcon. The Lackland Air Force Base analysis, which might be revised before the contract is awarded, estimated that privatization will cost about $42 million, or 29 percent, less than Milcon. 
In our review, we made adjustments to the services’ analyses because some project costs had been excluded, some cost estimates were not based on actual budgeted amounts, and the 1998 OMB discount rate was not used to adjust for the time value of money. We made no adjustments for possible differences in the size or amenities of the housing resulting from each alternative. As shown in table 2.1, our review showed that although privatization remained less costly for each project, the estimated cost savings to the government were considerably smaller than the services’ estimates—about $54 million, or about 7 percent, at Fort Carson and about $15 million, or about 10 percent, at Lackland. Appendixes IV and V provide details on the assumptions used in DOD’s and our review of Fort Carson and Lackland life-cycle cost estimates. With no other cost comparisons to review at this time and with each future privatization agreement having unique circumstances and costs, it is difficult to draw conclusions on the extent of cost savings available from the privatization initiative. However, Army and Air Force officials have expressed the view that long-term savings to the government through the privatization initiative may be minimal. For example, an Army housing official stated that, although privatization can help solve the Army’s housing problems faster than Milcon, privatization does not significantly reduce the Army’s total costs because reduced family housing costs are offset by higher personnel costs, which are used to pay for additional housing allowances. Also, the Air Force completed a hypothetical analysis comparing life-cycle costs to revitalize 670 family housing units through privatization and Milcon. The analysis showed that there would be less than a $1-million difference between the two alternatives in total costs to the government over the life of the project. 
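The adjustments described above, in particular applying the OMB discount rate to each alternative's cost stream, amount to a present-value comparison. A minimal sketch follows; the cost streams and the discount rate are hypothetical, not the actual Fort Carson or Lackland figures or the 1998 OMB rate.

```python
def present_value(cash_flows, discount_rate):
    """Discount a list of annual costs (year 0 first) to present value."""
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(cash_flows))

# Hypothetical 50-year cost streams, in millions of dollars.
# Milcon: large up-front construction outlay, then annual O&M costs.
milcon = [200.0] + [8.0] * 50
# Privatization: small up-front contribution, then annual allowance payments.
privatized = [20.0] + [12.0] * 50

rate = 0.057  # illustrative discount rate only
savings = present_value(milcon, rate) - present_value(privatized, rate)
```

Because privatization trades a large up-front outlay for a long stream of annual payments, the result of such a comparison is sensitive to the discount rate chosen, which is one reason GAO adjusted the services' analyses to use a common OMB rate.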
According to DOD officials, most of the potential privatization projects now under consideration call for long-term agreements between DOD and the developers. Many proposed deals are for 50 years with an option for another 25 years. HRSO officials stated that long-term agreements are needed to make the proposed projects financially feasible by providing a long-term cash flow to cover the developers’ investment costs for new construction and revitalization. To illustrate, the privatization proposal for Fort Carson calls for a whole-base deal in which the developer will revitalize existing units, construct new units, and operate and maintain all units for 50 years. Land related to the project will be leased to the developer. At the end of the 50-year term, providing that the government does not exercise an option to extend the agreement for another 25 years, the developer will be required to vacate the premises and may be required to remove the housing. The developer is expected to invest about $220 million to construct and revitalize the units and will recoup this cost, as well as the operating and maintenance costs, excluding utilities, from the rents paid by the occupants over the term of the agreement. Military families have first preference in renting the units and will pay rent equal to the members’ housing allowances. If military families do not rent the units, the units can be rented to civilians. DOD plans to provide a loan guarantee for funds that the developer borrows to construct and revitalize the units. However, the loan guarantee only covers the risks of base closure, deployment, and downsizing. In the event of a base closure default, the government could be obligated to pay off the loan and assume ownership of the project for disposal. Long-term privatization agreements present several concerns that require careful consideration. 
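As a rough illustration of why a 50-year term matters to financial feasibility, one can compute the level annual payment needed to amortize the developer's up-front investment at an assumed borrowing rate. Only the $220 million investment and the 50-year term come from the report; the borrowing rate and unit count below are hypothetical.

```python
def level_payment(principal, rate, years):
    """Level annual payment that fully amortizes `principal` over `years`."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

investment = 220_000_000   # developer's construction/revitalization cost
rate = 0.07                # assumed borrowing rate (hypothetical)
term = 50                  # years in the agreement

annual_debt_service = level_payment(investment, rate, term)

# Spread over a hypothetical 2,600 units to get the rent each unit must
# contribute per year just to service the investment, before O&M costs.
units = 2_600
rent_per_unit_per_year = annual_debt_service / units
```

A shorter term would raise the required annual payment, which is consistent with HRSO's position that long-term cash flows are needed to make the deals attractive to developers.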
For example, before the military invests in a long-term housing project through a privatization agreement or traditional Milcon funding, the military should know with a high degree of certainty the installation’s future housing needs. To do this, the military must first determine whether the installation will be needed in the future; specifically, whether the installation is a likely candidate for closure during any future reductions in military infrastructure. If the installation is predicted to be needed, then the military must forecast (1) the installation’s future mission, military population, and family housing requirement; (2) future private housing availability and affordability in the local community; and (3) future military family housing preferences for on-base or off-base housing. According to several service officials, accurate forecasts of these variables cannot be assured beyond a 3- to 5-year period. Yet, without long-term assurance that the privatized housing will be needed, risks increase that the service will not need all, or any, of the housing over the term of the agreement. Another concern associated with long-term privatization agreements is the potential for poor performance or nonperformance by the contractors. A major concern, particularly for on-base privatization projects, is whether contractors will perform housing repairs, maintenance, and improvements in accordance with the agreements. Although maintenance standards, modernization schedules, required escrow accounts, and other safeguards will be included in the agreements, enforcing the agreements could be difficult, time-consuming, and costly. In an April 1997 report on the privatization initiative, the Center for Naval Analyses discussed concerns with long-term agreements. 
The report noted that when rents are fixed at levels other than market rates, such as in the proposed Fort Carson and Lackland agreements where rent equals a member’s allowance, a contractor has little economic incentive to maintain the property. The contractor can increase profits by limiting maintenance and repairs and can cut costs by hiring less qualified managers and staff and using inferior supplies. In short, under fixed-rent arrangements, contractors may have an incentive to cut services in ways that, although difficult to predict, could erode the quality of life for servicemembers. The report also noted that long-term agreements contain disincentives that can occur late in the agreement. For example, the report stated that the long-term financial incentive for the developer during the last 20 years may be to disinvest so that the value of the physical assets foregone at the end of the term has been drained by use. Further, if the value of the units declines and military families do not rent the units, the potential exists for civilians to move on base, paying lower rents and creating an on-base slum. Privatization agreements provide for civilians renting the housing units if they are not rented by military families. Long-term agreements increase the potential that civilians will eventually live on base. For example, over a period of years, housing allowances could increase and more community housing could become available, making it more likely that military families would choose to live off base. In this circumstance, the contractor could rent vacancies to civilians. In some locations, installation commanders may welcome civilians living on base. However, in other locations, the civilians may not be welcomed. Marine Corps officials stated that most Marine installation commanders did not want civilians living on base because of security reasons and because of the tradition of having a military housing community available to members and their families. 
In addition to possible security concerns at some installations, the prospect of civilians living on base also raises some questions that have not been fully answered. For example, if civilians rented privatized housing units on base, would the government be required to pay education impact aid to the community for each civilian child, and would law enforcement responsibilities be more complicated because both local community and base police could be involved in matters related to on-base civilian tenants? According to DOD and service officials, there would be little financial advantage in using privatization to improve unaccompanied housing. The primary problem lies in the services’ mandatory assignment policies for single junior enlisted personnel and the budgetary scoring impact from mandatory assignments. The current policy in each service requires mandatory assignment of single junior enlisted members to the barracks, provided that space is available. According to DOD officials, most military leaders strongly support this policy because they believe that such assignment provides for military discipline and unit integrity. However, in accordance with the guidance established for recording obligations under the privatization initiative, mandatory assignment of military personnel to privatized housing constitutes an occupancy guarantee, which results in a government obligation to pay rental costs over the entire term of the agreement. Thus, when a privatization project includes an occupancy guarantee, DOD must set aside funds to cover the value of this guarantee up front. Because the funds required to cover the guarantee could approximate the amount of funds required under traditional military construction financing, funding for a privatized barracks project would not meet DOD’s goal of having the private sector invest at least $3 for each dollar the government invests. 
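The scoring problem for barracks can be sketched numerically: when an occupancy guarantee must be scored up front, the government's scored contribution grows and the leverage ratio can fall below DOD's goal. Only the 3-to-1 goal comes from the report; all dollar figures below are hypothetical.

```python
def leverage_ratio(private_investment, government_scored):
    """Private dollars invested per government dollar scored up front."""
    return private_investment / government_scored

private_investment = 60_000_000   # developer's share of a hypothetical project
seed_contribution = 15_000_000    # government's direct up-front contribution

# Family housing (no mandatory assignment): only the seed money is scored.
family_ratio = leverage_ratio(private_investment, seed_contribution)

# Barracks (mandatory assignment creates an occupancy guarantee): the value
# of the guaranteed rent stream must also be scored up front.
guarantee_value = 45_000_000      # hypothetical scored value of the guarantee
barracks_ratio = leverage_ratio(private_investment,
                                seed_contribution + guarantee_value)

meets_goal = barracks_ratio >= 3.0  # DOD's $3-private-per-$1-government goal
```

With these illustrative numbers the family housing project leverages 4-to-1, while the identical barracks project falls to 1-to-1 once the guarantee is scored, which is the mechanism the report describes.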
This issue is not a problem in military family housing because mandatory assignments normally are not made. In most cases, married members in all paygrades can decline government-owned housing, if available, and decide where they want to live. Because installation commanders do not appear to be willing to change the current policy regarding barracks assignments, privatizing unaccompanied housing does not appear to be a financially viable alternative. An additional barrier to privatized barracks cited by DOD officials is a lack of funding to pay for increased housing allowances. If privatized, occupants would begin receiving housing allowances to pay rent. In family housing, there is a separate budget account for housing operations and maintenance that can provide a funding source to help pay for increased allowances. However, barracks operations and maintenance is not funded by a similar separate account. Instead, barracks operations and maintenance is funded from the overall base operating budget. According to service officials, this budget account often is underfunded and, therefore, does not have sufficient funds to help pay for increased housing allowances for unaccompanied personnel. DOD officials are aware of the issues discussed in this chapter. Since the beginning of the privatization initiative, DOD has attempted to speed implementation and address issues associated with potential savings from privatization, risks associated with long-term agreements, and using the authorities to improve barracks. However, the initiative does represent a new way of doing business, and DOD has been appropriately deliberate in its implementation. As implementation continues and progress is made, continued management attention can help ensure that benefits from the initiative are realized, potential risks are minimized, and program changes are adopted when needed. 
DOD has already recognized the need to standardize the methodology the services use in preparing life-cycle analyses comparing costs of privatization and Milcon alternatives. However, DOD needs to ensure that a standardized methodology is developed and implemented as quickly as possible. Without a standard methodology, DOD officials cannot be assured that the services’ estimates of cost differences between the two alternatives for proposed projects are consistent and reliable. We recommend that the Secretary of Defense expedite HRSO’s effort to develop a standardized methodology for comparing life-cycle costs of proposed privatization projects with military construction alternatives. This action should include establishing and monitoring milestones for the development and implementation of the methodology. In commenting on a draft of this report, DOD noted that the time it has taken to initiate the program was appropriate and necessary to resolve critical program issues that will ensure timely and effective implementation of all subsequent projects. DOD stated that proceeding more rapidly would have created major long-term problems for the program. We are not suggesting that DOD should have moved more quickly to implement this new program. As noted in our conclusion, the initiative does represent a new way of doing business, and many issues needed to be resolved. Our intent was simply to factually report on the program’s implementation. The fact remains that DOD established several goals for the program, which included having 16,000 units planned for construction by fiscal year 1998. To date, no units have been constructed or revitalized, and it is unlikely that any will be completed before the end of fiscal year 1998. 
Moreover, although DOD expects implementation to accelerate once it completes its first projects, we believe it is important to recognize that each proposed project will come with its own unique circumstances and that it may be unrealistic to assume that the program can be greatly accelerated. With respect to life-cycle cost comparisons of family housing construction under Milcon and privatization, DOD noted that although our life-cycle cost savings estimates under privatization for Fort Carson and Lackland Air Force Base are less than DOD’s estimates, the savings are still significant. DOD also stated that the type of economic analysis we used to compare the two alternatives tends to obscure an important underlying reality—that family housing military construction funding is not available at the levels estimated in the comparisons. We did not develop a unique cost analysis for these two projects but rather used the cost analysis developed by the military services. As we stated in our report, we made adjustments to the services’ analyses because some project costs had been excluded, some cost estimates were not based on actual budgeted amounts, and the 1998 OMB discount rate was not used to adjust for the time value of money. Also, the fact that DOD has required a life-cycle cost comparison for all proposed projects suggests that it, too, believes that life-cycle cost analyses are necessary to accurately evaluate housing alternatives. Without a standardized life-cycle cost methodology in place, privatization projects could be undertaken that are more costly than other alternatives. Our intent was simply to provide an independent life-cycle cost analysis as a check against the DOD estimates. Lastly, we agree that military construction funds would not likely be available at the same levels available under privatization since the privatization initiative seeks to leverage government funds by a minimum of three to one. 
DOD partially concurred with our recommendation regarding a standardized methodology for comparing life-cycle costs of proposed projects under both alternatives. DOD agreed that a consistent presentation of life-cycle cost comparisons is desirable and necessary and stated that it is developing such a standard methodology for application to all future projects. Our recommendation, however, was aimed at expediting this effort, and DOD did not provide specific information concerning a schedule for developing and implementing its standardized methodology. In view of the large number of projects that the services have proposed for privatization, we believe that adopting a standardized methodology as soon as possible is important. Use of a standardized methodology across service lines would provide a consistent way of comparing costs and permit more informed decisions about the relative merits of housing alternatives. The privatization initiative is only one of several tools, including housing allowances and traditional military construction, available to meet the housing needs of servicemembers and their families. To be most effective, the initiative needs to be integrated with the other tools and elements of an overall housing strategy. For example, to maximize the advantages from the initiative and minimize total housing costs, privatization needs to be part of a strategy that ensures (1) accurate determinations of housing needs and the ability of the local communities to meet these needs at each installation, (2) maximum use of private sector housing in accordance with DOD housing policy, and (3) coordinated decisions on the structure of housing allowances and housing construction. Although DOD and the services have tended to view and manage these elements separately, rather than as part of a well-integrated strategy, DOD has recently taken some steps to improve planning for eliminating inadequate family housing. 
However, to optimally address housing needs, additional steps can be taken to develop comprehensive plans that integrate all elements of DOD’s housing program. Foundational to an integrated housing plan is a process that accurately determines the services’ housing needs and the ability of the local communities to meet those needs at each installation. Accurate requirements analyses can help ensure that government housing, whether Milcon or privatized, is provided only at installations where the local communities cannot meet the military’s family housing needs, as specified by DOD policy. However, our prior work and the work of others have found significant, long-standing problems in the processes the services use to determine their housing requirements. For example, in our 1996 report on military family housing, we noted that DOD and the services relied on housing requirements analyses that (1) often underestimated the private sector’s ability to meet family housing needs and (2) used methodologies that tended to result in a self-perpetuating requirement for government housing. Our 1996 evaluation of the housing requirements analyses for 21 installations showed that methodology problems understated the ability of the private sector to meet military needs at 13 of the installations. The Congressional Budget Office, the Center for Naval Analyses, and others have reported similar problems with the services’ housing requirements determination processes. In our report, we recommended that DOD revise the housing requirements determination process by considering the results of an on-going DOD Inspector General’s review of the services’ requirements processes. In response to our recommendation, DOD stated that it would consider the results from the Inspector General’s review and would implement accepted recommendations. The Inspector General’s report was issued in October 1997. 
The report stated that “DOD and Congress do not have sufficient assurance that current family housing construction budget submissions address the actual family housing requirements of the Services in a consistent and valid manner.” The Inspector General recommended developing a DOD standard process and standard procedures to determine family housing requirements. In response, DOD officials stated that a working group, including representatives from each service, was convened in December 1997 to address the problems in the housing requirements determination process. However, milestones for the working group and for implementing improvements to the requirements process had not been developed at the time of our review in March 1998. Integrated housing plans founded on accurate requirements determinations can help ensure implementation of DOD’s policy of relying first on existing private sector housing to meet the military’s family housing needs. Implementation of this policy has been the most economical form of privatization. When servicemembers are paid housing allowances and families live in suitable private housing in local communities, the government’s cost for housing is minimized and the military is effectively out of the housing business. To illustrate, in our 1996 housing report, we compared the government’s costs to provide housing for a military family in government-owned and private sector family housing units in fiscal year 1995. The comparison showed that the government spent an average of $4,957 less for each family that lived in private sector housing. The difference resulted because a typical family living in private housing paid $2,016 of its housing costs out of pocket and the government paid $1,416 less in education impact aid because private housing is subject to local taxes. 
The remaining amount represented the estimated difference in the annual cost of a housing unit constructed, operated, and maintained by the military and a unit constructed, operated, and maintained by the private sector. There are other advantages to relying on private housing. In the current environment of constrained defense budgets and DOD’s requests for future rounds of base closures, the short-term flexibility offered by maximum use of private sector housing appears preferable to the long-term commitments required by Milcon and most privatization agreements. Existing private sector housing also can offer military members a greater selection of housing options to fit their needs instead of limiting them to what is available in military housing. The services have not always maximized use of existing private sector housing in accordance with the DOD policy. In our 1996 report, we stated that the communities surrounding many military installations could meet thousands of additional family housing needs. For example, the Army reported in 1996 that over 34,000 government family units at 59 Army installations were occupied but were considered surplus—meaning that the communities near these installations had affordable housing available that could meet these requirements. Similarly, the Air Force reported that over 4,000 government units at 13 Air Force installations were surplus. The Navy and the Marine Corps did not accumulate comparable housing information on their installations. We are not suggesting that the scope of any planned privatization project is not justified. A sufficient quantity of affordable private sector housing is not available at many U.S. military installations. We do believe, however, that long-range, integrated housing plans can provide the focus needed to ensure that maximum use is made of civilian housing before new investments are made in military housing. 
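The fiscal year 1995 comparison cited above can be checked arithmetically: subtracting the family's out-of-pocket share and the avoided education impact aid from the $4,957 total leaves the annual construction and operations cost difference the report describes. The three figures below are taken directly from the report.

```python
# Figures from the fiscal year 1995 comparison cited in the report.
total_difference = 4_957   # government's average annual savings per family
out_of_pocket = 2_016      # paid by the family living in private housing
impact_aid = 1_416         # education impact aid avoided (private housing
                           # is subject to local taxes)

# The remainder is the estimated annual cost difference between a unit
# constructed, operated, and maintained by the military and one provided
# by the private sector.
construction_ops_difference = total_difference - out_of_pocket - impact_aid
```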
In particular, when local communities can meet additional military family housing requirements, this focus can ensure that government housing units are closed when the units reach the end of their economic life rather than renovated or replaced through Milcon or privatization. By ensuring that privatization authorities are used only where needed, the military’s risks are reduced and costs are minimized for incentives to private developers, education impact aid, on-base housing utilities, and police and fire support for on-base housing. Providing comprehensive housing referral services to servicemembers has proven to be an effective means of promoting greater use of existing private sector housing. Effective referral services that result in placing more military families in suitable private sector housing could reduce the need for new construction, whether it is accomplished through privatization or Milcon. DOD policies currently require each installation to assist servicemembers and their families in finding suitable private housing in the local communities. The Navy, however, has adopted a more aggressive, or enhanced, approach to housing referrals to help families find suitable housing. According to Navy officials, the Navy has pursued enhanced housing referral services since 1994. Under this approach, Navy housing officials work with local landlords and apartment managers to obtain preferences for military families such as reduced rental rates, waiving of some rental fees and deposits, and unit set-asides in which certain vacancies are offered to military families before they are offered to civilians. For example, Navy housing officials at Everett, Washington, stated that they had signed agreements with 39 housing complexes that had resulted in providing affordable housing for about 350 servicemembers and their families. 
This approach also includes welcome centers, which servicemembers can visit to obtain detailed information on area housing and receive personal assistance in securing suitable housing. DOD officials stated that the Navy’s approach has been successful and probably would be beneficial if adopted by the other services. However, the officials stated that there is no current initiative to implement enhanced referral programs in the other services. Long-range, integrated plans can emphasize the use of housing allowances as a key tool in addressing the military’s housing problem. Adequate housing allowances can help military members and their families secure suitable housing in the local communities, reducing the need for on-base housing. However, because use of allowances to address the services’ housing problems can directly affect the use of other tools, such as privatization and military construction, coordination is required to manage the impacts. This coordination has not always occurred. In some cases, DOD initiatives relating to housing allowances and to construction and management of military family housing have been viewed and managed separately rather than in combination to achieve a synergistic impact. One reason for this is that separate DOD organizations manage these two key components of the family housing program. Housing allowances are the responsibility of the Under Secretary of Defense for Personnel and Readiness and primarily are managed centrally at DOD headquarters by the organization responsible for all compensation issues, including basic pay and other types of allowances. Appropriations for housing allowances are included in the services’ military personnel accounts. Construction, management, and privatization of military housing are the responsibility of the Under Secretary of Defense for Acquisition and Technology. 
This organization establishes overall DOD housing policy and delegates primary housing management responsibility to the individual services, their major commands, and individual installations. This organization is responsible for most housing initiatives, including the Milcon program and the privatization initiative. Appropriations for the family housing program are included in the services’ military construction and family housing accounts. DOD officials stated that the two organizations work together and coordinate on matters relating to housing allowances. However, each organization is responsible for its own initiatives, and an overall strategy has not been developed to ensure optimum integration of all initiatives. For example, when a new housing allowance program was developed, there was little discussion between the two organizations on how the program would affect the privatization initiative, even though allowance changes could affect not only the affordability of existing private sector housing but also privatization agreements where rental rates are equal to the servicemembers’ housing allowances. The Congress approved DOD’s request for major changes to DOD’s housing allowance program starting in 1998. Allowances in the future will be based on average housing costs in each geographic area. As the new program is phased in over a 6-year period, DOD officials stated that allowances in high-cost areas are expected to increase and allowances in low-cost areas are expected to remain constant or decline. However, largely because of limited coordination between the DOD offices responsible for housing allowances and housing management, service officials told us that they were uncertain how the new allowance program would affect the privatization initiative, but they did voice some concerns. For example, some officials questioned how a contractor under a privatization agreement would respond if rents were based on housing allowances and the allowances declined. 
Conversely, if allowances increased significantly, rental payments to the contractor could increase significantly, creating the potential for windfall profits. Also, Marine Corps housing officials stated that if housing allowances increased, the need for on-base housing could decrease. For example, with larger allowances some occupants of privatized on-base housing might move to community housing, leaving on-base vacancies available to civilians. This view appears to be supported by a 1997 study on military housing issues by the RAND organization. The study stated that the primary reason military families choose on-base housing is economics. Many families, particularly in lower paygrades, believe that the value of the on-base housing exceeds the amount of allowance they forfeit to live there. RAND also found that few families would choose to live in on-base housing if their housing allowances would permit them to obtain suitable housing in the community without considerable out-of-pocket costs. In November 1997, DOD took an important step in its planning by directing the services for the first time to submit plans for eliminating inadequate family housing. The services were directed to submit plans by May 1, 1998, that identify by installation, housing revitalization requirements and the potential for privatization. Although this information could be an important first step in developing comprehensive housing plans, DOD did not provide written guidance for the services to use in determining revitalization requirements and privatization potential. Further, comprehensive plans that integrate not only privatization but also other elements of DOD’s housing program are needed. The DOD direction to the services did not require the plans to include steps to improve the requirements determination processes; maximize existing private sector housing; develop enhanced referral services; or coordinate use of allowances, military construction, and privatization options. 
On its own initiative, the Air Force appears to have recognized the need for comprehensive, integrated housing plans. Air Force officials stated that privatization alone will not solve the housing problem and that the ultimate solution lies in an integrated approach. With this in mind, in August 1997, the Air Force began working with consultants to develop a housing master plan for each Air Force installation. Air Force officials stated that the master plans will define the most effective housing investment strategy by integrating construction, operations and maintenance revitalization, privatization, and reliance on local community housing. The Air Force expects the first portion of the plans, the potential for privatization, to be completed by May 1998 to comply with the DOD direction. The overall master plans are expected to be completed in December 1998. DOD needs to ensure that the services develop long-range, integrated housing plans that rely on and use all of the available tools, not in isolation but in a coordinated, optimum manner. Such plans can provide the focus needed to ensure accurate housing requirements determinations, maximum use of suitable civilian housing, use of enhanced housing referral services, coordination of housing allowance changes, and appropriate use of privatization and Milcon alternatives. DOD has taken some initial steps toward better planning. However, additional steps can be taken to ensure that the military’s housing problems are addressed in an optimum manner. Also, achieving a more integrated approach has been somewhat hampered by separate DOD organizations responsible for housing allowances and housing construction and management. Although there may be valid reasons for keeping these functions in separate offices, greater efforts are needed to ensure effective coordination on housing issues. 
We recommend that the Secretary of Defense expand the directive to the services concerning their plans for eliminating inadequate housing. Specifically, the Secretary should direct the services to prepare detailed, integrated housing plans that will (1) describe their plans for improving their housing requirements determination processes, (2) demonstrate how they will attempt to maximize their reliance on community housing in accordance with DOD’s stated policy, and (3) outline improvements in housing referral services. The plans should also include analyses of the estimated impact of the new housing allowance program on the availability of housing in local communities and show how housing allowances, traditional military construction, and the privatization initiative will be used in concert to meet DOD’s housing needs in the most economical manner. Each service plan should include estimated milestones for achieving the goals of the plan. We also recommend that the Secretary establish a mechanism to promote more effective coordination between the offices responsible for housing allowances and housing management. DOD partially concurred with our recommendations, stating that it is aware of the need to integrate effectively all the elements of its housing program. DOD said that the services already have begun making specific plans about whether privatization or military construction should be pursued at each installation. DOD also stated that development of standard procedures for determining housing requirements is the subject of a working group first convened in December 1997, which includes representatives from each service. However, DOD stated that the legislative authority for privatization expires in 2001, unless extended or made permanent. As a result, DOD stated its immediate focus is on demonstrating at several prototype sites how housing privatization can be an effective tool for addressing DOD’s housing problems. 
DOD stated that only after it succeeds in this demonstration, and in demonstrating how the other new elements of its housing program can succeed, will its focus turn to better integrating these new tools with all the other aspects of its housing program. Although the steps DOD outlined are positive, we do not agree that DOD should wait until it can demonstrate successes in its privatization program before focusing on integrating the elements of its overall housing program. Better integration of housing elements is needed now to maximize the advantages of the initiative and ensure that housing is revitalized or constructed only at installations where the local communities cannot meet the military housing requirements. Also, we believe all the elements of our recommendation are important aspects of such integration. While DOD mentioned steps related to certain elements, it did not directly address those aimed at promoting maximum reliance on community housing. For example, DOD did not state whether it would encourage the services to improve their housing referral programs or analyze the impact of the new housing allowance program on available housing in the community before proceeding to revitalize or build new housing. We continue to believe that DOD should expand the directive to the services concerning development of installation housing plans to cover each element of our recommendation to help ensure maximum reliance on community housing. An expanded directive to the services in preparing their housing plans would help focus the services’ attention on how they can use the full range of tools available to them in concert to address their housing problems in the most economical way. Moreover, requiring the services to set milestones for achieving their housing goals would help DOD measure progress in improving military housing. 
DOD also agreed that coordination between the offices responsible for housing allowances and housing management is necessary and appropriate but stated existing coordination mechanisms are adequate and effective. We believe that improved coordination during development and implementation of the new allowance program could have increased understanding of how the new program will affect the privatization initiatives. As noted in our report, service officials were uncertain about the relationship of the two initiatives. Moreover, as we pointed out, the new program could result in making more local community housing affordable to military families, thus reducing the need for privatized housing in some locations. We continue to believe that the relationship between military housing and allowances is extremely close, improved coordination between the offices responsible for these issues can lead to a more integrated approach to housing, and new mechanisms are needed to help achieve this improved coordination.
GAO reviewed the Department of Defense's (DOD) Military Housing Privatization Initiative, focusing on DOD's efforts to: (1) measure progress to date; (2) assess issues associated with privatizing military housing; and (3) determine whether the new initiative is being integrated with other elements of DOD's housing programs. GAO noted that: (1) DOD considers privatization to be a powerful new tool to help address the military housing problem; (2) two years have passed since the new authorities were signed into law, yet no new agreements have been finalized to build or renovate military housing; (3) more than a dozen projects are being considered; however, only one project is close to contract signing; (4) according to DOD, progress has been slower than expected because the initiative represents a new way of doing business for both the military and the private sector; (5) many legal, financial, contractual, and budgetary scoring issues had to be resolved to the satisfaction of parties representing the government, developers, and private lenders; (6) although DOD expects implementation to speed up after the first few privatization deals are completed, it is difficult to predict how much the program can be accelerated given the unique circumstances of individual projects; (7) in addition to potential benefits, implementation of the privatization initiative raises several concerns; (8) one concern is whether privatization will result in significant cost savings; (9) to a large degree, privatization shifts funding from military housing construction, operations, and maintenance accounts to military personnel accounts to pay for increased housing allowances used to pay rent to developers of privatized housing; (10) GAO's review of the services' life-cycle cost analyses for two privatization projects disclosed that the difference in the cost of privatization and traditional military construction financing was considerably less than the services' estimates and relatively 
modest; (11) the privatization initiative has not been fully integrated with other elements of an overall housing strategy to meet DOD's housing needs in an optimum manner; (12) comprehensive housing referral services could lessen the need for government housing, yet only the Navy has aggressively pursued this option; (13) better coordination between the separate offices responsible for housing allowances and military housing construction and management could ensure that their decisions on housing matters are made in concert, rather than in isolation, with each other; and (14) comprehensive, better integrated plans could tie together the elements of DOD's housing program and help maximize the advantages of the privatization initiative while minimizing total housing costs.
The GPD program is one of six housing programs for homeless veterans administered by the Veterans Health Administration, which also undertakes outreach efforts and provides medical treatment for homeless veterans. VA officials told us in fiscal year 2007 they spent about $95 million on the GPD program to support two basic types of grants—capital grants to pay for the buildings that house homeless veterans and per diem grants for the day-to-day operational expenses. Capital grants cover up to 65 percent of housing acquisition, construction, or renovation costs. The per diem grants pay a fixed dollar amount for each day an authorized bed is occupied by an eligible veteran up to the maximum number of beds allowed by the grant—in 2007 the amount cannot exceed $31.30 per person per day. VA pays providers after they have housed the veteran, on a cost reimbursement basis. Reimbursement may be lower for providers whose costs are lower or are offset by funds for the same purpose from other sources. Through a network of over 300 local providers, consisting of nonprofit or public agencies, the GPD program offers beds to homeless veterans in settings free of drugs and alcohol that are supervised 24 hours a day, 7 days a week. Most GPD providers have 50 or fewer beds available, with the majority of providers having 25 or fewer. Program rules generally allow veterans to stay with a single GPD provider for 2 years, but extensions may be granted when permanent housing has not been located or the veteran requires additional time to prepare for independent living. Providers, however, have the flexibility to set shorter time frames. In addition, veterans are generally limited to a total of three stays in the program over their lifetime, but local VA liaisons may waive this limitation under certain circumstances. The program’s goals are to help homeless veterans achieve residential stability, increase their income or skill levels, and attain greater self-determination. 
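The per diem reimbursement rule described above (a fixed daily rate for each occupied bed, capped at $31.30 in 2007, with lower payments when a provider's costs are lower or are offset by other funding) can be illustrated with a minimal sketch. The function name and the treatment of other-source funds as a simple per-day offset are assumptions made for illustration, not VA's actual reimbursement procedure.

```python
# Illustrative sketch only: the $31.30 cap is the 2007 figure cited above;
# the function name and the per-day offset handling are assumptions, not
# VA's actual accounting method.
PER_DIEM_CAP_2007 = 31.30  # maximum dollars per person per day

def per_diem_reimbursement(days_occupied, daily_cost, offset_per_day=0.0):
    """Reimburse actual daily cost, net of other-source funding,
    up to the per diem cap, for each day a bed was occupied."""
    net_cost = max(daily_cost - offset_per_day, 0.0)
    daily_rate = min(net_cost, PER_DIEM_CAP_2007)
    return round(daily_rate * days_occupied, 2)

# A provider whose net daily cost exceeds the cap is paid at the cap;
# a provider with lower net costs is reimbursed at cost.
high_cost = per_diem_reimbursement(30, 40.00)        # 30 days at the cap
low_cost = per_diem_reimbursement(30, 25.00, 5.00)   # 30 days at $20.00
```

The cost-reimbursement structure means VA pays only after a bed has actually been occupied, which is why provider payments can run below the nominal cap.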
To meet VA’s minimum eligibility requirements for the program, individuals must be veterans and must be homeless. A veteran is an individual discharged or released from active military service. The GPD program excludes individuals with a dishonorable discharge, but it may accept veterans with shorter military service than required of veterans who seek VA health care. A homeless individual is a person who lacks a fixed, regular, adequate nighttime residence and instead stays at night in a shelter, institution, or public or private place not designed for regular sleeping accommodations. GPD providers determine if potential participants are homeless, but local VA liaisons determine if potential participants meet the program’s definition of veteran. VA liaisons are also responsible for determining whether veterans have exceeded their lifetime limit of three stays in the GPD program and for issuing a waiver to that rule when appropriate. Prospective GPD providers may identify additional eligibility requirements in their grant documents. While program policies are developed at the national level by VA program staff, the local VA liaisons designated by VA medical centers have primary responsibility for communicating with GPD providers in their area. VA reported that in fiscal year 2007, there were funds to support 122 full-time liaisons. Since fiscal year 2000, VA has quadrupled the number of available beds and significantly increased the number of admissions of homeless veterans to the GPD program in order to address some of the needs identified through its annual survey of homeless veterans. In fiscal year 2006, VA estimated that on a given night, about 196,000 veterans were homeless and an additional 11,100 transitional beds were needed to meet homeless veterans’ needs. However, this need was to be met through the combined efforts of the GPD program and other federal, state, or community programs that serve the homeless. 
VA had the capacity to house about 8,200 veterans on any given night in the GPD program. Over the course of the year, because some veterans completed the program in a matter of months and others left before completion, VA was able to admit about 15,400 veterans into the program, as shown in figure 1. Despite VA rules allowing stays of up to 2 years, veterans remained in the GPD program an average of 3 to 5 months in fiscal year 2006. The need for transitional housing beds continues to exceed capacity, according to VA’s annual survey of local areas served by VA medical centers. The number of transitional beds available nationwide from all sources increased to 40,600 in fiscal year 2006, but the need for beds increased as well. As a result, VA estimates that about 11,100 more beds are needed to serve the homeless, as shown in table 1. VA officials told us that they expect to increase the bed capacity of the GPD program to provide some of the needed beds. Most homeless veterans in the program had struggled with alcohol, drug, medical, or mental health problems before they entered the program. Over 40 percent of homeless veterans seen by VA had served during the Vietnam era, and most of the remaining homeless veterans served after that war, including at least 4,000 who served in military or peacekeeping operations in the Persian Gulf, Afghanistan, Iraq, and other areas since 1990. About 50 percent of homeless veterans were between 45 and 54 years old, with 30 percent older and 20 percent younger. African-Americans were disproportionately represented at 46 percent, the same percentage as non-Hispanic whites. Almost all homeless veterans were men, and about 76 percent of veterans were either divorced or never married. An increasing number of homeless women veterans and veterans with dependents are in need of transitional housing, according to VA officials and GPD providers we visited. 
The GPD providers told us in 2006 that women veterans had sought transitional housing; some recent admissions had dependents; and a few of their beds were occupied by the children of veterans, for whom VA could not provide reimbursement. VA officials said that they may have to reconsider the type of housing and services that they are providing with GPD funds in the future, but currently they provide additional funding in the form of special needs grants to a few GPD programs to serve homeless women veterans. VA’s grant process encourages collaboration between GPD providers and other service organizations. Addressing homelessness—particularly when it is compounded by substance abuse and mental illness—is a challenge involving a broad array of services that must be coordinated. To encourage collaboration, VA’s grants process awards points to prospective GPD providers who demonstrate in their grant documents that they have relationships with groups such as local homeless networks, community mental health or substance abuse agencies, VA medical centers, and ancillary programs. The grant documents must also specify how providers will deliver services to meet the program’s three goals—residential stability, increased skill level or income, and greater self-determination. The GPD providers we visited often collaborated with VA, local service organizations, and other state and federal programs to offer the broad array of services needed to help veterans achieve the three goals of the GPD program. Several providers worked with the local homeless networks to identify permanent housing resources, and others sought federal housing funds to build single-room occupancy units for temporary use until more permanent long-term housing could be developed. All providers we visited tried to help veterans obtain financial benefits or employment. 
Some had staff who assessed a veteran’s potential eligibility for public benefits such as food stamps, Supplemental Security Income, or Social Security Disability Insurance. Other providers relied on relationships with local or state officials to provide this assessment, such as county veterans’ service officers who reviewed veterans’ eligibility for state and federal benefits or employment representatives who assisted with job searches, training, and other employment issues. GPD providers also worked collaboratively to provide health care-related services, such as mental health and substance abuse treatment and family and nutritional counseling. While several programs used their own staff or their partners’ staff to provide mental health or substance abuse services and counseling directly, some GPD providers referred veterans off site, typically to a local VA medical center. Despite GPD providers’ efforts to collaborate and leverage resources, GPD providers and VA staff noted gaps in key services and resources, particularly affordable permanent housing for veterans ready to leave the GPD program. Providers also identified lack of transportation, legal assistance, affordable dental care, and immediate access to substance abuse treatment facilities as obstacles to transitioning veterans out of homelessness. VA staff in some of the GPD locations we visited told us that transportation issues made it difficult for veterans to get to medical appointments or employment-related activities. While one GPD provider we visited was able to overcome transportation challenges by partnering with the local transit company to obtain subsidies for homeless veterans, transportation remained an issue for GPD providers that could not easily access VA medical centers by public transit. Providers said that difficulty in obtaining legal assistance to resolve issues related to criminal records or credit problems presented challenges in helping veterans obtain jobs or permanent housing. 
In addition, some providers expressed concerns about obtaining affordable dental care and about wait lists for veterans referred to VA for substance abuse treatment. We found that some providers and staff did not fully understand certain GPD program policies—which in some cases may have affected veterans’ ability to get care. For instance, providers did not always have an accurate understanding of the eligibility requirements and program stay rules, despite VA’s efforts to communicate its program rules to GPD providers and VA liaisons who implement the program. Some providers were told incorrectly that veterans could not participate in the GPD program unless they were eligible for VA health care. Several providers understood the lifetime limit of three GPD stays but may not have known or believed that VA had the authority to waive this rule. As a consequence, we recommended that VA take steps to ensure that its policies are understood by the staff and providers with responsibility for implementing them. In response, VA took several steps in 2007 to improve communications with VA liaisons and GPD providers, such as calling new providers to explain policies and summarizing their regular quarterly conference calls on a new Web site, along with posting new or updated manuals. Language on the number and length of allowable stays in the providers’ guide has not changed, however. VA assesses performance in two ways—the outcomes for veterans at the time they leave the program and the performance of individual GPD providers. VA’s data show that since 2000, a generally steady or increasing percentage of veterans met each of the program’s three goals at the time they left the GPD program. Since 2000, proportionately more veterans are leaving the program with housing or with a better handle on their substance abuse or health issues. 
During 2006, over half of veterans obtained independent housing when they left the GPD program, and another quarter were in transitional housing programs, halfway houses, hospitals, nursing homes, or similar forms of secured housing. Nearly one-third of veterans had jobs, mostly on a full-time basis, when they left the GPD program. One-quarter were receiving VA benefits when they left the GPD program, and one-fifth were receiving other public benefits such as Supplemental Security Income. Significant percentages also demonstrated progress in handling alcohol, drug, mental health, or medical problems and overcoming deficits in social or vocational skills. For example, 67 percent of veterans admitted with substance problems showed progress in handling these problems by the time they left. Table 2 indicates the numbers or percentages involved. When it visited GPD providers in 2005-2006, VA’s Office of Inspector General (OIG) found that VA officials had not been consistently monitoring the GPD providers’ annual performance as required. The GPD program office has since moved to enforce the requirement that VA liaisons review GPD providers’ performance when the VA team comes on-site each year to inspect the GPD facility. To assess the veterans’ success, VA has relied chiefly on measures of veterans’ status at the time they leave the GPD program rather than obtaining routine information on their status months or years later. In part, this has been due to concerns about the costs, benefits, and feasibility of more extensive follow-up. However, VA completed a onetime study in January 2007 that a VA official told us cost about $1.5 million. The study looked at the experience of a sample of 520 veterans who participated in the GPD program in five geographic locations, including 360 who responded to interviews a year after they had left the program. Generally, the findings confirm that veterans’ status at the time they leave the program can be maintained. 
We recommended that VA explore feasible and cost-effective ways to obtain information on how veterans are faring after they leave the program. We suggested that where possible they could use data from GPD providers and other VA sources, such as VA’s own follow-up health assessments and GPD providers’ follow-up information on the circumstances of veterans 3 to 12 months later. VA concurred and told us in 2007 that VA’s Northeast Program Evaluation Center is piloting a new form to be completed electronically by VA liaisons for every veteran leaving the GPD program. The form asks for the veterans’ employment and housing status, as well as involvement, if any, in substance abuse treatment, 1 month after they have left the program. While following up at 1 month is a step in the right direction, additional information at a later point would yield a better indication of longer-term success. Mr. Chairman, this concludes my remarks. I would be happy to answer any questions that you or other members of the subcommittee may have. For further information, please contact Daniel Bertoni at (202) 512-7215. Also contributing to this statement were Shelia Drake, Pat Elston, Lise Levie, Nyree M. Ryder, and Charles Willson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Subcommittee on Health of the Committee on Veterans' Affairs asked GAO to discuss its recent work on the Department of Veterans Affairs' (VA) Homeless Providers Grant and Per Diem (GPD) program. GAO reported on this subject in September 2006, focusing on (1) VA's estimates of the number of homeless veterans and transitional housing beds, (2) the extent of collaboration involved in the provision of GPD and related services, and (3) VA's assessment of program performance. VA estimates that about 196,000 veterans nationwide were homeless on a given night in 2006, based on its annual survey, and that the number of transitional beds available through VA and other organizations was not sufficient to meet the needs of eligible veterans. The GPD program has quadrupled its capacity to provide transitional housing for homeless veterans since 2000, and additional growth is planned. As the GPD program continues to grow, VA and its providers are also grappling with how to accommodate the needs of the changing homeless veteran population that will include increasing numbers of women and veterans with dependents. The GPD providers we visited collaborated with VA, local service organizations, and other state and federal programs to offer a broad array of services designed to help veterans achieve the three goals of the GPD program--residential stability, increased skills or income, and greater self-determination. However, most GPD providers noted key service and communication gaps that included difficulties obtaining affordable permanent housing and knowing with certainty which veterans were eligible for the program, how long they could stay, and when exceptions were possible. 
VA data showed that many veterans leaving the GPD program were better off in several ways--over half had successfully arranged independent housing, nearly one-third had jobs, one-quarter were receiving benefits, and significant percentages showed progress with substance abuse, mental health, or medical problems or demonstrated greater self-determination in other ways. Some information on how veterans fare after they leave the program was available from a onetime follow-up study of 520 program participants, but such data are not routinely collected. We recommended that VA take steps to ensure that GPD policies and procedures are consistently understood and to explore feasible means of obtaining information about the circumstances of veterans after they leave the GPD program. VA concurred and, following our review, has taken several steps to improve communications and to develop a process to track veterans' progress shortly after they leave the program. However, following up at a later point might yield a better indication of success.
OSHA established the Consultation Program in 1975 as a mechanism, separate from its enforcement program, to reduce workplace injuries and illnesses, especially for small employers who often cannot afford in-house or private sources of assistance. The program operated in the shadow of OSHA’s much larger and more visible enforcement program until the mid-1990s, when OSHA began to give greater emphasis to consultation. Consistent with that emphasis, funding for the Consultation Program increased by over 50 percent between fiscal years 1996 and 2001. Many knowledgeable officials see this trend toward cooperation as enhancing OSHA’s overall efforts to protect workplace safety and health. However, program activity for the same period did not increase substantially with the increase in funding. For example, the number of total consultation visits increased about 2 percent nationwide between fiscal years 1996 and 2000 (the last year for which complete program data are available)—from 25,986 to 26,418. Further, the number of total hazards identified through the program decreased about 8 percent—from 188,577 to 171,167. Figure 1 shows the yearly change in funding compared with the change in the number of total visits from fiscal years 1996 to 2000. With one exception, each state has designated a single agency (i.e., labor, commerce, health, or environmental protection) or a state university to deliver the consultation services. The state entities running this program have significant flexibility in delivering the consultations. However, there are procedures and requirements codified in regulation that each state consultation program must follow. State consultation programs are required, for example, to give service priority to small, high-hazard employers and ensure that worker representatives are involved in initial and closeout meetings with the employer. 
With the passage of the Government Performance and Results Act of 1993 (GPRA), OSHA established three agencywide strategic goals related to improving workplace safety and health. The first goal, in particular, is to “improve workplace safety and health for all workers as evidenced by fewer hazards, reduced exposures, and fewer illnesses, injuries, and fatalities.” Under this strategic goal, OSHA established several performance goals, most recently for fiscal years 1997-2002, that contained specific objectives for reducing workplace injuries, illnesses, and fatalities. As shown in table 1, OSHA has identified four of these performance goals as being applicable to the Consultation Program. OSHA established these strategic and performance goals for the 31 consultation programs in states (29 states, the District of Columbia, and Guam) for which OSHA has primary enforcement authority. These states are under federal OSHA jurisdiction for workplace safety and health. The remaining 19 state consultation programs operate in states that have their own safety and health programs. Each of these “state-plan” states has adopted OSHA’s first strategic goal but has developed its own related performance goals. OSHA developed a database system—the OSHA Performance and Tracking Measurement System (OPTMS)—to obtain information on activities related to OSHA’s GPRA goals. According to OSHA officials, this Web-based system requires no additional reporting because it includes 276 data elements that already exist in other OSHA data systems. One hundred of the elements are relevant to the Consultation Program. For example, OPTMS tracks the number of consultations that are performed in targeted industries, as well as the number of employers participating in the Consultation Program that develop a safety and health program. Only the consultation programs in the 31 federal OSHA states provide data into OPTMS for evaluation. 
OSHA also uses its Integrated Management Information System (IMIS) to collect agencywide data, including data on the activities of the Consultation Program. Consultation Program managers at OSHA use IMIS to compile two reports providing data specific to the Consultation Program. The first of these, the Consultation Activity Measures Report (CAM), tracks 18 quantitative indicators of the Consultation Program’s activities, such as the number of days from request to visit and the number of hazards identified per visit. The second, the Mandated Activities Report for Consultation (MARC), tracks five indicators of the Consultation Program’s activities that reflect regulatory requirements, including the proportion of visits that were made to high-hazard establishments and the number of visits to smaller employers. In addition, to help measure the Consultation Program’s progress toward achieving performance goals, OSHA introduced, for the first time in fiscal year 2000, a process for use by its regional offices to provide guidance to the state consultation programs. Under this process, the regional offices are supposed to assist each state consultation program in developing an annual project plan that contains goals. Each state consultation program is supposed to develop an end-of-year performance report that the regional offices review. The regional offices also prepare reports on the state consultation programs under their jurisdiction. Both types of year-end reports were being prepared for the first time during our audit work, so they were too new to evaluate. To fund the Consultation Program, OSHA provides grants to the state entities delivering the services. OSHA provides 90 percent of the funds needed to carry out the program, and the state consultation programs provide the remaining 10 percent. OSHA gives each state consultation program a base amount equal to the funding it received during the prior year plus any cost-of-living adjustment. 
If there are any funds remaining, OSHA uses a formula it instituted in fiscal year 1999 to distribute them to state consultation programs. As part of this formula, OSHA first divides the remaining funds in half. It distributes the first half based on the share of the overall funding that each state consultation program has. It then takes the second half and distributes it according to yearly data for three factors that approximate a program’s workload—level of gross state product, total state nonfarm employment, and number of small high-hazard establishments in the state. For each year during fiscal years 1996-2000, each state consultation program received a funding increase. The states must obligate the federal funds before the end of the fiscal year for which they were appropriated, or else they expire and are no longer available to the Consultation Program.

Industry associations as well as participating employers identified the opportunity to improve workplace safety and health and to prepare for or preempt an OSHA inspection as two incentives for participation in the Consultation Program. Industry association officials, worker representatives, and participating employers we interviewed also identified concerns about costs to address identified hazards and the qualifications of program consultants as two disincentives.

Industry associations as well as participating employers identified the opportunity to minimize worker injuries and illnesses and otherwise improve workplace safety and health as one incentive for participating in the Consultation Program. They saw activities to promote workplace safety and health as an opportunity to reduce the number of employee workdays lost because of injuries and illnesses; retain experienced workers and minimize turnover; promote strong labor-management relations, particularly in union workplaces; and possibly reduce workers’ compensation costs.
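OSHA’s two-part formula for distributing remaining funds, described earlier in this section, can be sketched as follows. This is a minimal illustration rather than OSHA’s actual implementation: the program names and dollar amounts are hypothetical, and because the report does not specify how the three workload factors are weighted, the sketch assumes each factor contributes equally after normalization.

```python
def distribute_remaining(remaining, base_funding, workload_factors):
    """Split `remaining` dollars across state consultation programs.

    base_funding: {program: prior-year funding} -- drives the first half.
    workload_factors: {program: (gross_state_product, nonfarm_employment,
                                 small_high_hazard_establishments)}
                      -- drives the second half.
    """
    half = remaining / 2

    # First half: proportional to each program's share of overall funding.
    total_base = sum(base_funding.values())
    first = {p: half * (f / total_base) for p, f in base_funding.items()}

    # Second half: proportional to a workload index built from the three
    # factors; equal weighting of the normalized factors is an assumption.
    totals = [sum(v[i] for v in workload_factors.values()) for i in range(3)]
    index = {p: sum(v[i] / totals[i] for i in range(3)) / 3
             for p, v in workload_factors.items()}
    second = {p: half * w for p, w in index.items()}

    return {p: first[p] + second[p] for p in base_funding}

# Hypothetical example with two programs:
alloc = distribute_remaining(
    remaining=100_000,
    base_funding={"A": 600_000, "B": 400_000},
    workload_factors={"A": (3.0, 2.0, 150), "B": (1.0, 1.0, 50)},
)
```

Under these assumptions, the two halves always sum back to the remaining amount, and a program’s award grows with both its share of base funding and its relative workload.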
Even employers that had in-house expertise on workplace safety and health said that the Consultation Program could help them identify hazards that they might otherwise have overlooked and resolve problems that previously they had not addressed directly. Some industry associations, as well as participating employers, identified the Consultation Program’s potential for reducing the number of citations (and fines) that might result from a subsequent OSHA inspection as a second incentive. Although they saw safety as the factor driving employers to request consultations, several industry associations said that an imminent inspection would serve as a strong incentive for their members to participate in a consultation. Many industry associations believed that an employer who had a consultation would likely fare better if an inspection did occur. Several of the employers we interviewed who had participated in consultations said that their primary incentive for doing so had been to prepare for or preempt an inspection and the possibility of substantial fines. These employers generally had been notified of a possible inspection, either directly by the state or federal OSHA or indirectly through industry associations, business newsletters, or presentations indicating that OSHA was targeting their industry. OSHA officials as well as most of the state consultation program officials we interviewed echoed the view that there is a strong positive correlation between anticipated inspections and an employer’s request for a consultation. They provided examples, which follow, of the effect of an increased emphasis on inspections.

In 1993, Maine OSHA implemented the “Maine 200” program, through which it contacted the 200 Maine employers with the highest number of serious workplace injuries and illnesses. OSHA offered these employers a choice of working cooperatively with OSHA to address workplace hazards or remaining on a list of work sites that would likely be inspected.
About 98 percent of these employers chose the former. According to a Maine consultation program official, many of these employers turned to the Maine consultation program for assistance in identifying and correcting hazards. Maine consultants added that subsequent enforcement programs, focused on local employers, have also been very successful in encouraging employers to use the Consultation Program. In 1994, California initiated a state-funded consultation program, called the Targeted High Hazard Consultation Program, through which California sends letters to employers with high workers’ compensation rates to notify them that they will be inspected unless they seek help from available consultation programs or their insurance carrier. According to California program officials, this effort increased the number of consultations requested from the state-funded consultation program. This program was recently merged with the state OSHA-funded consultation program. Industry association officials and employers we interviewed identified concerns about the costs of correcting hazards and the qualifications of program consultants as disincentives. First, officials from both groups stated that employers are concerned that the cost of correcting the hazards identified by a consultation might be prohibitive. They feared that if they could not afford to correct all the hazards in an acceptable time frame, the consultant might report them to OSHA for an inspection. Consultants and state program managers we interviewed said that, in their experience, the costs associated with addressing hazards are generally manageable, but fears about costs still persist among employers. Second, industry associations and employee representatives expressed concern that some state consultation programs employ individuals who lack the appropriate credentials or experience to serve as workplace safety and health consultants. 
Some states do not require that consultants have advanced or specialized degrees. Other industry associations and employee representatives said that even those consultants with credentials in workplace safety or health-related fields might not have the industry expertise necessary to identify and suggest remedies for hazards. The health industry was one area cited in connection with this concern.

General distrust of OSHA was another factor identified as a disincentive for participation in consultations. Employers feared that consultants would inappropriately provide information about the consultation to OSHA, prompting an inspection. However, none of the industry association representatives or employers we spoke with knew of any instances where this had actually happened. Related to this concern was the perception that state consultation programs were oriented more favorably toward workers than employers. Industry associations and employers identified recently developed regulatory requirements that they believed reinforced this perception. These included the requirement that employers permit an employee representative to participate in all phases of the consultation visit and the requirement to post a list of the hazards identified by the consultation and the date by which the hazards are to be corrected.

Although employers have both incentives and disincentives for participating in the consultation program, they may be unaware, in general, of the program and what it offers. Industry association officials said that they do not often see the Consultation Program promoted at key employer events. In addition, they did not think that the Consultation Program was adequately promoted as part of other federal efforts that help small employers establish and maintain safe workplaces. Employee representatives echoed this sentiment, saying that they did not believe that most employers knew about the program.
Others said that even if the employers were aware of it, they might not know who can participate or how to initiate a consultation. During the course of our work, we found several state consultation programs that were attempting to address some of these disincentives. For example:

Maine’s promotion program. The Maine consultation program has developed a campaign that, among other things, addresses the disincentive posed by employers’ concerns about costs. This program has used state funds to develop a promotion campaign, called Safety Works, which emphasizes that, in the long run, employers will very likely save money by eliminating serious workplace hazards. This campaign also emphasizes that consultants are willing to work with employers to identify remedies that do not financially overburden the employer. Maine also provides low-cost loans to the employer, if necessary, to address the identified hazards.

Pennsylvania’s university-based program. The Pennsylvania consultation program operates out of the Indiana University of Pennsylvania and is one of eight state programs located at a state university. Between fiscal years 1996 and 2000, the program experienced an 88-percent increase in requests for consultation visits, while nationwide requests for consultation visits increased only marginally. Operating out of a university has allowed the Pennsylvania program a number of benefits. First, potential clients are less likely to perceive university-based programs as an extension of OSHA. Second, the university can attract more highly qualified consultants because it can offer higher salaries than the state government, as well as competitive benefits. For example, it offers consultants the opportunity to take sabbaticals from consultation to teach at the university.
In measuring its progress toward reaching its GPRA performance goals for reducing injuries and illnesses, OSHA does not seek to isolate the contributions of each of its program activities (e.g., consultation or compliance inspections). For this and other reasons, the agency cannot measure the impact of the Consultation Program. Specifically, the agency cannot measure the extent to which the program contributes to accomplishing OSHA’s goals for reducing the number of workplace injuries and illnesses associated with targeted industries or hazards. Similarly, the agency cannot measure the extent to which the program contributes to the agency’s reaching its nationwide goal. Finally, state consultation programs have concerns about OSHA’s system for collecting activity information, which they characterized as burdensome and inefficient.

OSHA cannot assess the extent to which its Consultation Program is helping to achieve OSHA’s goals. To measure its success at reducing injuries and illnesses in targeted areas, OSHA employs a three-step process that uses data from OPTMS and nationwide data from the Bureau of Labor Statistics (BLS). Using OPTMS, OSHA first identifies all agency activities in a given period that were directed toward OSHA goals. OSHA then uses BLS data to measure changes in the level of workplace injuries or illnesses in targeted industries or caused by targeted hazards. If the level of measured injuries and illnesses declines, OSHA infers that the activities it conducted contributed to that decline. For three reasons, however, these efforts do little to identify the impact of the Consultation Program. First, in conducting its analysis, OSHA collects data only from the 31 state consultation programs located in federal OSHA states, meaning that the activities of the 19 “state plan” state consultation programs are not reflected.
Second, in assessing progress toward its goals, OSHA does not isolate the activities of the Consultation Program from those of its other programs (such as enforcement), which means that OSHA does not know the relative impact of the Consultation Program on the achievement of its goals. Third, even if OSHA were able to identify the separate activities of the Consultation Program, its analysis would not demonstrate a direct causal link between the services offered by the Consultation Program and nationwide changes in workplace safety and health, which may also result from other influences. Establishing a better connection between the Consultation Program and reductions in the number of injuries and illnesses among workers would require having data from all state programs and all employers receiving consultations, both before and after the consultation. Other factors, such as new management, might affect the level of injuries and illnesses at a workplace. Nonetheless, this information could give OSHA a better understanding of the outcomes of the Consultation Program than it has now. The Consultation Program currently collects data on workers’ injuries and illnesses in preparation for each consultation visit, but according to OSHA officials, the program has been hesitant to collect the same data for the period following the consultation. OSHA has been reluctant to collect these data because agency officials believe that such data may not be available and because their collection may burden employers, be outside the agency’s authority, or raise issues regarding the confidentiality of employer information. While these issues need to be considered, they do not appear to be insurmountable. There does not appear to be any prohibition against OSHA’s obtaining this information from all 50 state programs, and several state programs collect this information already. 
Also, consultants told us that they believe there are ways for OSHA to obtain this information while maintaining its pledge of confidentiality. For example, state program consultants could collect and enter aggregate information into OSHA’s data system (i.e., compiled into industry or statewide information) without divulging establishment names. OSHA also experiences difficulty in assessing the Consultation Program’s contribution to the agency’s nationwide performance goal—reducing by 20 percent the number of worker injuries and illnesses at 100,000 workplaces where it initiates an intervention. To measure its progress toward this goal, OSHA conducted a study in which it used two databases—one maintained by BLS and the other in-house—to identify the number of workplaces where injuries and illnesses had been reduced by 20 percent from October 1995 to March 1997. This report concluded that OSHA was making progress toward achieving this goal. However, it also noted that the study was not designed to isolate the effects of different types of interventions. Thus, it could not be used to evaluate the impact of the Consultation Program. State consultation program managers, as well as OSHA officials, expressed concern about the burden placed on state programs by OSHA’s data reporting requirements. This was especially true in those cases where these programs also had state-imposed reporting requirements. State program officials said that they believed much of the time spent on data reporting was wasted because the reporting requirements focus more on the number of hazards identified than on how effectively the hazards were remedied. Consultants with the Maine consultation program stated that they devote half their time to fulfilling reporting requirements, while New York program officials said that, for each day spent with employers, consultants spend 2½ days in the office complying with data reporting requirements, including the consultation report for the employer. 
In California, consultation officials estimated that the state program could increase the number of consultations it conducted by as much as 20 percent if federal OSHA reporting requirements were reduced. In addition, the number of activity indicators that OSHA tracks—over 100 indicators that are relevant to the state consultation programs in the MARC, CAM, and OPTMS systems—dilutes OSHA’s ability to communicate the program’s key goals and priorities to states. State consultation program officials said that the large number of activity indicators tracked by OSHA makes it difficult to determine which activities—and associated goals—should have priority. For example, OSHA tracks the number of consultation visits state consultation programs make. This may signal states to give priority to conducting as many visits as possible to many different employers. However, OSHA also tracks the number of consultations that result in employers developing safety and health programs. This may signal states to give priority to conducting numerous visits to a single employer. While focusing more intensely on individual employers may increase the likelihood that employers develop health and safety programs, it may, at the same time, reduce the number of consultations overall. State consultation program managers also said that the sheer number of activities OSHA tracks means that state programs cannot pursue all of the indicators. As a result, program managers tend to select the three to five that they believe are most important. For example, one state program manager we visited focused almost entirely on initial visits, hazards corrected, and timeliness of the report; another placed greatest emphasis on total visits, combined safety and health visits, and certain targeted goals. Finally, state consultation program officials told us that the indicators used, some of which were developed in the 1980s, do not reflect how some state consultation programs currently operate.
For example, the New York and California programs focus on providing employers with long-term assistance that includes, among other things, extensive follow-up, technical assistance regarding the best ways to correct hazards, training for management and workers, and guidance on the proper installation of new equipment and the introduction of new processes. However, these activities are not represented in current indicators. The California, Maine, and New Jersey programs have placed increased emphasis on promoting the Consultation Program within the local business community by making presentations, participating in conferences, and utilizing electronic and print media. Although OSHA currently collects information on activities intended to promote the program, it does not analyze this information. OSHA officials, as well as every Consultation Program manager we interviewed, acknowledged that OSHA needs to replace the current performance measurement system with a less burdensome and more relevant system. For the past 8 years, OSHA Consultation Program officials have recognized the need for a separate system for obtaining activity data that would operate outside IMIS; however, resource constraints and higher priorities within the agency have prevented this. OSHA has been attempting to improve the electronic transfer of data from the states into IMIS, and Consultation Program staff at OSHA have been told that they cannot make any significant changes to the IMIS-based data systems until this project to update IMIS is completed. In the meantime, OSHA officials said that they have told the state programs that many of the indicators, such as those in the CAM, should no longer drive program activity. However, OSHA continues to require state programs to provide data on all of these indicators and includes some of the indicators in newly developed yearly program monitoring reports that each state program must complete. 
OSHA’s process for allocating federal funds to state consultation programs does not encourage these programs to make effective use of these funds. Because OSHA does not collect data to assess the Consultation Program’s performance, it cannot consider such data in its allocation process. Also, even though OSHA collects data on state program activity, it does not use this information to allocate funds to state consultation programs. It also does not consider state programs’ ability to use allocated funds. As a result, state consultation programs receive additional funds each year regardless of prior performance. At the same time, OSHA’s 10 regional offices do not pursue consistent policies for conducting fiscal audits of consultation programs within their jurisdictions and lack ready access to spending information needed to oversee the operations of consultation programs. In allocating funds to state consultation programs, OSHA does not factor in the programs’ activity levels. As a result, programs can routinely decrease the number of consultations they conduct and the number of hazards they identify and still receive funding that is the same or greater than they received in the previous fiscal year. All state consultation programs received increases in funding between fiscal years 1996 and 2000, even though 16 consultation programs experienced often significant decreases in activity levels, as shown in table 2. Although neither OSHA nor state program officials we interviewed have analyzed why one-third of the state consultation programs appeared to be doing less with additional funds, they did identify several factors that might affect activity levels. As shown in table 3, these included, among others, data collection mechanisms that do not reflect all the activities that programs pursue, data collection requirements that potentially decrease time available for consultation, and hiring freezes that result in unfilled consultant positions. 
However, OSHA officials agreed that, at some point, decreases in basic program activities, such as the number of consultation visits or hazards identified, raise questions about resource utilization. We also found that OSHA provided increased funding to states even if they had a history of being unable to use their funding allocations for prior years. In total, 31 of the programs were unable to use all their funds at least once between fiscal years 1996 and 1999. (How many state programs will have this experience in fiscal year 2000 is unknown because they are still reporting their expenditures to OSHA.) Table 4 identifies the 7 state programs that were unable to use all of their funds every year during that period. In addition, 8 other programs were unable to use all their funds for 3 of those 4 years, 10 programs had this experience for 2 of those years, and 6 programs had this experience for 1 year. Unused funds complicate OSHA’s ability to ensure that funds are used to support the achievement of its strategic goals. State consultation programs sometimes notify OSHA of their inability to use the funds, in which case OSHA can reallocate those funds to other programs. However, because these funds arrive late in the year and will not be made available in subsequent years, OSHA instructs the state consultation programs not to use them for salaries or fringe benefits, other than one-time bonuses. As a result, state consultation programs generally use these funds to buy equipment or other supplies. In other cases, state consultation programs that cannot use the funds do not alert OSHA to that fact, in which case all unobligated funds expire at the end of the fiscal year and are no longer available for use. OSHA, in partnership with the states, changed its allocation process starting in fiscal year 1999, but did not use this opportunity to factor in activity levels or use of funds. 
By not doing so, OSHA lost the opportunity to use the funding allocation process to influence state program performance. As noted earlier, under OSHA’s current allocation process, state programs are ensured a level of funding equal to what they received in the previous year, as long as the program budget has not decreased. In addition, state programs are awarded additional funds based on their size rather than their activity levels or a demonstrated ability to use the funds allocated to them. As a result of its limited monitoring, OSHA lacks the information necessary to identify which state programs need additional guidance to achieve agency goals. OSHA’s 10 regional offices, which are responsible for monitoring state consultation program expenditures, have inconsistent policies concerning when, and under what circumstances, they conduct audits of the state consultation programs. We found that some regional offices conducted annual financial audits of state consultation programs in their jurisdiction, while others had not audited state programs under their jurisdiction for as long as 5 years. As of August 2001, seven OSHA regional offices had not audited 17 state consultation programs under their jurisdiction for over a year. For example, Region 3 (Philadelphia) had not audited the Maryland consultation program ($682,000 budget in fiscal year 2000) for 5 years. OSHA Region 9 (San Francisco) had not audited the largest program, California ($4.45 million budget in fiscal year 2000), in 2 years. Region 9 also had not audited the Guam and Hawaii programs in 2 years, despite knowing about the significant management and fiscal difficulties these programs have. Because the relative amounts provided to each state for the consultation program are small in comparison with other federal programs, the expenditure of these funds is also not likely to be audited by any other entity. 
OSHA officials said they have not provided guidance to regions on the appropriate interval between routine financial audits or the conditions that might warrant an out-of-cycle audit. OSHA officials also said that they would like the regions to conduct more audits but the decline in staff resources in the field has adversely affected the regions’ ability to do so. OSHA’s regional offices also do not have access to financial data demonstrating how state consultation programs spend federal funds by object class (i.e., salaries, fringe benefits, travel, equipment, supplies, contracts, and other expenses) unless they conduct an audit or obtain special authority from the Office of Management and Budget (OMB) to routinely collect this information. At the beginning of each fiscal year, state consultation programs report how they plan to spend the funds by object class; however, there is no requirement to provide a detailed report showing how the funds were actually spent. Instead, they are required to report quarterly and yearly a single figure showing only total expenditures. OSHA officials said they would like to collect detailed information on expenditures to improve monitoring. We found that most of the state consultation programs we contacted had this information readily available and did not believe it would be an onerous requirement to provide it routinely to OSHA. To date, OSHA has not petitioned OMB to collect this information. Because OSHA’s regional offices do not have ready access to detailed information on expenditures, they cannot readily monitor the actual use of funds or state reprogramming of funds from one object class to another. This could lead state programs to deviate from agreed-upon spending patterns without informing OSHA or, potentially, to use funds for inappropriate purposes. 
For example, of the 12 programs that experienced declines in their level of activity from fiscal years 1996 to 2000 and provided us with information on actual expenditures, half reported that they reprogrammed at least 20 percent of the funds they originally projected to spend on salaries and benefits to other spending categories in at least 1 fiscal year. There is insufficient information to know whether these activities violated program requirements or resulted in inappropriate expenditures. However, in some cases, it appears that staffing or other long-standing limitations at the state level could raise questions about the basis for the state consultation program’s original funding request. For example, in some cases, state-imposed hiring or salary freezes had been in place for years, which made it unlikely that vacancies could have been filled. However, state program officials continued to request funds to fill these vacancies. When the vacancies were not filled, they reprogrammed these funds into other areas. While state consultation programs must comply with federal review procedures to reprogram federal funds, regional office officials stated that these were not entirely effective tools for keeping up with such actions by state programs. As a result, OSHA lacks information on the extent to which state programs are using the funds or potentially reprogramming them into other areas, and it does not know whether state programs are using the funds in the best way to achieve agency goals. Two large consultation programs illustrate the potential problems caused by this lack of oversight. Both programs experienced decreases in program activity between fiscal years 1996 and 2000 and appeared to reprogram large amounts of funding, not necessarily with the knowledge of OSHA regional officials. 
For each of fiscal years 1996-2000, the first of these state consultation programs submitted to OSHA budget projections that included over $300,000 for indirect administrative charges, for a total of approximately $1.76 million. However, end-of-year information on actual expenditures showed that, for the 5-year period, this program spent approximately $87,000 (or less than 5 percent) of the $1.76 million on indirect administrative costs. Since fiscal year 1996, the second state consultation program had been developing its planned expenditures on the assumption that the program would be fully staffed. It did so with full knowledge that this was unlikely to occur, given that the program was experiencing high and increasing vacancy rates for consultants (37 percent in fiscal year 2000). As a result, the program reprogrammed much of the federal funds slated for paying salaries and benefits to consultants into supplies and equipment or contracts to obtain promotional services. It also used some of the excess personnel funding to make up for the shortfall in salaries and benefits paid to existing staff who received higher than the minimum pay in their grade. (According to officials with this consultation program, state agencies are required to budget salaries and benefits as if all staff received the lowest pay in their grades.) Many believe that OSHA’s increased effort to emphasize cooperation rather than confrontation—as signaled by the increase in funds to the Consultation Program—is a move in the right direction. Yet OSHA has made this commitment without establishing the performance measurement system needed to determine how well the Consultation Program contributes to OSHA’s central mission and resulting goals— reducing workplace injuries and illnesses. OSHA does not know to what degree it can rely on consultation activities to achieve these goals or the extent to which it should use consultations in combination with its enforcement activities. 
Having adequate goals and measurement systems for the Consultation Program would allow OSHA to establish a link between voluntary, cooperative efforts at employers’ workplaces and any subsequent reductions in worker injuries and illnesses at those workplaces and potentially increase the value of the program for employers and workers. Although OSHA is in the process of updating IMIS, these efforts, in themselves, will not improve the agency’s ability to assess the Consultation Program’s performance, reduce the reporting burden faced by program managers, or address the confusion that results from programs having too many indicators to track. Without determining the kind of information it needs to measure the Consultation Program’s progress toward attainment of basic agency goals and including state program managers in this process, OSHA will be unable to identify those indicators that provide the best measure of program performance and eliminate those that pose an unnecessary burden on consultants. Moreover, it will be unable to focus state consultation programs on those activities that best match the agency’s priorities. OSHA has not used its funding allocation process to influence the activities of state programs. In the absence of a link between a state consultation program’s performance and its funding, state programs have continued to receive funds equal to what they have received in previous years even if they reduced the number of consultations they conduct and the number of hazards they identify. Large programs will continue to be awarded the lion’s share of funding simply because they are large, rather than because they are successful. Factoring a program’s performance into the allocation process is no easy task, nor can it necessarily be accomplished within a short time frame.
However, it is very much in line with the GPRA goal of encouraging agencies to be results oriented and to ensure that the Consultation Program, as part of OSHA, is doing what it is supposed to do. Without a link between funding and performance, there can be no assurance that federal funds are working toward the achievement of agency goals. With regard to financial oversight, OSHA has insufficient knowledge about how program funds are being spent. In the absence of clear guidance from OSHA regarding what factors warrant financial audits, the frequency with which regional offices audit state consultation programs will continue to vary. Moreover, programs that are known to have significant fiscal and management problems, significant declines in program activity, or that request funds each year for one purpose and then reprogram them for another, will be able to continue these practices unchecked. OSHA may have the opportunity to collect the necessary expenditure information without auditing state programs, but such action may need approval from OMB. Correcting these problems is challenging because doing so requires determining how much of the agency’s limited oversight resources should be devoted to ensuring that funds are spent properly, even when the amount of these funds might be relatively small. However, without such information, OSHA is unable to show itself or others how well state consultation programs are using federal funds. 
To strengthen OSHA’s ability to assess the Consultation Program’s progress toward key agency goals, we recommend that the Secretary of Labor direct the Assistant Secretary for Occupational Safety and Health to (1) require state consultation programs to collect and forward to OSHA data on injuries and illnesses from employers participating in the Consultation Program at some point after the consultation is completed, for use in analyzing whether there is a relationship between participation in the program and reductions in workplace injuries and illnesses, and (2) review reporting requirements with an eye toward eliminating indicators that no longer reflect the program while adding new ones that do. Decisions about which indicators to eliminate or develop should be driven by the kind of information needed to measure progress toward goals and send a clear message to state programs on agency priorities. Any decisions should be accomplished in cooperation with state program managers and should ultimately contribute to reducing the reporting burden on state consultation programs. This process should be a key component of any upgrade the agency performs on IMIS. To help OSHA ensure that the funding allocation process encourages state consultation programs to work toward agency goals, we recommend that the Secretary of Labor direct the Assistant Secretary for Occupational Safety and Health, in cooperation with state partners, to develop a plan and timetable for factoring incentives into the allocation process. In so doing, OSHA may want to (1) develop performance goals for inclusion in the allocation formula or (2) set aside up to 20 percent of consultation funds for distribution by the Secretary in accordance with separate criteria that reward good performance or address specific state program needs. 
To help ensure better oversight of state expenditures of consultation funds, we recommend that the Secretary of Labor direct the Assistant Secretary for Occupational Safety and Health to (1) provide specific guidance to the regional offices regarding the monitoring of state-level expenditures of Consultation Program funds, including the criteria or situations under which regional offices should review program spending or conduct audits of expenditures, and (2) seek to routinely obtain program expenditure data by object class from state consultation programs, either by conducting more frequent financial audits or by obtaining the necessary authority from OMB. OSHA generally agreed with our recommendations and said the report would help improve the Consultation Program (see app. I). OSHA’s most significant concern with the draft report related to the extent to which it should be held accountable for measuring the Consultation Program’s contribution toward achieving GPRA performance goals. Among other things, OSHA stated that it did not believe that GPRA requires annual measurement of individual programs’ contributions toward meeting these goals. As such, OSHA noted that it had chosen to evaluate the effect of its entire complement of programs (consultation, enforcement, and others) on achieving performance goals related to reducing workplace injuries and illnesses. We did not examine whether OSHA was technically in compliance with GPRA requirements. However, regardless of the answer to this question, we believe that it is important for OSHA to know the extent to which the Consultation Program is contributing to the agency’s primary mission. This type of information is central to managing the agency’s resources. As such, we were pleased to note that OSHA acknowledged the need to find opportunities to evaluate the effectiveness of its individual programs. 
OSHA suggested a number of technical changes to improve the accuracy of our report, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Labor, the Assistant Secretary of Labor for Occupational Safety and Health, and the Director of OMB. We will also make copies available to others upon request. Please contact me or Lori Rectanus at (202) 512-7215 if you or your staff have any questions about this report. Other contacts and staff acknowledgments are listed in appendix II. The following are GAO’s comments on the OSHA letter dated September 26, 2001.

1. We clarified report language concerning the purpose of this study.
2. We modified our recommendation to recognize the importance of state participation in deliberations leading to changes in the funding allocation process.

In addition to those mentioned above, Patrick J. diBattista, Dennis M. Gehley, Julian P. Klazkin, Leslie E. Pastrano, John G. Smale, Jr., and Wayne J. Turowski made key contributions to this report.
Several factors affect employers’ decisions to participate in the Occupational Safety and Health Administration’s (OSHA) consultation program. GAO surveyed industry associations, employee representatives, and participating employers and found that the two main incentives for participation are (1) making the employer’s workplace safer and reducing worker injuries and illnesses and (2) preparing the employer’s workplace for an OSHA inspection. OSHA’s measurement system does not capture enough data to separate the program’s outcomes from the outcomes of OSHA’s other efforts to reduce workplace injuries and illnesses. OSHA’s process for allocating funds to the state consultation programs plays no role in encouraging participating states to achieve agency goals.
Since the introduction of turbojet aircraft in the late 1950s for commercial passenger service, airport-related noise has generated controversy with many surrounding communities and emerged as a constraint on airport development. Airport-related noise emanates primarily from the takeoff and landing of aircraft, but engine maintenance and the taxiing of aircraft on runways are some of the other activities that also contribute to airport-related noise. New technology has been making aircraft quieter, and since 1969 the Federal Aviation Administration (FAA) has been limiting the noise that various aircraft are allowed to make. As a result, FAA estimates that the population exposed to very high noise levels will have declined from 7 million in 1975 to an estimated 600,000 in 2000. But in spite of the recent transition to quieter aircraft, expected growth in air traffic may result in little or no net reduction in overall noise levels generated by individual airports. Furthermore, concerns about airport-related noise may impede the development of any needed additional capacity in the national network of airports. The Congress has recognized the importance of developing a safe and efficient national airport system that meets the nation’s present and future aviation needs. As a result, federally authorized investment in a national airport system, including noise reduction projects, has totaled about $3 billion a year in recent years. FAA has primary responsibility for implementing federal programs addressing noise issues associated with civilian airports. In order to facilitate the development of a safe and efficient national airport system, FAA undertakes several activities that help airports and communities reduce airport-related noise or mitigate its effects. FAA must consult with the Environmental Protection Agency regarding some of its responsibilities. 
FAA’s activities focus on three areas: (1) reducing aircraft-generated noise at its source—the aircraft; (2) changing an airport’s use of its runways and/or implementing different flight operations; and (3) mitigating the effects of existing noise levels on surrounding communities. Airport-related noise can be lowered by reducing the noise that aircraft emit when they take off from and land at airports. New technology allows aircraft manufacturers to design and construct quieter aircraft. For aircraft already in service, noise levels can be reduced by (1) installing quieter engines, (2) installing equipment that reduces the noise of existing engines, and (3) modifying aircraft use and operations in ways that reduce aircraft-generated noise. FAA has actively engaged in efforts to reduce aircraft noise since the 1960s. The agency sets the noise standards aircraft must meet to be certified as airworthy and establishes the regulations that govern the operation of those aircraft at U.S. airports. The Federal Aviation Act of 1958, as amended in 1968, gave FAA the authority to regulate aircraft design and equipment in order to reduce noise. Pursuant to that act, FAA issued regulations in 1969 that established noise standards for new designs of civil subsonic turbojet aircraft. According to an aircraft design expert, the purpose of those noise standards was to ensure that the best available noise reduction technology was used in new aircraft designs. Initially, these regulations prescribed noise standards that applied only to new types or designs of turbojet aircraft (as well as certain propeller aircraft). In 1973, FAA amended its regulations to apply the noise standards to all newly manufactured aircraft, whether or not the aircraft design was new. 
In 1977, additional amendments established lower noise standards for all new aircraft, as well as the concept of noise “stages.” Aircraft meeting the original 1969 standards were categorized as “stage 2” aircraft; those meeting the more stringent 1977 standards were categorized as “stage 3” aircraft; and aircraft meeting neither standard were categorized as “stage 1” aircraft. In addition to establishing noise standards, FAA controls aircraft noise by regulating aircraft operations. In 1976, FAA amended its regulations to prohibit all certificated stage 1 subsonic turbojet aircraft weighing more than 75,000 pounds from flying into or out of U.S. airports after January 1, 1985, unless their engines had been modified or replaced to enable them to meet the stage 2 or stage 3 noise standards. However, the Aviation Safety and Noise Abatement Act of 1979 directed FAA to grant exemptions from compliance until January 1, 1988, to turbojet aircraft with two engines and fewer than 100 passenger seats. In 1990, the Airport Noise and Capacity Act required civil subsonic turbojet aircraft weighing more than 75,000 pounds to comply with stage 3 noise standards by December 31, 1999, or be retired from service. To meet this requirement, the engines on stage 2 aircraft could be modified or replaced. In addition to regulating aircraft-generated noise, FAA supports aviation research related to noise. In particular, FAA is working with the National Aeronautics and Space Administration to develop new technology to reduce aircraft noise. By changing use and/or operations, airports can reduce airport-related noise or mitigate its effects. For example, an airport can restrict noisy aircraft maintenance activities to areas where noise barriers can muffle the sound. Aircraft arrival and departure flight paths, as well as runway use, can be changed to minimize flights over densely populated areas. 
Airports can also mitigate noise impacts by seeking FAA approval to restrict certain aircraft to takeoffs and landings during the day, when their impact on nearby communities is considered less than during the night. FAA is involved in many of these activities. For example, FAA must approve any restriction on an aircraft’s access to an airport or on allowable noise levels if the restriction involves stage 3 aircraft or is beyond those imposed by federal regulations. Thus, if an airport wants to restrict any stage 3 aircraft to daytime operations, it must obtain FAA’s approval. FAA must also approve and implement changes in flight paths. Furthermore, FAA administers airport development funding programs, which can finance the construction of runways and taxiways that enable aircraft to use different takeoff and landing routes to minimize flights traveling over densely populated areas. Mitigation activities can reduce the impact of airport-related noise on the communities surrounding an airport. For example, buildings in nearby communities can be soundproofed and building codes can be changed to require improved sound suppression construction; noise barriers can be constructed; and airports can acquire land to prevent uses that are incompatible with the prevailing noise exposure levels. Communities can also exercise their authority over land use planning to help prevent the future development of land for activities that are noise-sensitive—such as those occurring in residences, schools, churches, and hospitals—in areas exposed to high noise levels. FAA supports mitigation efforts through two programs that provide federally authorized funds for airport projects that mitigate the effects of noise, and one program that encourages airports to identify and address the noise impacts of their airports on nearby communities. 
The Airport Improvement Program (AIP) and the Passenger Facility Charge (PFC) program provide federally authorized funding that, among other purposes, can be used to help mitigate the effects of airport-related noise. The AIP, established by the Airport and Airway Improvement Act of 1982, provides federal grants—funded by congressional appropriations from the Airport and Airway Trust Fund—for developing airport infrastructure, including projects that reduce airport-related noise or mitigate its effects. Airports must provide a “matching share” for AIP-funded projects, ranging from 10 percent to 25 percent of a project’s total cost, depending on the type of project and the size of the airport. Two categories of AIP grants are available—apportionment and discretionary. Apportionment funds are distributed by a statutory formula to commercial service airports according to the number of passengers served and the volume of cargo moved, and to the states according to a percentage of the total amount of the appropriated funds. Discretionary funds—for the most part, those amounts remaining after apportionment funds are allotted and certain other amounts are “set aside” for special categories, including noise-related projects—can generally be awarded for eligible projects at any eligible airport, including general aviation airports, which do not receive apportionment funds. Only airports included in FAA’s National Plan of Integrated Airport Systems are eligible for AIP grants. The National Plan of Integrated Airport Systems identifies those U.S. airports that constitute the national airport system, which is designed to ensure that every part of the country has an effective aviation infrastructure. There are 529 commercial service airports—those that receive apportionment funds—and 2,815 general aviation airports (for a total of 3,344 airports) in the current national plan. 
Furthermore, all projects funded with AIP funds—whether apportionment or discretionary—must be approved by FAA. However, FAA will not approve any grant for any kind of project without written assurances that the airport will take appropriate action, to the extent possible, to restrict the use of land near the airport to uses compatible with airport operations. The AIP funds noise mitigation projects in two ways. First, a specified portion of AIP appropriations is “set aside” by statute specifically for projects that address airport-related noise levels and their effects. Only projects relating to noise may be funded from this set-aside. Table 1 identifies the portions of AIP funds that have historically been set aside for noise. In addition to being eligible for these set-aside funds, projects addressing airport-related noise may compete with other airport development projects for other AIP grants. The second program providing federally authorized funds for mitigating airport-related noise—the PFC program—is a voluntary program that enables airports to impose fees on boarding passengers—known as passenger facility charges—and retain the money for airport infrastructure projects, including noise reduction. Under this program, authorized by the Aviation Safety and Capacity Expansion Act of 1990, commercial service airports may charge boarding passengers a $1, $2, or $3 fee. Airports are not required to impose the fee, but airports wishing to participate in the program must seek FAA’s approval both to levy the fee and to use the revenues for particular development projects. Airlines collect the fees from passengers and transmit them directly to the appropriate airports. FAA officials told us that as long as a project is eligible, meets one of the statutory objectives, and is adequately justified, they do not have the authority to reject an airport’s proposal for the collection or use of passenger facility charges. 
Although the federal government has no jurisdiction over land use decisions (that authority lies with state and local governments), FAA can facilitate compatible land use planning at the state and local level. The Aviation Safety and Noise Abatement Act of 1979 directed FAA to define land uses that it considers compatible or incompatible with the various noise levels that nearby communities are exposed to. The act also directed FAA to administer a new program that encourages airports to develop maps identifying areas in nearby communities where land uses are considered to be incompatible. The program also encourages airports to develop individual airport noise compatibility programs that include those maps and the projects that have been implemented, or planned, to reduce any existing or potential incompatible land uses identified. The act also requires FAA to approve an airport’s noise compatibility program as long as the program (1) does not place an unreasonable burden on interstate or foreign commerce, (2) is reasonably consistent with achieving the goal of reducing incompatible land uses and preventing the introduction of new incompatible land uses, and (3) authorizes needed revisions to the program’s planned projects when noise exposure maps are updated. Programs, except as they relate to flight procedures, are automatically approved if FAA has not acted within 180 days after receipt of the proposed program. Once an airport’s program is approved, the airport can apply for AIP grants to fund the types of projects included in the program that are eligible for federal grants. Through fiscal year 1999, 195 airports had FAA-approved noise compatibility programs, while 212 had approved noise exposure maps. Appendix I describes the process for obtaining FAA approval of the maps and airports’ noise compatibility programs. 
The Subcommittee on Aviation, House Committee on Transportation and Infrastructure, and several Members of the House of Representatives asked us to address four sets of questions about federal programs for airport development and the alleviation of airport-related noise: What kinds of projects that reduce airport-related noise or mitigate its effects are eligible for federally authorized funding, how do FAA’s selection criteria affect which projects are funded, and to what types of projects have the funds been historically distributed? How do major methods for measuring the impact of airport-related noise compare with each other, and what method has FAA selected? What aircraft noise standards apply to civil subsonic turbojets, and why are some civil subsonic jets not required to comply with these and earlier noise standards? What actions has FAA announced under its Land Use Planning Initiative, what is the status of their implementation, and what issues has the Initiative raised? To address the first set of questions—on the eligibility of noise-related projects for federally authorized funding—we (1) reviewed the statutory provisions and FAA’s regulations, policies, and procedures for funding projects under the AIP and the PFC program to identify project eligibility; (2) reviewed the statutory requirements for airport-related noise compatibility programs, as well as FAA’s regulations and processes for implementing those requirements; (3) obtained FAA’s data on federal grants awarded for noise-related projects and passenger facility charges approved for noise-related projects to identify the types of projects and the total project funding by fiscal year for each type of noise-related project. We also interviewed officials from FAA headquarters in Washington, D.C.; the Airports Council International-North America; the Air Transport Association; and the National Association of State Aviation Officials; as well as other experts on these issues. 
In 1999, we independently validated the PFC project database and found it to be very reliable (a 0.3-percent error rate). We did not independently review the validity of the grant program database, but it is the only database for that information, and we have used data from it extensively during the conduct of several reviews that have looked at various aspects of the grant program. To address the second set of questions—on comparing methods that measure airport-related noise—we (1) discussed noise measurement methods with FAA, airport officials, the Airports Council International-North America, the Air Transport Association, the National Association of State Aviation Officials, and the Federal Interagency Committee on Noise, as well as other aviation experts, to identify the kinds of methods being used to measure noise levels and the strengths and weaknesses of these methods; (2) reviewed the major noise measurement methods, as well as written descriptions and analyses of them, to determine how each method measured airport-related noise; and (3) identified the statutory requirements for FAA to select a method for environmental impact and land use analyses and the method that FAA chose. To compare and illustrate the kinds of information produced by each method, we designed a model airport and test scenarios; FAA then conducted noise measurements for us for the test scenarios using its Integrated Noise Model, its computerized program for applying noise measurement methods. The methods that were compared are the Maximum Sound Level and the Sound Exposure Level methods used to measure the noise of a single event, and the Equivalent Sound Level, the Day-Night Sound Level, the Community Noise Equivalent Level, and the Time-Above methods used to measure the levels of noise that nearby communities are exposed to. 
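As an illustration of how the cumulative metrics differ from the single-event ones, the following sketch computes a Day-Night Sound Level from a list of single-event Sound Exposure Levels. It assumes the standard 10-decibel penalty for nighttime events (10 p.m. to 7 a.m.) and a 24-hour averaging period; the function and the example events are illustrative and are not drawn from FAA's Integrated Noise Model.

```python
import math

def day_night_level(events):
    """Compute a Day-Night Sound Level (DNL) from single-event Sound
    Exposure Levels (SELs).  Events between 10 p.m. and 7 a.m. receive
    the standard 10 dB penalty; the sound energy is then averaged over
    a 24-hour (86,400-second) period.  `events` is a list of
    (sel_db, is_night) tuples; the result is in decibels."""
    SECONDS_PER_DAY = 86_400
    energy = sum(10 ** ((sel + (10 if night else 0)) / 10)
                 for sel, night in events)
    return 10 * math.log10(energy / SECONDS_PER_DAY)

# Ten identical daytime flyovers at SEL 95 dB:
day_only = day_night_level([(95.0, False)] * 10)
# The same ten events at night count as if each were 10 dB louder,
# raising the 24-hour average by exactly 10 dB:
night_only = day_night_level([(95.0, True)] * 10)
```

Because the metric averages sound energy rather than decibels, a handful of loud events dominates the result, which is why the same operations concentrated at night can push a neighborhood over the 65-decibel funding threshold discussed later in this report.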
We discussed the reliability of the Integrated Noise Model with FAA officials and found that they had used appropriate methods—including an independent assessment—to ensure the model’s reliability for measuring noise experienced at certain distances from the source. To address the third set of questions—on aircraft noise standards—we reviewed the statutes, policies, and regulations governing noise levels for civil subsonic jets, and we discussed these statutes, policies, and regulations with FAA officials, representatives of the General Aviation Manufacturers Association, the National Business Aviation Association, and the Regional Aviation Association, and other experts. Through interviews and document review, we identified activities under way in the United States, Europe, and the International Civil Aviation Organization to address the issue of a new level of more stringent aircraft noise standards—commonly referred to as “stage 4” noise standards. To determine the number of aircraft weighing less than 75,000 pounds that were not required to meet FAA’s most recent aircraft noise standards, we determined, from FAA’s list of aircraft in the United States that it has certified as airworthy, the number of civil subsonic jets weighing 75,000 pounds or less. To identify the noise standard that those aircraft met, we reviewed FAA documentation identifying noise stages for certain aircraft, Jane’s All The World’s Aircraft, and aircraft manufacturers’ specifications for aircraft types. While we did not test the validity of FAA’s aircraft database, it is the only source for the information we sought. Finally, to address the fourth set of questions—on FAA’s Land Use Planning Initiative—we identified the overall objective of the Initiative, the initial short-term actions that FAA announced in May 1999, and the status of FAA’s implementation of those actions. 
To determine if any issues were raised by the Initiative, we reviewed and analyzed the public comments submitted in response to FAA’s request for comments and suggestions on its land use planning effort under the Initiative, as well as published comments analyzing the Initiative. We also interviewed officials at FAA, the Airports Council International-North America, the National Association of State Aviation Officials, and airport and community officials for Dulles International Airport (a large hub without a completed noise compatibility program) and Manassas Regional Airport (a general aviation airport with an approved noise compatibility program), both in Virginia, and other aviation experts to obtain their views. A panel of five experts reviewed the design and methodology for our work. These experts were selected because of their knowledge about aviation and airport-related noise issues and FAA’s noise programs. A list of the panel members appears in appendix IX. We conducted our review from July 1999 through April 2000 in accordance with generally accepted government auditing standards. Most types of projects to reduce airport-related noise or mitigate its effects on nearby communities are eligible for federally authorized funding through the Airport Improvement Program (AIP) and the Passenger Facility Charge (PFC) program. Under the AIP, however, statutes require that, with a few exceptions, projects be part of an airport’s noise compatibility program. Once an airport applies for AIP funding, FAA sets priorities for projects using two types of project selection criteria before awarding the grants. The PFC program is a more flexible funding source than the AIP, in part because projects do not have to be part of an approved noise compatibility program and because airports set their own priorities, subject to FAA approval. Since the programs began, the majority of funds have been used to acquire land and soundproof buildings. 
The types of noise-related projects eligible for AIP funding include such efforts as developing information to prepare planning and noise compatibility program documents, acquiring land, acquiring air rights or other easements, purchasing noise-monitoring equipment, constructing noise barriers, and soundproofing buildings. The construction or expansion of runways and taxiways, which can reduce noise levels affecting some communities by enabling flights to avoid densely populated areas, is also eligible for AIP funding. There are some statutory restrictions on eligibility. AIP grants may not be approved for land purchases unless the airport provides written assurance that the following conditions will be met: the land will be sold at fair market value as soon as possible once it is no longer needed to help mitigate the effects of noise; an airport will retain a legal interest in the land when it is sold in order to ensure that its use remains compatible; and the government’s share of the cost of purchasing the land will be reimbursed when the land is sold. In addition, federal appropriations law prohibits the use of AIP funds for studies, maps, or environmental impact analyses needed to implement flight procedure changes made to reduce noise. These costs are paid for by other appropriated funds for air traffic control. In addition to these statutory restrictions, FAA policy prohibits using AIP funds for remedial noise mitigation—such as soundproofing buildings—for buildings that were known to be incompatible with prevailing noise exposure levels before they were built. To qualify for AIP funds that are set aside for noise-related projects, an airport must have an FAA-approved noise compatibility program that includes the projects the airport wants funded, except that projects to insulate public buildings used primarily for educational or medical purposes can be funded even though an airport does not have such a program. 
Nevertheless, FAA approval of an airport’s program does not guarantee that the projects in it will receive AIP noise set-aside funding because an airport must apply for AIP funding separately once its program is approved. In addition, AIP funds may pay for projects that mitigate the noise impact of other airport development projects—such as the construction of a new runway—even if the noise-related projects are not included in an approved noise compatibility program. The airport, however, would have to use its AIP apportionment funds for those projects or the projects would have to compete with other airport development projects for AIP discretionary funds. In deciding which eligible projects to fund, FAA sets priorities using (1) its guidance on land use compatibility and (2) a national priority system that comparatively ranks all projects eligible for AIP funding. When awarding AIP funds for projects included in an airport’s noise compatibility program, FAA gives priority to projects located in areas where noise exposure levels are 65 decibels or higher (when measured under a method that assigns greater weight to flights occurring between 10 p.m. and 7 a.m.). Projects are eligible for funding in areas with lower noise exposure levels. However, according to FAA officials, nearly all of the AIP funds set aside for noise-related projects in the past have been awarded for projects where incompatible land uses occur in areas exposed to noise levels of 65 decibels or higher under FAA’s chosen method for measuring community exposure to airport-related noise. FAA also sets priorities for AIP-eligible projects through a national priority system that comparatively ranks all projects, including noise-related projects, in order to identify those projects that most warrant funding. First, FAA applies a formula that assigns projects a numerical score from 0 to 100—the higher the score, the higher the priority. 
The formula ranks projects by assigning points for each of four factors: the project’s purpose (for example, safety, security, capacity, planning, reconstruction), with safety and security projects, for example, receiving more points—higher priority—than projects to develop airport capacity; the size of the airport (for example, large, medium, or small commercial service airports), with projects at larger airports receiving more points than projects at smaller airports; the project’s component (for example, apron, equipment, building, financing), with runway projects, for example, receiving more points than projects for equipment or taxiways; and the project type (for example, noise, by noise exposure level; airport access; construction; de-icing facility; aircraft rescue; or fire-fighting vehicle), with noise-related projects in areas exposed to high noise levels, for example, receiving more points than noise-related projects in areas with lower noise exposure levels. FAA officials then consider other factors—such as benefit-cost analysis, risk assessment, environmental issues, regional priorities, state and metropolitan system plans, airport growth, and market forces—in determining the final ranking of a project. FAA officials have discretion over the relative importance of the formula and other factors in deciding the final ranking of projects. According to an FAA official, projects competing for AIP funds set aside for noise-related projects are ranked on the basis of project type and airport size because the values of the other two factors in the formula are the same for all noise-related projects. As a result, projects in areas with higher noise exposure levels and for larger airports will score higher under the formula than projects in areas with lower noise exposure levels and for smaller airports. 
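To make the mechanics concrete, the formula can be sketched as a simple additive score. Only the four factor categories come from the report; the point values below are hypothetical, since the text does not give FAA's actual point tables.

```python
# Illustrative only: the factor names follow the report, but these
# point values are hypothetical -- FAA's actual point tables are not
# given in the text.
PURPOSE_POINTS = {"safety": 30, "security": 30, "capacity": 20,
                  "planning": 10, "reconstruction": 15}
SIZE_POINTS = {"large": 25, "medium": 20, "small": 10}
COMPONENT_POINTS = {"runway": 25, "apron": 15, "taxiway": 10,
                    "equipment": 10, "building": 5}
TYPE_POINTS = {"noise_high_exposure": 20, "noise_low_exposure": 10,
               "construction": 15, "airport_access": 10}

def priority_score(purpose, size, component, project_type):
    """Sum the four factor scores into a 0-100 priority score, in the
    spirit of the AIP national priority formula."""
    return (PURPOSE_POINTS[purpose] + SIZE_POINTS[size]
            + COMPONENT_POINTS[component] + TYPE_POINTS[project_type])

# A noise project in a high-exposure area at a large airport outranks
# the same project at a small airport, consistent with the report:
large = priority_score("safety", "large", "runway", "noise_high_exposure")
small = priority_score("safety", "small", "runway", "noise_high_exposure")
```

Note that for projects competing within the noise set-aside, only the size and project-type terms vary, which is why (as the report states) those two factors alone determine the ranking there.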
When noise-related projects compete for other AIP discretionary funds, however, all four factors in the formula contribute to determining the project’s comparative ranking. Even if an airport’s project ranks relatively high, it may not be funded in a given year. According to an FAA official, for the past few years FAA has applied an administrative cap that limits the amount of AIP funding awarded to any single airport in one year for noise-related projects. The limit is $5 million for projects included in an airport’s noise compatibility program and $3 million for insulating public buildings used primarily for educational or medical purposes (whether or not the airport participates in the noise compatibility program). According to the FAA official, FAA imposes the limits when the demand for AIP funds set aside specifically for noise-related projects exceeds the amount of AIP funds available. The limits are intended to ensure that all airports that need funding for noise-related projects have access to AIP funds. The FAA official explained that the agency has exceeded the limit for an airport when sufficient funds were available to meet all demand and the airport was able to document its ability to spend more in that year. The official also said that each year FAA reevaluates whether the limits are needed; if the total cost of the noise projects submitted for funding substantially exceeds the money available, the limits will generally remain in effect. The statutes define the noise-related projects eligible under the PFC program as those eligible for AIP funding. Unlike most projects funded with AIP grants set aside for noise-related projects, however, PFC projects do not have to be part of an FAA-approved noise compatibility program. Nevertheless, according to FAA officials, FAA requires airports to demonstrate that the projects will provide noise reduction or mitigation and would qualify for inclusion in a noise compatibility program. 
In addition, unlike AIP funds, PFC funds may be used to pay the financing costs for an approved project and the nonfederal share of projects funded with AIP grants. Airports can set their own priorities, subject to FAA approval, regarding which noise-related projects to fund through the PFC program. More than 75 percent of all AIP funds and over 50 percent of all PFC funds spent on noise reduction or mitigation have been used to acquire land and to soundproof buildings. This is generally true for both large and small airports. In this report, “large” airports are those airports categorized in FAA’s National Plan of Integrated Airport Systems—those airports eligible for AIP grants—as large and medium hub airports. “Small” airports are those categorized as small hub, nonhub, other commercial service, and general aviation airports. Of the nearly $24 billion in AIP grants awarded for fiscal years 1982 through 1999, over $2.7 billion, or 11.5 percent, was awarded for noise-related projects. Of this amount, $1.4 billion (over 50 percent) was used to acquire land for noise mitigation purposes, and $673 million (nearly 25 percent) was used to soundproof buildings. Figure 2 shows the distribution of total AIP funds for noise-related projects by project type for fiscal years 1982 through 1999. Appendix III provides AIP funding data for noise-related projects for each fiscal year, from 1982 through 1999, by project type. About $2.1 billion of the $2.7 billion in AIP noise-related grants went to large airports, and about $582 million went to small airports for noise-related projects. As figure 3 shows, both large and small airports targeted their AIP grants for land acquisition and soundproofing buildings. For fiscal years 1992 through 1999, FAA approved the collection of nearly $24 billion in passenger facility charges, with over $1.6 billion, or 6.9 percent, approved for noise-related projects. 
About $755 million (46 percent) of this funding has been approved for projects that will require multiple phases to complete. These projects consist of one or more different types of projects that are approved together—usually combinations of soundproofing and land acquisition, according to an FAA official. About $481 million (just over 29 percent) has been approved for projects to soundproof buildings, while $378 million (23 percent) has been approved for projects to acquire land. Figure 4 shows the distribution of noise-related projects approved for fiscal years 1992 through 1999, by project type. Appendix IV provides data on the amount of PFC funds approved in each fiscal year, from 1992 through 1999, by project type. Of the $1.6 billion in PFC funds approved for noise-related projects, nearly all was approved for large airports, while about $46 million was approved for small airports. FAA has approved about the same portion of multiple-phase projects for large and small airports, at 46 percent ($735 million) and 45 percent ($21 million), respectively. However, large and small airports differ in their use of PFC funds for other types of projects. For example, large airports had a much larger portion of their funds approved for soundproofing buildings. Figure 5 illustrates the funding pattern by project type for large and small airports. Methods for measuring airport-related noise assess noise either from a single takeoff or landing or from the cumulative average noise that nearby communities are exposed to over time. Required by law to select a single method for measuring the impact of airport-related noise on communities, FAA chose a method that measures community exposure levels and that gives greater weight to the impact of flights occurring during the nighttime. 
While subsequent studies have confirmed that this method best meets the statutory requirement that FAA establish a single system for determining the exposure of people to airport-related noise, a federal interagency committee addressing airport-related noise issues found that supplemental information, such as measures of noise from a single aircraft takeoff or landing, is also useful in explaining the noise that people are likely to hear. In addition, experts and community groups believe FAA’s chosen method provides insufficient information because it does not effectively convey to people what they can actually expect to hear in any given area. To understand the methods used to measure noise, it is necessary to have some understanding of how sound is measured and how it affects humans. Some basic concepts include (1) sound waves and their measurement in decibels, (2) the human ear’s limited ability to hear the entire range of sounds, and (3) noise as a source of interference in people’s activities. First, sound radiates in “waves” from its source and decreases in loudness the further the listener is from the source. As sound radiates from its source, it forms a sphere of sound energy. Sound waves exert sound pressure, commonly called a “sound level” or “noise level,” that is measured in decibels. The higher the number of decibels, the louder the sound appears to someone hearing it. But because decibel levels are measured logarithmically, an increase of only 10 decibels—for example, from 50 decibels to 60 decibels—doubles the loudness that people believe they hear. Continuing the increase from 60 to 70 decibels would again double the perceived loudness of the sound. Which sounds are considered to be noise, however, is subjective. In terms of aircraft noise, sound levels generated by takeoffs or landings vary depending on several factors, particularly the aircraft’s weight and the number of engines. 
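The logarithmic behavior of the decibel scale described above can be checked with a short calculation. The rules of thumb here are standard acoustics—a 10-decibel step roughly doubles perceived loudness while representing ten times the sound energy—with the loudness-doubling figure taken from the text:

```python
def perceived_loudness_ratio(db_increase):
    """Rule of thumb from the report: each 10-decibel increase roughly
    doubles the loudness a listener perceives (ratio = 2 ** (dB / 10))."""
    return 2 ** (db_increase / 10)

def sound_energy_ratio(db_increase):
    """The underlying physical quantity grows faster: each 10-decibel
    increase represents ten times the sound energy (ratio = 10 ** (dB / 10))."""
    return 10 ** (db_increase / 10)

print(perceived_loudness_ratio(10))  # 2.0 -> 50 dB to 60 dB sounds twice as loud
print(perceived_loudness_ratio(20))  # 4.0 -> 60 dB to 70 dB doubles it again
print(sound_energy_ratio(10))        # 10.0 -> but carries ten times the energy
```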
While airport-related noise levels decrease quickly with distance from an airport, the accuracy of noise measurement also decreases because it is more difficult to distinguish between airport-related noise and other noise in the environment. Second, while the human ear can hear a broad range of sounds, it cannot hear all sounds. Sounds with very low pitches (low frequencies) and sounds with extremely high pitches (high frequencies) are generally outside the hearing range of humans. Because of this, environmental noise is usually measured in “A-weighted” decibels. The A-weighted decibel unit focuses on those sounds the human ear hears most clearly and deemphasizes those sounds that humans generally do not hear as clearly. Table 2 illustrates the typical sound levels of some common events, such as a dishwasher on its rinse cycle heard at 10 feet or bird calls outdoors. Finally, the impact of noise on communities is usually analyzed or described in terms of the extent to which it annoys people. Annoyance refers to the degree to which noise interferes with activities such as sleep, relaxation, speech, television, school, and business operations. While it is difficult to predict how an individual might respond to, or be affected by, various sounds or noises, some studies indicate that it is possible to estimate what proportion of a population group will be “highly annoyed” by various sound levels created by transportation activities. The findings of a 1978 study that related transportation noise exposure to annoyance in communities have become the generally accepted model for assessing the effects of long-term noise exposure on communities. According to this study, when sound exposure levels are measured by a method that assigns additional weight to sounds occurring between 10 p.m. and 7 a.m., and those sound levels exceed 65 decibels, individuals report a noticeable increase in annoyance. Methods for measuring airport-related noise provide different kinds of information. 
First, airport-related noise can be measured from single events—such as an individual aircraft’s takeoff or landing—or as the cumulative average level of noise that communities near airports are exposed to over time. Principal methods for measuring cumulative average noise levels identify geographic areas exposed to the same noise levels but apply different weights to flights occurring during different times of the day. The noise from a single takeoff or landing usually starts when the sound can be heard above the background noise; it reaches a maximum sound level and then recedes until the sound is hidden below the background noise level. One of two measures of the noise from a single takeoff or landing is commonly used: (1) the Maximum Sound Level method, which identifies the maximum sound level produced by the event, or (2) the Sound Exposure Level method, which measures the total sound energy that a listener is exposed to during a single event. The Maximum Sound Level method is usually expressed in A-weighted decibels when measuring aircraft events. It does not provide any information, however, about the duration of the event or the amount of sound energy produced. In contrast, the Sound Exposure Level method measures all of the sound energy from the duration of a takeoff or landing to produce the sound level that a person is exposed to from that event. Thus, this method reflects both the intensity and the duration of the sound that the takeoff or landing produces. For aircraft events, this method also usually uses A-weighted decibels. Because this method compresses the event’s total sound energy into a one-second reference period, the sound exposure level for an event that lasts longer than one second will be higher than the maximum sound level for that same event. Also, two events can have the same maximum sound level but different sound exposure levels. 
The event that lasts the longest will have a higher decibel measure than the shorter event, even though both may have the same maximum sound level. To compare the different kinds of information these methods provide, FAA calculated maximum sound levels and sound exposure levels for single aircraft takeoffs and landings using an airport model that we designed. The results illustrate the different concepts embodied in the two measures of single events. Figure 6 illustrates the measures produced by both methods at one-half mile from the runway and at 1-mile intervals from the runway, for both approach and takeoff operations, for the Boeing 747 and C140 aircraft included in our model. Similar figures for the four other aircraft in our model appear in appendix VI. Measuring the noise from a single takeoff or landing, however, does not capture how the impact of several takeoffs or landings compares with the impact of just one aircraft operation. According to FAA officials, although some research correlates the health and welfare effects of noise generated by certain kinds of single events, the Federal Interagency Committee on Noise pointed out in 1992 that there is no accepted methodology for aggregating the information on the noise levels of single events in a way that would explain the cumulative impact of those events on people in the communities surrounding airports. Thus, by themselves, methods to measure the noise from single events are not considered to describe the overall noise environment. The level of noise from airports that nearby communities are exposed to depends on several factors, including the types of aircraft using the airport, the overall number of takeoffs and landings, the time of day those aircraft operations occur, the runways that are used, weather conditions, and airport-specific flight procedures that affect the noise produced by a takeoff or landing. 
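The distinction between the Maximum Sound Level and the Sound Exposure Level can be sketched numerically. The event time histories below are invented for illustration; the energy-summation formula is the standard definition of a sound exposure level (total A-weighted energy referenced to one second):

```python
import math

def sound_exposure_level(levels_db, dt=1.0):
    """Sound Exposure Level (SEL): the event's total A-weighted sound energy
    referenced to one second, so duration counts as well as intensity."""
    energy = sum(10 ** (level / 10) * dt for level in levels_db)
    return 10 * math.log10(energy)

# Invented one-second samples: two events with the same 90 dB maximum level.
short_event = [80, 90, 80]          # 3 seconds long
long_event = [80, 90, 90, 90, 80]   # 5 seconds long

print(max(short_event), round(sound_exposure_level(short_event), 1))  # 90 90.8
print(max(long_event), round(sound_exposure_level(long_event), 1))    # 90 95.1
```

As the report notes, both events share the same maximum sound level, but the longer event has the higher sound exposure level, and both exposure levels exceed the maximum level because the events last longer than one second.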
There are two approaches to measuring community exposure to noise: (1) identifying geographic areas on a map that are exposed to the same noise levels or (2) determining the length of time that a specific geographic area is exposed to particular noise levels. The three main methods for measuring airport-related noise levels that nearby communities are exposed to include (1) the Equivalent Sound Level method; (2) the Day-Night Sound Level method; and (3) the Community Noise Equivalent Level method. These methods provide long-term, or cumulative, measures of exposure to noise. For each method, the key factors that determine the noise exposure level affecting a community are the types of aircraft using the airport, the number and type of engines on an aircraft, the number of takeoffs and landings that occur during an average day, and the time of day during which those aircraft operations occur. The measures are generally presented in the form of “noise contours” on maps—lines around an airport that connect all the areas exposed to the same average sound level. A series of contours are drawn, usually at 5-decibel decrements from the airport, to produce a map that looks similar to a land elevation map. All three methods incorporate both the intensity of sounds produced by single events and the average frequency of those events. The first method—the Equivalent Sound Level—measures the average noise level over a specified time using A-weighted decibels. Because the method is based on a logarithmic average, it gives greater weight to higher noise levels than to lower ones. For example, if sound is measured at 50 decibels for a half hour and 100 decibels for a half hour, the Equivalent Sound Level measure for the entire hour is 97 decibels, not the 75 that would result from simple averaging. Any time period can be used, with typical time periods being 1 hour, or 1 day (24 hours). 
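The report’s Equivalent Sound Level example can be reproduced with the standard energy-averaging formula:

```python
import math

def equivalent_sound_level(levels_db, durations):
    """Equivalent Sound Level (Leq): an energy average over a period, so
    louder intervals dominate the result."""
    total_time = sum(durations)
    energy = sum(t * 10 ** (level / 10) for level, t in zip(levels_db, durations))
    return 10 * math.log10(energy / total_time)

# The example from the text: 50 dB for half an hour, then 100 dB for half an hour.
print(round(equivalent_sound_level([50, 100], [0.5, 0.5])))  # 97, not 75
```

The 100-decibel half hour carries 100,000 times the sound energy of the 50-decibel half hour, which is why the energy average lands just below 100 rather than at the arithmetic midpoint.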
Under this method, all flights are weighted equally regardless of when they occur during the day. The second method—the Day-Night Sound Level—is the same as the Equivalent Sound Level method for a 24-hour period, but it gives greater weight to flights occurring during the nighttime—between 10 p.m. and 7 a.m. Additional weight is given to nighttime flights because they are more likely to interrupt sleep, relaxation, or other activities and because the background noise level during those hours is lower. To reflect that greater impact, the Day-Night Sound Level method equates 1 nighttime aircraft operation to 10 equivalent daytime operations. This effectively adds 10 decibels to the noise produced by each takeoff or landing that occurs during those nighttime hours. That is, the noise impact of each single nighttime takeoff or landing is reflected in the noise exposure level as if it were 10 daytime takeoffs or landings. For example, if eight takeoffs and eight landings occur between 7 a.m. and 10 p.m., they are reflected in the noise exposure level as 16 aircraft operations. If those same eight takeoffs and eight landings all occur between 10 p.m. and 7 a.m., they are reflected in the noise exposure levels as the equivalent of 160 aircraft operations. Finally, the Community Noise Equivalent Level modifies the Day-Night Sound Level method by adding additional weight to flights occurring between the evening hours of 7 p.m. and 10 p.m. to account for an assumption that greater interference with activities may be occurring during the early evening than during the daytime. The second and third methods are considered to have only small differences. Under each of the three methods, several different combinations of flights can produce the same noise exposure level because factors such as the total number of flights and the type of aircraft affect the noise exposure levels. 
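The nighttime weighting described above reduces to a simple multiplier on the operation counts; a minimal sketch:

```python
def weighted_operations(day_ops, night_ops):
    """Day-Night Sound Level weighting: each operation between 10 p.m. and
    7 a.m. counts as 10 daytime operations (a 10-decibel penalty)."""
    return day_ops + 10 * night_ops

# The example from the text: 8 takeoffs and 8 landings (16 operations).
print(weighted_operations(day_ops=16, night_ops=0))  # 16 if all occur by day
print(weighted_operations(day_ops=0, night_ops=16))  # 160 if all occur at night
```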
For example, each of the following three scenarios will produce the same 65-decibel noise exposure level under the Day-Night Sound Level method: 500 aircraft operations with an average sound exposure level of 87.4 decibels; 100 aircraft operations with an average sound exposure level of 94.4 decibels; or 50 aircraft operations with an average sound exposure level of 97.4 decibels. Because different combinations of flights can produce the same noise exposure level, and because these methods use additional weighting for evening and/or nighttime flights, FAA does not consider these methods to be good estimators of the noise level produced by a single event. We compared the noise contours produced by these three methods at various decibel levels using our airport model. As figure 7 illustrates, the Equivalent Sound Level method, which does not add weighting to evening or nighttime flights, produced not only the smallest areas exposed to various noise levels but also markedly smaller areas than the other two methods, which include the effects of additional weighting. The noise contours produced by the Day-Night Sound Level method identified areas that ranged from about 2½ times to 3½ times as large as the areas exposed to the same noise levels under the Equivalent Sound Level method. On the other hand, the size of the areas exposed to the same noise levels was almost identical under the Day-Night Sound Level method and the Community Noise Equivalent Level method. The latter produced a 5 percent or less increase in the size of those areas. At our request, FAA also used our airport model to examine the results from the different measurement methods when (1) flights were shifted by time of day and (2) more aircraft operations were added. In the first scenario, our model illustrated the effect of assigning additional weight to flights occurring during different times of the day. In this scenario, FAA calculated the noise exposure levels for seven different flight schedules. 
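The equivalence among such scenarios follows from the standard textbook form of the Day-Night Sound Level for a set of identical events, DNL = SEL + 10·log10(weighted operations) − 10·log10(86,400); only the pairing of 50 daytime operations at a 97.4-decibel average sound exposure level with a 65-decibel result comes from the report:

```python
import math

SECONDS_PER_DAY = 86400

def day_night_level(avg_sel_db, day_ops, night_ops=0):
    """Day-Night Sound Level (DNL) for identical events, using the standard
    formula; each nighttime operation carries the 10-decibel penalty."""
    weighted_ops = day_ops + 10 * night_ops
    return (avg_sel_db + 10 * math.log10(weighted_ops)
            - 10 * math.log10(SECONDS_PER_DAY))

# The report's figure: 50 daytime operations averaging a 97.4 dB sound
# exposure level produce a 65 dB Day-Night Sound Level.
print(round(day_night_level(97.4, day_ops=50)))  # 65

# Doubling the operations while trimming ~3 dB per event leaves DNL unchanged.
print(round(day_night_level(97.4 - 10 * math.log10(2), day_ops=100)))  # 65
```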
All three methods produced exactly the same contours when all flights occurred during the day because no method applies additional weighting to daytime flights. However, when all flights occurred during the nighttime, both the Day-Night Sound Level and the Community Noise Equivalent Level methods produced contours that quadrupled the size of the areas exposed to the different noise levels. Table 3 illustrates the impact of changing flight schedules. In the second scenario, to understand how the number of aircraft operations at an airport can affect the noise contours, we looked at the results under each of the three methods for seven cases in which the total number of takeoffs and landings was increased at various increments. The results showed that increasing the number of operations produced a consistent increase in the size of the exposure area at each noise level under each method. That is, the greater the number of operations, the further out each exposure level contour extended from the airport under each method. Consistent with the results illustrated in figure 7, the total area affected by the Equivalent Sound Level method under each scenario was noticeably smaller than that of the other two methods. Also, the size of the areas exposed to each noise level under the Community Noise Equivalent Level method, for each level of operations tested, was less than 5 percent greater than the area affected by the Day-Night Sound Level method. Two other measurement methods can provide additional kinds of information about the noise exposure of a community. The Time-Above method can identify how much time during a designated time period—such as a day—the noise exposure levels will exceed a specified decibel level. The sound level must be specified—for example, 60 decibels. This method can then determine the length of time during a 24-hour period that noise levels will exceed 60 decibels. 
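A minimal sketch of the Time-Above calculation, together with the related Lpercent measure discussed below; the hour of level samples is invented for illustration:

```python
def time_above(levels_db, threshold_db, sample_seconds=60):
    """Time-Above: total seconds during the period that the sound level
    exceeds the chosen threshold."""
    return sum(sample_seconds for level in levels_db if level > threshold_db)

def l_percent(levels_db, percent):
    """Lpercent: the level exceeded for the given percentage of the period,
    approximated here from the sorted samples."""
    ordered = sorted(levels_db, reverse=True)
    index = min(int(len(ordered) * percent / 100), len(ordered) - 1)
    return ordered[index]

# Invented minute-by-minute levels for one hour near a runway:
# 40 quiet minutes, 15 moderate minutes, and 5 loud minutes.
hour = [55] * 40 + [70] * 15 + [85] * 5
print(time_above(hour, 60))  # 1200 seconds (20 minutes) above 60 dB
print(l_percent(hour, 15))   # 70 -> the 15th-percentile-from-the-top level
```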
To illustrate the Time-Above method, our model produced data on how many minutes in a 24-hour day the noise levels would be above 60 and 80 decibels at points one-half mile from each end of the runway and at 1-mile increments from the runway for both approach and takeoff operations. Table 4 illustrates the measures. Another variation of this kind of information is the Lpercent method, which identifies the noise level exceeded for a portion of a time period. The portion must be specified—for example, only 15 percent of a day. This approach might determine, then, that for 15 percent of the day, the noise level exceeded 60 decibels—that is, for the rest of the day the noise level was at or below 60 decibels. FAA’s Integrated Noise Model does not produce measures using this method. Neither the Time-Above method nor the Lpercent method identifies the time of day the higher noise levels will occur. The Aviation Safety and Noise Abatement Act of 1979 required the Department of Transportation—after consultation with the Environmental Protection Agency—to establish, by regulation, a single system for measuring noise from airports and surrounding areas. The act also required the Secretary to establish a single method for measuring the exposure of individuals to noise resulting from airport operations; that method had to consider noise intensity, duration, frequency, and the time of occurrence. According to a Senate committee report, the act was intended to establish a uniform approach for measuring airport-related noise in order to facilitate the administration of a federal noise abatement program that could, in turn, lead to a uniform approach for dealing with noise problems in general. Pursuant to that directive, in 1981, FAA selected the A-weighted decibel and the Day-Night Sound Level method for measuring airport-related noise. In 1992, the Federal Interagency Committee on Noise noted that the Day-Night Sound Level method was practical and widely accepted. 
After a comprehensive review of measurement approaches, the interagency committee determined that this method best met the statutory requirements. The committee concluded that there were no other measurement methods of sufficient scientific standing to replace this method as the primary cumulative noise exposure measurement method and that the method correlates well with analyses of community annoyance at various noise exposure levels. The committee also noted that there were no new data to justify a change in the use of extra weighting for nighttime operations. These conclusions are still valid, according to the chairman of the Federal Interagency Committee on Aviation Noise (the successor to the Federal Interagency Committee on Noise), which focuses on aviation research related to noise. A frequent criticism leveled against the Day-Night Sound Level method is that it does not effectively convey to people what they can actually expect to hear in any given area, primarily because it does not identify the noise levels generated by single aircraft takeoffs or landings. The noise level produced by the Day-Night Sound Level method is not the noise level that people actually hear on an event-by-event basis—it is an average of the cumulative sound levels over time. To address this concern, the 1992 interagency committee report noted that supplemental information—particularly information on noise generated by individual takeoffs and landings—has been, and could continue to be, useful, especially in characterizing specific events and in conveying a clearer understanding of the potential effects of noise on people living and working in the area. The interagency committee recommended that federal agencies continue to be allowed to use supplemental information at their discretion when dealing with environmental impact analyses and requirements. 
An official of the interagency committee noted, however, that while single-event information is useful as a supplement, there is no methodology for aggregating the effects of a single event into a cumulative impact analysis, as is the case with the Day-Night Sound Level method. Because the interagency committee reiterated the usefulness of the Day-Night Sound Level method, all federal agencies have adopted it for analyzing airport-related noise in their environmental assessments and impact statements. Some agencies, however, such as the Department of Defense, use supplemental noise information, such as single-event noise measures, to provide a fuller picture of noise conditions and their potential effects. A proposed revision to FAA’s requirements for environmental analyses states that FAA will also use supplemental information where warranted. The revision adds new guidance on the kinds of supplemental information available and their use. FAA establishes the standards limiting the noise that civil subsonic turbojet aircraft are permitted to generate. Those standards are generally based on an aircraft’s weight and the number of engines and generally allow heavier aircraft to generate more noise than lighter aircraft. The statutory deadline of December 31, 1999, for compliance with “stage 3” standards did not apply to aircraft weighing 75,000 pounds or less that were already in operation. As of October 1, 1999, more than 2,750 aircraft were not subject to the stage 3 compliance deadline. FAA regulations establish the maximum noise levels that civil subsonic turbojet aircraft are allowed to generate for takeoff, landing, and “sideline” measurements. The standards for each of these kinds of measurements are different, but, in general, these standards vary with the weight of the aircraft. 
The standards allow heavier aircraft to be noisier than lighter aircraft because, according to FAA, the noise generated by an aircraft is generally determined by the thrust powering the aircraft; the amount of thrust an aircraft needs is proportional to the weight of the plane—that is, the heavier the aircraft, the more thrust it needs. According to an aircraft noise expert, the lower noise standards for lighter aircraft are one of the reasons that a stage 2 aircraft weighing 75,000 pounds or less may make less noise than a heavier aircraft that meets the more stringent stage 3 standards. For takeoff, stage 3 noise standards also vary on the basis of the number of engines; generally, the more engines an aircraft design has, the higher the permitted takeoff noise levels. Stage 3 standards for takeoff, sideline, and approach are shown in appendix VII. The United States is a member of the International Civil Aviation Organization—the international authority on civil aviation standards—and as such participates in that organization’s activities regarding aircraft noise standards. Members of the organization are considering more stringent noise standards. The organization’s Committee on Aviation Environmental Protection is reviewing several options, identified by its Noise Scenarios Group in a November 1999 report, including (1) taking no action on more stringent standards, (2) adopting a standard only for new aircraft designs, or (3) adopting more stringent standards with various schedules for the phaseout of noisier aircraft. Guidance governing the Committee’s work directs it to consider such factors as technical feasibility, economic reasonableness, and the environmental benefit to be achieved. When it meets in September 2001, the organization is expected to adopt a resolution on a more stringent standard and the phaseout of stage 3 aircraft. Implementation of the new standard, and the phaseout of the noisier aircraft, would be up to the member nations. 
The European Union has banned, after May 1, 2000, stage 2 aircraft that were modified to meet stage 3 noise standards, unless the aircraft were already operating or registered in a member country before that date. The European Union also adopted restrictions on operating modified aircraft after April 1, 2002. The United States filed a formal complaint with the International Civil Aviation Organization on March 14, 2000, alleging that the European Union’s ban discriminates against U.S. aircraft in violation of the agreement establishing the organization. Both stage 1 and stage 2 aircraft that did not meet more stringent noise standards by specified dates have been prohibited from operating after those deadlines, but that prohibition does not apply to aircraft in service that weigh 75,000 pounds or less. FAA did not require the retirement of the lighter stage 1 aircraft that did not meet stage 2 standards because FAA concluded it was not technologically practicable or economically reasonable to modify these aircraft. The statute prohibiting the operation of stage 2 aircraft that did not meet stage 3 standards by a certain date does not apply to aircraft weighing 75,000 pounds or less. When FAA amends regulations controlling aircraft noise, it must consider several factors, including whether the proposed regulations are technologically practicable, economically reasonable, and appropriate for the types of aircraft, aircraft engines, or aircraft certifications that the regulations apply to. FAA must also consider the extent to which any proposed amendments protect the public health and welfare. In 1976, FAA considered amending its regulations to require stage 1 aircraft already in service to meet stage 2 noise standards or be prohibited from operating at U.S. airports. At that time, the Environmental Protection Agency recommended that the deadline for compliance be applied to all civil subsonic turbojet aircraft regardless of weight. 
That agency contended that all of those aircraft were capable of meeting stage 2 standards by using various engine modifications or replacement options. It determined that because all newly produced aircraft weighing 75,000 pounds or less had to comply with stage 2 noise standards after January 1, 1975, there seemed to be no valid justification for permitting stage 1 aircraft to operate indefinitely. While some who commented on FAA’s proposed amendment supported the Environmental Protection Agency’s conclusion, others challenged it, contending, for example, that (1) the technology was not available to enable lighter aircraft to meet the stage 2 noise standards or (2) other sources, such as heavier aircraft or traffic from regularly scheduled passenger service flights, were the primary causes of the noise problems. FAA chose not to apply the operating deadline for stage 1 aircraft to aircraft weighing 75,000 pounds or less. FAA concluded that it could not impose operating noise limits on the lighter aircraft at that time in a manner that was fully consistent with its obligations under the law for two reasons. First, FAA determined that the cost-effectiveness of implementing the kinds of modifications needed to retrofit an existing aircraft was questionable and, therefore, not technologically practicable. It concluded that noise reduction modifications to the lighter aircraft could be applied during the original design and manufacture of an aircraft, but such modifications involved substantial redesign efforts that, while reasonable when spread over the production process, were of doubtful cost-effectiveness if accomplished by retrofitting. FAA considered only retrofitting options—engine modification or replacement—as acceptable for meeting noise standards; flight operation noise abatement procedures were not an acceptable means for complying with the noise standards. 
Second, FAA determined that available information was not sufficient to assess the economic impact on owners of an across-the-board requirement to retrofit the lighter aircraft. Available information was limited because the aircraft were so varied in their use and mission and were frequently the only—or one of a few—aircraft in an owner’s fleet. In addition, FAA determined that the availability of supplies for small engine manufacturers needed further study before FAA could assess the overall economic impact of specific compliance dates on aircraft owners. In December 1997, however, the National Business Aviation Association, a membership organization of companies that operate aircraft, passed a resolution calling for the group’s 5,200 members to refrain from adding new stage 1 aircraft to their fleets beginning in January 2000 and to end the operation of stage 1 aircraft by 2005. The Airport Noise and Capacity Act of 1990 established December 31, 1999, as the deadline for phasing out stage 2 aircraft that were not modified to meet stage 3 noise standards. The statute, however, specifically applied the phaseout only to aircraft weighing more than 75,000 pounds. The legislative history of the act provides no discussion on why the statutory phaseout was not applied to the lighter aircraft. As of October 1, 1999, just over 9,000 civil subsonic turbojet aircraft that weighed 75,000 pounds or less were certified by FAA as airworthy. About 31 percent of those, or just over 2,770, are stage 1 or stage 2 aircraft that may still operate at U.S. airports after December 31, 1999. The 1990 act, however, also established federal review requirements when an airport wants to control noise by imposing more stringent limitations on aircraft’s use of the airport than federal regulations provide. The act directed the Secretary of Transportation to establish a national program for reviewing airport restrictions on the operation of stage 2 and stage 3 aircraft. 
It also required the Secretary to study whether federal review should be applied to restrictions on stage 2 aircraft weighing less than 75,000 pounds. The study recommended that the same procedures should apply to all stage 2 aircraft, regardless of weight. FAA adopted that recommendation. Thus, an airport may impose a noise or access restriction on stage 2 aircraft, regardless of the aircraft’s weight, if the airport operator publishes the proposed restriction and prepares and makes certain analyses available for public comment at least 180 days before the effective date of the restriction. Unlike noise or access restrictions proposed for stage 3 aircraft, these stage 2 restrictions do not require FAA approval. Land use planning is one way that communities can alleviate the impact of airport-related noise in areas near airports. While the federal government has no decision-making authority in land use planning, FAA does have some responsibility to address land use issues in connection with its administration of airport-related noise programs. For example, as required by law, FAA has identified the kinds of land uses that are compatible with various noise levels communities may be exposed to because of a nearby airport. Looking to the future, FAA has announced five short-term actions under its Land Use Planning Initiative, which it launched to help prevent incompatible land uses. Reviewing the comments provided by the aviation sector and the general public, we identified four principal areas of concern associated with the initiative. Through land use planning, communities determine what kinds of development—for example, residential or industrial—will occur within their jurisdictions. Communities can use such land use planning to reduce or alleviate the impact of airport-related noise. For example, communities may prohibit the construction of schools within a certain distance from an airport so that airport-related noise will not interrupt classes. 
While the federal government has no direct decision-making authority over land use planning, FAA can nevertheless help communities consider the impact of nearby airports as they develop their plans. For example, the Aviation Safety and Noise Abatement Act of 1979 requires FAA to identify land uses that would not be compatible with noise generated by the operation of a nearby airport. As a result, FAA identified some land uses, such as homes and schools, as being incompatible with noise exposure levels of 65 decibels or higher (using the Day-Night Sound Level method) that occur very close to an airport, while other land uses, such as industrial and commercial uses, could successfully be located close to an airport without interfering with activity. Although FAA can provide land use planning guidance, it is up to states and local communities to apply this guidance. The recent transition to quieter aircraft can lower noise exposure levels in some communities, but FAA has been concerned that noise levels may rise again around some airports if the number of flights increases to meet the expected growth in passenger levels. According to an FAA official, even where noise levels do not rise, maintaining a buffer zone between the airport and certain land uses, such as homes and schools, serves a general interest in maintaining a quieter environment. Because of its concerns, FAA embarked on a Land Use Planning Initiative to help state and local governments achieve and maintain compatible land uses around airports. Under this Initiative, in January 1995, FAA sponsored a Study Group on Compatible Land Use, which was composed of community, airport, and aviation representatives. This group recommended federal actions that could promote compatible land use planning around airports. In May 1998, FAA issued a request in the Federal Register for additional suggestions to help state and local governments’ planning efforts. 
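The compatibility rule described above can be sketched as a simple threshold test. This is purely an illustration, not FAA's compatibility table: the land-use names are the examples from the text, and FAA's actual guidance distinguishes many more use categories and noise bands.

```python
# Illustrative sketch of FAA's land use compatibility guidance: noise-sensitive
# uses (e.g., homes and schools) are treated as incompatible at Day-Night Sound
# Level exposures of 65 decibels or higher; industrial and commercial uses are
# treated as compatible in this simplified example.

NOISE_SENSITIVE_USES = {"home", "school"}

def compatible(land_use, dnl_db):
    """Return True if the land use is compatible with the given DNL exposure."""
    if land_use in NOISE_SENSITIVE_USES:
        return dnl_db < 65
    return True  # simplified: other uses treated as compatible here

assert not compatible("school", 70)   # incompatible close to the airport
assert compatible("industrial", 70)   # can be located close to the airport
```

In practice a community would apply such a rule zone by zone when deciding, for example, whether to permit school construction near an airport.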
After reviewing the submissions, FAA announced in May 1999 that it would implement five short-term actions while it continued its review of other suggestions. FAA expects to announce additional actions in the future on training, education, satellite navigation, research and development, and proposed legislation. The five short-term actions that FAA announced in May 1999 focus primarily on improving the communication of its noise policies and noise compatibility information in order to help communities and airports work together to minimize the noise impacts of airports. Table 5 provides an overview of each action, the FAA office responsible for implementation, and the implementation status of each action. The implementation goal for these short-term actions was originally September 30, 1999. FAA has completed implementation of two of these actions. To establish the information clearinghouse, FAA created an Internet website. To provide a clearer understanding of its actions addressing certain noise exposure situations, FAA issued revisions in June 1999 to its order that provides guidance on conducting environmental impact analyses for airports. The November implementation goal for the remaining three actions was delayed until March 31, 2000, primarily because FAA was reorganizing its Office of Environment and Energy, which is responsible for the Land Use Planning Initiative. As of April 26, 2000, an FAA official expected the remaining actions to be implemented by May 2000. The clearinghouse that FAA established on land use information can be accessed at www.faa.gov/arp/app600/5054a/landuse.htm. According to FAA officials, this website will become the primary means for distributing information made available by some of these short-term actions—including the information packages—and any additional actions approved in the future. 
The website has links to information on Washington State’s website for its land use planning program and will eventually link to other states that have similar websites. It also incorporates links to websites for land use planning associations, periodicals, and legal planning specialists. FAA plans to add information and/or links as warranted. FAA stated that the objectives of its fifth action include (1) providing greater focus on the use of flight procedures to mitigate the effects of noise over certain areas and (2) emphasizing consultations with airports and communities. FAA’s overall goal is to clarify the actions it might take to address rising noise exposure levels. FAA’s revised guidance, however, does not appear to achieve its objective of providing greater focus on the use of flight procedures because the revisions contain no explicit discussion of the use of flight procedures to mitigate the effects of noise over certain areas. Furthermore, this lack of discussion contrasts with the detailed description FAA provides to incorporate other changes to that same order, including changes pending that pertain to the use of supplemental information in environmental impact analyses. In reviewing the public comments on the Initiative and from our discussions with aviation officials and other experts, we identified four principal areas of concern associated with the Initiative. These areas involve determining (1) what is the most effective use of the agency’s limited resources when addressing airport-related noise, (2) whether the 65 decibel level defining incompatible land uses should be lowered, (3) whether additional information, such as single event noise levels, should be required when analyzing noise impacts, and (4) what is the best use of federally authorized investment in the growth of airport capacity in view of the noise and physical expansion constraints affecting many airports. Table 6 summarizes the context and scope of these issues. 
Through its responsibilities for aviation noise, FAA plays a critical role in helping to reduce the noise that airports generate and to mitigate the effects of that noise on surrounding communities. While FAA has accomplished much in fulfilling its statutory responsibilities, the issues raised in connection with FAA’s Land Use Planning Initiative are not necessarily new and show that more work remains to be done on resolving controversies regarding airport-related noise. Addressing these issues will require balancing the needs of the different—and often conflicting—interests of airports, airlines, manufacturers, passengers, general aviation, and the communities near the airports. Resolution of these issues will also need to take into account concerns about the environment, as well as advances in technology. We provided the Department of Transportation, the National Association of State Aviation Officials, an advisory panel of five experts, the Airports Council International-North America, the General Aviation Manufacturers Association, and the Air Transport Association of America, Inc. with copies of the draft report for their review and comment. We met with officials from the Department of Transportation, including FAA’s Manager, Community and Environmental Needs Division, and spoke with FAA’s Manager, Noise Division. These officials generally agreed with the facts in the report and provided clarifying comments, which we incorporated as appropriate. The National Association of State Aviation Officials and the advisory panel of experts generally agreed with the facts in the report and provided us with technical and clarifying comments, which we incorporated as appropriate. The Airports Council International-North America provided no comments. We spoke with the President of the General Aviation Manufacturers Association, who stated that the report reflects a good effort to make a difficult topic understandable. 
However, he said the Association had three concerns about the accuracy of the presentation. The Association believes the draft report (1) implied that aircraft not subject to phased compliance with operating noise limits were not subject to any noise standards, when in fact, all aircraft manufactured after December 31, 1974, must meet stage 3 noise standards; (2) did not explain that the exception of lighter aircraft from compliance with stage 3 operating noise limits was consistent with international operating rules developed by the International Civil Aviation Organization; and (3) overestimated how many aircraft weighing 75,000 pounds or less still operate in the United States. The Association further believed the general aviation aircraft selected for our airport model were not representative of the operating fleet. With regard to the Association’s first concern, we believe the draft report accurately explained the progressive application of noise standards to aircraft. However, we revised it to clarify the distinction between noise standards for the certification of aircraft as airworthy and the application of those standards to operating aircraft. Regarding the second concern, this report focuses on FAA’s roles and responsibilities rather than on international activities. Nevertheless, we revised the draft report to clarify that the United States is a member of the International Civil Aviation Organization and as such participates in that organization’s activities regarding aircraft noise standards. Concerning the final issue, data in our draft report on the number of aircraft weighing 75,000 pounds or less include all such aircraft certificated by FAA as airworthy as of October 1, 1999. In contrast, data provided by the Association include only the operating business fleet, which is a subset of FAA’s list of certificated aircraft. 
With regard to our selection of aircraft for the model, we began with the universe of certificated aircraft and selected two general aviation aircraft from this list, as well as four others, to reflect both stage 2 and stage 3 aircraft, and lighter and heavier aircraft. We revised the draft report to clarify that we selected aircraft from the list of certificated aircraft. We met with officials from the Air Transport Association of America, Inc., who stated that the draft report was generally very good, but who expressed five concerns. They believe the draft report (1) did not fully recognize, in its discussion of the potential impact of growth in air traffic, the significant progress that the Congress, FAA, airports, and the airlines have made in reducing the number of people exposed to noise from aircraft, nor did it recognize that aircraft used to achieve additional growth may be quieter; (2) did not fully reflect the role of international agreements and obligations related to noise control; (3) was overly broad in its discussion of flight procedures for abating noise when explaining why FAA did not require aircraft weighing 75,000 pounds or less to be retired if they did not meet stage 2 standards; (4) included only two aircraft in the airport model, one of which is no longer being produced, and did not address current production aircraft; and (5) did not fully reflect the relationship and potential trade-offs between noise stringency standards and aircraft emissions. The Association also provided technical and clarifying comments, which we incorporated as appropriate. With regard to the first concern, we agree that the aviation industry and the federal government have made substantial progress in reducing noise generated by airports. However, forecast growth in aviation activity could reduce or eliminate the benefits at individual airports. 
If current aircraft are replaced with quieter aircraft, the impact of the quieter aircraft on airport-related noise will depend on several factors including the extent to which aircraft operations increase and when operations occur. We revised the draft report to clarify these points. With regard to the second issue, we agree that the international administrative and regulatory framework for developing and implementing aircraft noise standards is important for the aviation industry. However, this report focuses on FAA’s role in major noise-related programs rather than on international activities. Nevertheless, we revised the draft report to clarify that the United States is a member of the International Civil Aviation Organization and as such participates in that organization’s activities regarding aircraft noise standards. Regarding the third concern, our draft report provided FAA’s rationale for not applying a retirement deadline to stage 1 aircraft weighing 75,000 pounds or less. As noted in the report, FAA did not consider flight operations to be an appropriate operational noise abatement procedure for the purpose of meeting aircraft noise standards. As also noted, however, FAA did consider flight operations to be appropriate for further reducing noise where circumstances warrant. Accordingly, we did not revise this discussion in our draft report. With regard to the fourth concern, the Association incorrectly concluded that the airport model included only two aircraft. As appendix V of the report explains, the model was designed to provide a reasonable facsimile of an airport for use in comparing and illustrating the various noise measurement methods. Six aircraft were selected from FAA’s list of certificated aircraft to represent categories of aircraft operations. Aircraft selection was not intended to include only those aircraft currently in production because that would have eliminated stage 2 aircraft from the model. 
With regard to the final concern, we revised the draft report to acknowledge that reducing aircraft noise may result in higher aircraft emissions.
Pursuant to a congressional request, GAO provided information on airport-related noise, focusing on the: (1) types of projects that are eligible for federally authorized funding to reduce airport-related noise or mitigate its effects; (2) differences in the major methods for measuring the impact of airport-related noise; (3) Federal Aviation Administration's (FAA) noise standards for civil subsonic turbojets and the reasons some of those aircraft are not required to comply with these or earlier standards; and (4) status of FAA's Land Use Planning Initiative and the major issues the initiative has raised about how best to address airport-related noise. GAO noted that: (1) most projects that reduce airport-related noise or mitigate its impact are eligible for federally authorized funding; (2) to be considered for funding under the Airport Improvement Program (AIP), a project must be part of a FAA-approved noise compatibility program; (3) in selecting which noise-related projects to fund, FAA gives priority to projects affecting communities exposed to noise levels of 65 decibels or higher, as determined by FAA's chosen measurement method; (4) in contrast to projects funded by AIP, projects funded by the Passenger Facility Charge program do not have to be part of a noise compatibility program; (5) since the programs began, 75 percent of the grants and over 50 percent of the passenger fees approved for noise-related projects have been used to acquire land and soundproof homes and other buildings; (6) the three principal methods for measuring community exposure are mathematical calculations that differ in the impact each places on noise from flights that occur during different times of the day: (a) one method treats the impact of all flights equally whenever they occur; (b) the second method differs from the first by assigning greater impact to the noise from each flight that occurs during the nighttime than to flights that occur during other times; and (c) the third 
method assigns additional impact to evening flights as well as nighttime flights; (7) noise standards for regulating aircraft noise from civil subsonic turbojets are generally based on an aircraft's weight and number of engines; (8) the heavier the aircraft and the greater the number of engines, the more noise the aircraft is allowed to generate and still comply with the required noise limits; (9) the newest set of standards--stage 3 standards--apply to all aircraft weighing more than 75,000 pounds and to newly manufactured aircraft weighing 75,000 pounds or less; (10) these lighter aircraft did not have to be retired under earlier noise standards because FAA concluded that it was questionable whether the technology existed to modify those aircraft in a cost-effective manner; (11) under its Initiative, FAA announced five short-term actions in May 1999 designed primarily to provide information that state and local governments can use to improve the compatibility of land uses near airports; and (12) based on comments provided by the aviation sector and the general public, there are four principal areas of concern associated with the Initiative.
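The three measurement methods summarized above differ only in the time-of-day weighting applied to each flight's noise energy. The sketch below is a simplified illustration, not FAA's implementation: the 10-decibel nighttime penalty and 5-decibel evening penalty are the conventional values used by the Day-Night Sound Level and similar evening-weighted methods, and the flight levels are invented for the example.

```python
import math

# Each flight event is (hour_of_day, sound_exposure_level_dB). Adding a
# penalty in decibels before energy-averaging is equivalent to counting the
# event multiple times (10 dB ~ 10x, 5 dB ~ 3.16x).

def exposure(events, night_penalty_db=0.0, evening_penalty_db=0.0):
    """Energy-average the day's events into a single exposure level in dB."""
    total_energy = 0.0
    for hour, level_db in events:
        if hour >= 22 or hour < 7:        # nighttime: 10 p.m. to 7 a.m.
            level_db += night_penalty_db
        elif 19 <= hour < 22:             # evening: 7 p.m. to 10 p.m.
            level_db += evening_penalty_db
        total_energy += 10 ** (level_db / 10)
    seconds_per_day = 24 * 3600
    return 10 * math.log10(total_energy / seconds_per_day)

flights = [(8, 95), (14, 95), (20, 95), (23, 95)]  # illustrative events
leq = exposure(flights)                                   # all flights equal
dnl = exposure(flights, night_penalty_db=10)              # night weighted more
cnel = exposure(flights, night_penalty_db=10,
                evening_penalty_db=5)                     # evening weighted too
assert leq < dnl < cnel  # each added penalty raises the computed exposure
```

The ordering in the final assertion mirrors the report's point: for the same set of flights, methods that penalize evening and nighttime operations report higher community exposure.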
DOD Instruction 4151.20 prescribes a “depot maintenance core capabilities determination process” to identify, in part, the (1) required core capabilities for depot maintenance and (2) planned workloads needed to support those capabilities. The instruction describes a series of mathematical computations and adjustments, which the military services use to compute their core capability requirements and to identify planned workloads needed to support these requirements. First, the services identify the weapon systems required to execute the Joint Chiefs of Staff contingency scenarios, which represent plans for responding to conflicts that may occur in the future. After the systems are identified, the services compute annual depot maintenance capability requirements for peacetime in direct labor hours to represent the amount of time they regularly take to perform required maintenance. Then contingency requirements and resource adjustments are made to account for applicable surge factors during the different phases of a contingency, such as preparation/readiness and sustainment. Further adjustments are made to account for redundancy in depot capability. For example, a service may determine that repair capabilities for specific systems maintained in military depots are so similar that the capabilities for one system can effectively satisfy the requirements of another. Core capability requirements are also adjusted when one service’s maintenance requirements will be supported by the maintenance capabilities of other services. During this process of identifying the systems for which they will be required to maintain repair capabilities, the services organize and aggregate their capability data by categories of equipment and technologies known as work breakdown structure categories. The work breakdown structure provides a way for DOD to break down a category of weapon system or equipment into subcategories of its parts at increasingly lower levels of detail. 
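The series of computations and adjustments described above can be sketched as follows. This is a hypothetical simplification of the process in DOD Instruction 4151.20: the function name, surge factor, and direct-labor-hour figures are invented for illustration, and the actual instruction involves additional phase-by-phase contingency adjustments.

```python
# Illustrative sketch of a core capability requirement computation in direct
# labor hours (DLH): peacetime hours are scaled by a contingency surge factor,
# then reduced for redundant capability and interservice support.

def core_requirement(peacetime_dlh, surge_factor,
                     redundancy_offset_dlh=0, interservice_offset_dlh=0):
    """Return the annual core capability requirement in DLH (never negative)."""
    contingency_dlh = peacetime_dlh * surge_factor
    return max(contingency_dlh
               - redundancy_offset_dlh
               - interservice_offset_dlh, 0)

# A system needing 10,000 DLH per year in peacetime, with a 1.4 surge factor
# and 2,000 DLH satisfied by a similar system's repair capability:
req = core_requirement(10_000, 1.4, redundancy_offset_dlh=2_000)
assert req == 12_000
```

The redundancy offset corresponds to the report's example of one system's repair capability satisfying another's requirements, and the interservice offset to one service's requirements being supported by another service's capabilities.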
The work breakdown structure can be expressed at any level of detail down to the lowest-level part, such as a bolt. These categories, the programs or systems they include, and the lower-level elements or subcategories of defense materiel or equipment into which they are broken down are referred to by DOD as “levels of indenture.” There are eleven categories at the top level—“first” level—of the work breakdown structure. A first-level category summarizes information for an entire type of system or equipment, such as aircraft or ground vehicles. Table 1 shows the eleven first-level categories of the work breakdown structure. A first-level category can be broken down into second-level subcategories, which are the major elements that make up the system or equipment in the first-level category. For example, the first-level category for Aircraft can be broken down into the second-level subcategories for Airframes, Aircraft Components, and Aircraft Engines, which are major elements that make up an aircraft. The second-level subcategories can be further broken down into third-level subcategories, which are subordinate elements that make up the major elements in the second-level categories. For example, the second-level subcategory for Airframes is further divided into the third-level subcategories—different types of airframes, such as Rotary, Fighter/Attack, or Bomber. The subcategories can be further broken down to the lowest-level element of the system. Table 2 shows an example of the top three levels of the work breakdown structure for Aircraft. After the services have identified their core capability requirements, they identify the amount of available planned workload within the work breakdown structure categories and subcategories. DOD Instruction 4151.20 requires the military services to report biennially to OSD their core capability requirements and planned workloads, in accordance with a tasking memorandum issued for each reporting cycle. 
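The levels of indenture described above form a tree. The sketch below models the top three levels for the Aircraft category using the subcategory names given in the text; the nested-dictionary representation is an illustrative choice, not DOD's data format.

```python
# Minimal sketch of a work breakdown structure as a tree: first-level
# category -> second-level subcategories -> third-level subcategories.
# Names follow the Aircraft example in the text.

wbs = {
    "Aircraft": {                                          # first level
        "Airframes": {                                     # second level
            "Rotary": {},                                  # third level
            "Fighter/Attack": {},
            "Bomber": {},
        },
        "Aircraft Components": {},
        "Aircraft Engines": {},
    },
}

def levels(tree, depth=1):
    """Yield (level_of_indenture, category_name) for every node."""
    for name, children in tree.items():
        yield depth, name
        yield from levels(children, depth + 1)

nodes = set(levels(wbs))
assert (1, "Aircraft") in nodes
assert (2, "Airframes") in nodes
assert (3, "Bomber") in nodes
```

In the biennial reporting worksheet, requirements and planned workloads are then attached to nodes of such a tree, mostly at the second level.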
The instruction includes a worksheet that the services must fill out and submit to OSD. The worksheet calls for information to be organized by the work breakdown structure to various subcategory levels, mostly at the second-level subcategories. Appendix III provides a table listing these categories and subcategories. On April 9, 2012, OSD issued the tasking memorandum for the 2012 Biennial Core Report, which directed the services to use DOD Instruction 4151.20 as basic guidance and included further guidance on how to meet the requirement under Section 2464 to report this information to Congress. The memorandum augments the worksheet by adding another column for the estimated costs of performing the planned workloads at the first level of categories. The instruction and tasking memorandum also require the services to provide additional information when reporting shortfalls in planned workloads. If a military depot does not have sufficient workload to sustain the required level of capability that has been identified, a shortfall exists—in other words, the military depots have not been assigned the depot maintenance workloads that would enable them to sustain their identified core capability requirements. For example, a depot may have identified 10,000 direct labor hours of core capability requirements for ground vehicles but have only 4,000 hours of assigned depot maintenance work for ground vehicles. This depot will have a shortfall of 6,000 hours. The instruction requires that the services report on shortfalls by providing a description along with the worksheet, but the shortfalls are not calculated in the worksheet. DOD’s 2012 Biennial Core Report to Congress complies with two of the required reporting elements of Section 2464—including core capability requirements and planned workload—and partially complies with the third element by including mitigation plans, but not all detailed rationales for workload shortfalls. 
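The shortfall arithmetic in the example above is straightforward and can be sketched directly. The figures are the text's hypothetical ground-vehicles example; the function is an illustration, not the worksheet's actual calculation, which as the text notes does not compute shortfalls at all.

```python
# Sketch of the shortfall concept: a shortfall exists wherever required core
# capability exceeds the depot maintenance workload assigned to sustain it.
# Figures are in direct labor hours (DLH).

def shortfall(required_dlh, assigned_dlh):
    """Return the unmet portion of the requirement, or 0 if none."""
    return max(required_dlh - assigned_dlh, 0)

# The text's example: 10,000 DLH required for ground vehicles, 4,000 assigned.
assert shortfall(10_000, 4_000) == 6_000
assert shortfall(4_000, 10_000) == 0   # surplus workload means no shortfall
```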
Further, the report provides complete information for each of the military services as aggregated to the top-level categories of the work breakdown structure. However, without providing clear explanations for the workload shortfalls that clarify why the services do not have the workload to meet core maintenance requirements, DOD’s report does not fully comply with Section 2464 and Congress lacks full visibility over DOD’s management of its shortfalls. OSD included in the report the requirements information expressed in direct labor hours for each of the military services. As reported, DOD’s total core capability requirements are about 70 million direct labor hours. Table 3 shows a summary of these core capability requirements by military service. Further, the information in DOD’s report on core capability requirements for each of the military services is complete as aggregated to the top-level categories of the work breakdown structure. Section 2464 requires the information in the Biennial Core Report to be organized by work breakdown structure; however, the statute does not specify at which category level of the work breakdown structure the information should be reported. To obtain the information needed to support the 2012 report, OSD’s memorandum directed the services to provide to OSD, among other things, information on core requirements and planned workloads at various lower-level subcategories. The memorandum also directed the services to provide, in any instance where core requirements exceed planned workloads, additional information on a plan to address workload shortfalls. Each of the services provided information in response to OSD’s memorandum. In response to the tasking memorandum, the services provided data on their planned workloads—the amount of available work used to maintain the required capability—by the top categories and various levels of subcategories in the work breakdown structure. 
In the report, OSD included complete information on the amount of planned workload that is available to maintain the required capability, expressed in direct labor hours at the top-level categories and the estimated cost of these workloads for each of the military services. As reported, DOD has a total planned workload of about 92 million direct labor hours at an estimated cost of about $12 billion. Table 4 shows a summary of these workloads. However, we identified an anomaly in the information reported for the Marine Corps. Its planned workload for the sea ships category was reported as 15,124 direct labor hours, without any reported cost. Because the estimated cost of this workload is reported as $0, it is unclear whether the cost for this work is accounted for in DOD’s report. OSD and Marine Corps officials stated that the workload hours to do these repairs are to be performed by the Marine Corps for the Navy. The Navy would reimburse the Marine Corps for the workload hours. However, the Navy’s submission for the report did not include these workload hours to be performed by the Marine Corps. Thus, these hours and cost were not clearly accounted for in the workload cost figures included in the report. OSD officials stated that they noticed the anomaly, but that their reporting time constraints precluded them from thoroughly investigating it. The report shows that the Navy had a workload of 8.9 million direct labor hours above the core maintenance requirement in the Sea Ships category. Because of this, OSD officials believed that the estimated workload was sufficiently covered and this error would not result in a shortfall. While DOD’s overall planned workloads exceed its core capability requirements, DOD’s report shows shortfalls in certain categories for the Army and the Air Force. The report includes complete information on shortfalls at the top-level categories and plans to mitigate all of the shortfalls identified in the report. 
However, the report does not include required information on the rationale for some of these shortfalls—the reasons why the services do not have the workloads to meet core maintenance requirements. Section 2464 requires that DOD include in its report “in any case where core depot-level maintenance and repair capability requirements exceed or are expected to exceed sustaining workloads,”—that is, in any case where there are shortfalls—“a detailed rationale for the shortfall and a plan either to correct, or mitigate, the effects of the shortfall.” Consistent with how it reported the core requirements and planned workloads, OSD aggregated the workload shortfalls under the top-level categories of the work breakdown structure for each service. The report shows that the Navy and Marine Corps did not identify any shortfalls in the workloads available to support their core capability requirements. In assessing the completeness of DOD’s report, we determined that the Navy and Marine Corps did not have workload shortfalls at any of the lower-level categories at which they provided information to OSD. The report shows workload shortfalls for the Army and Air Force totaling about 1.4 million direct labor hours. Table 5 shows the shortfalls identified in the report. In assessing the completeness of DOD’s report, we determined that the Army and Air Force identified shortfalls at lower-level subcategories and submitted supplemental information to OSD describing these anticipated shortfalls. For the report, OSD aggregated the information on core requirements and planned workloads provided by the services at the top- level categories of the work breakdown structure. OSD officials told us that the shortfalls included in the report were calculated by taking the difference between the total requirements and planned workload at the top-level categories. 
Because of this calculation, some of the workload shortfalls identified by the services at the lower-level categories were balanced out by surplus workload in other lower-level categories under the same top-level category. Thus, these lower-level shortfalls were not included in the report. For the Army, the report showed that there are workload shortfalls of approximately 1 million direct labor hours in the top-level categories for Ground Vehicles and Support Equipment. The Army also submitted information to OSD on additional shortfalls in lower-level subcategories totaling approximately 1.5 million direct labor hours. These shortfalls are anticipated in various third-level subcategories under the top-level categories of Aircraft; Ground Vehicles; Communication/Electronic Equipment; Support Equipment; and Ordnance, Weapons, and Missiles. For example, the Army identified a shortfall of about 625,000 direct labor hours under the third-level subcategory of Communication Systems Equipment, which is under the top-level category for Communication/Electronic Equipment. For the Air Force, the report reflects total workload shortfalls of approximately 404,000 direct labor hours in the two top-level categories of Communication/Electronic Equipment, and Ordnance, Weapons, and Missiles. However, the Air Force also provided information to OSD on additional shortfalls of about 64,000 direct labor hours for the second-level subcategory of Aircraft Components, under the broader Aircraft category. OSD officials told us that they chose to report at the top level because they believe this best reflects the services’ ability to provide core maintenance, as surplus planned workload in lower-level categories could make up for shortfalls in other categories.
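The masking effect described above follows directly from the order of operations: computing the shortfall after summing subcategories lets a surplus in one subcategory cancel a deficit in a sibling. The sketch below illustrates this with hypothetical subcategory figures (only the 625,000-hour lower-level shortfall echoes a number from the report); it is not OSD's actual data or tooling.

```python
# Minimal sketch of the aggregation effect: a shortfall computed at the
# lower level can vanish when requirements and workloads are summed first.
# Subcategory figures are hypothetical.
subcategories = {
    # (core requirement, planned workload) in direct labor hours
    ("Communication/Electronic Equipment", "Communication Systems Equipment"): (900_000, 275_000),
    ("Communication/Electronic Equipment", "Other Electronics"): (400_000, 1_100_000),
}

def shortfall(requirement, workload):
    """A shortfall exists only when the requirement exceeds the planned workload."""
    return max(requirement - workload, 0)

# Method 1: compute shortfalls per subcategory, then sum them.
lower = sum(shortfall(req, wl) for req, wl in subcategories.values())

# Method 2 (how OSD reported): sum to the top level first, then difference.
total_req = sum(req for req, _ in subcategories.values())
total_wl = sum(wl for _, wl in subcategories.values())
top = shortfall(total_req, total_wl)

print(f"{lower:,}")  # lower-level shortfall hours remain visible
print(top)           # top-level shortfall is zero: the surplus offsets the deficit
```

Whether Method 2 is the better picture depends on the premise OSD cited: that skills, facilities, and equipment are transferable among systems within a top-level category, so a surplus genuinely can absorb a sibling subcategory's deficit.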
They noted that skills, facilities, and equipment are transferable from one system to another within the top-level category of a work breakdown structure, and that aggregating workload to the top level presents a more accurate picture of shortfalls. The report provides mitigation plans for identified shortfalls in the Army workload but does not provide explanations for all shortfalls to clarify the reasons why the Army does not have sufficient workload to meet core maintenance requirements. The report identifies Army shortfalls of 869,547 direct labor hours that are needed to support its required core maintenance capability to maintain equipment under its Ground Vehicles category of work and 112,462 direct labor hours under its Support Equipment category of work. The report stated that the shortfall in the Ground Vehicles category includes workload shortfalls for two subcategories—combat vehicles and tactical wheeled vehicles. The report provides both an explanation and a mitigation plan for the shortfall in the combat vehicles subcategory, but does not provide an explanation for the workload shortfalls in the tactical wheeled vehicles subcategory. For the combat vehicles shortfall, the report states that the workload shortfall is a result of low usage of ground combat vehicles in current operations. In addition, the report states that in recent years, the Army has executed robust depot maintenance programs to recapitalize and upgrade these vehicles. Army officials responsible for compiling the Army’s input stated that these programs have resulted in the positive health and condition of these systems. Because of this low usage and the recent improvements to the systems, the Army anticipates minimal depot repair for these vehicles at this time. The Army plans to mitigate this shortfall by allowing military depot workers to repair similar vehicles that are used to support other maintenance programs.
For tactical wheeled vehicles, the report states that there is a workload shortfall at Red River Army Depot but does not provide a reason for the shortfall. Army officials stated that the reasons for this shortfall are the same as those for the ground combat vehicles shortfall—low usage and recent improvements have reduced workloads in this area. Army officials told us that the Army is anticipating force structure reductions that will significantly lower the number of tactical wheeled vehicles and result in a lower core maintenance requirement. Army officials told us that the Army is also forming an Integrated Process Team to review the core maintenance requirements for all ground vehicle systems. However, this shortfall mitigation information was not included in the report. The report also does not clearly explain why there is a workload shortfall in the Support Equipment category—that is, why the Army estimates it will not have the workload to meet its core maintenance requirements. The report states only that the shortfall is related to repairs of the Rhino Passive Infrared Defeat System and Floating Bridges, as well as other equipment repairs, such as equipment related to bulk petroleum, oil, and lubricant distribution. Army officials stated that the reasons for these shortfalls are the same as the reasons for the shortfalls for ground vehicles—low usage and recent improvements have reduced workloads in this area. The Army assessed this category to be at minimal risk, and it plans to use similar workloads to mitigate this shortfall. Further, Army officials stated that they project a decrease in core maintenance requirements in this area because of anticipated force structure changes. However, this shortfall mitigation information was not included in DOD’s report.
The report provides mitigation plans for identified shortfalls in the Air Force core capabilities but does not provide explanations for all of the shortfalls. The report identifies an Air Force shortfall of 260,698 direct labor hours that are needed to support its required core capability for maintaining equipment under its Communication/Electronic Equipment category of work. The report also identifies an additional Air Force workload shortfall of 143,280 direct labor hours that are needed to support the Air Force’s required core capability in maintaining equipment under its Ordnance, Weapons, and Missiles category of work. The report does not clearly provide an explanation for why the Air Force anticipates insufficient workload to meet its core maintenance requirements in the Communication/Electronic Equipment category. The report identifies only that the shortfall in communications workload is primarily for unmanned aerial systems ground stations. We asked Air Force officials to clarify the reason for the shortfall, and they told us that the shortfall is caused by the lack of organic (military) depot capability to repair unmanned aerial vehicle ground stations for the anticipated increase in manufacturing of the MQ-1 Predator and MQ-9 Reaper aircraft. According to Air Force officials, contractors currently repair the stations. However, this shortfall explanation information was not included in the report. The report does not clearly provide an explanation for why the Air Force anticipates insufficient workload to meet its core maintenance requirements in the Ordnance, Weapons, and Missiles category. The report states only that for the Ordnance, Weapons, and Missiles category, the workload shortfall is in missile components. When asked to provide a reason for the shortfall, Air Force officials told us that this shortfall is driven by the lack of organic (military) depot capability for Missile Components work. However, this information is not included in the report. The Air Force plans to mitigate this shortfall by assigning work to Air Force depots to support existing and new weapon systems, such as missile launchers and defensive missile systems for the MQ-1 Predator and MQ-9 Reaper unmanned aerial systems. According to the report, the Air Force plans to begin the work on the MQ-1 and MQ-9 to mitigate the shortfall in the fourth quarter of fiscal year 2012 and complete this work by fiscal year 2017. Additional work on other aircraft weapon systems, such as for the F-35, will also be used to mitigate this shortfall. This additional work will begin in the first quarter of fiscal year 2013, with full implementation over the following 12-24 months. In addition, because the Air Force does not currently have the facilities and personnel at the military depots to execute the identified planned workload, it also identified a capability shortfall. Capital investment refers to improvements made to facilities or equipment that would make production more efficient or meet expected future needs. Air Force officials told us that the Air Force had no scheduled capital investments for the assigned work at the military depots at the time of the report. The report does not always include detailed explanations for identified workload shortfalls, because the Army and Air Force did not always provide explanations for them. Without clear explanations for why the services do not have the workload to meet core maintenance requirements, Congress does not have visibility into whether the services’ plans to correct or mitigate the shortfalls will address the cause of the shortfalls. Section 2464, among other things, requires DOD to maintain a core maintenance capability that is government-owned and government-operated, assign sufficient workload to support this capability, and report information on this capability to Congress.
DOD’s first report to Congress includes most of the required elements. However, it did not provide explanations for all of the identified workload shortfalls. Clear reasons for why the services do not have the workload to meet core maintenance requirements would give Congress key information about whether the services’ plans to correct or mitigate the shortfalls would address the cause of the shortfalls. Without complete and clear information on this element of the statute, Congress may lack full visibility into the status of DOD’s management of its core capabilities. To ensure that Congress has visibility over the status of DOD’s core depot-level maintenance and repair capability, we recommend that the Secretary of Defense direct the Deputy Assistant Secretary of Defense (Maintenance, Policy, and Programs) to include in the Biennial Core Report to Congress detailed explanations for why the services do not have the workload to meet core maintenance requirements for each shortfall identified in the report. We provided a draft of this report to DOD for comment. In its written comments, reproduced in appendix IV, DOD concurred with our recommendation and stated that the department will include an explanation and mitigation plan for each workload shortfall identified in all future reports. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Deputy Assistant Secretary of Defense (Maintenance, Policy, and Programs); the Secretaries of the Army, Navy, and Air Force and the Commandant of the Marine Corps; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or merrittz@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix V. To determine the extent to which the Department of Defense’s (DOD) 2012 Biennial Core Report complies with Section 2464 of Title 10 of the United States Code and includes service data and information required by DOD to support the report, we analyzed the text of DOD’s Biennial Core Report and obtained supporting information on DOD’s core determination process for 2013. One of our analysts reviewed DOD’s report to determine the extent to which it included each element of the mandate, and a second analyst reviewed the first analyst’s conclusions. All initial disagreements between analysts were discussed and resolved through consensus. When the report explicitly included all parts of a mandated element, our assessment was that DOD “complied” with the element. When the report did not explicitly include any part of the element, our assessment was that DOD “did not comply” with the element. If the report included some aspects of an element, but not all, then our assessment was that DOD “partially complied” with the element. We also checked that each service provided the same type of information. To assess the level of completeness of the information, we obtained and analyzed the data and other information that the Office of the Secretary of Defense (OSD) required the military service headquarters to provide in support of the report. We compared the services’ submissions to the reporting template in DOD Instruction 4151.20 to determine the extent to which the services submitted the information required by DOD’s instruction and to identify any inconsistencies or errors.
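The three-way assessment rule described above reduces to a simple decision function. The sketch below captures that logic for illustration only; it is not GAO's actual analysis tooling, and the example element is hypothetical.

```python
# Sketch of the three-way compliance rule: "complied" when all parts of a
# mandated element appear in the report, "did not comply" when none do,
# and "partially complied" otherwise. parts_included is assumed nonempty.
def assess(parts_included):
    """parts_included: list of booleans, one per part of a mandated element."""
    if all(parts_included):
        return "complied"
    if not any(parts_included):
        return "did not comply"
    return "partially complied"

# Hypothetical third element, with two parts per shortfall:
# a mitigation plan (present) and a detailed rationale (missing).
print(assess([True, True]))    # complied
print(assess([True, False]))   # partially complied
print(assess([False, False]))  # did not comply
```

Applied to the third reporting element, mitigation plans present but rationales missing for some shortfalls yields "partially complied," which matches the assessment in this report.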
We assessed the reliability of all of the data analyzed and reported on: individual team members independently reviewed the services’ submissions to determine (1) whether the requirements were met and (2) the extent to which the data provided support the responses. The team also reviewed the data provided by the services to OSD to support their respective responses for the Biennial Core Report. Individual team members compared the data provided to OSD to the data published in the report to ensure consistency. The team also met with knowledgeable officials to obtain clarification and understanding of the content of the submissions. From these analyses, the team concluded that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from August 2012 to February 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In 2009, we reported that the Department of Defense (DOD), through its biennial core process, had not comprehensively and accurately assessed whether it had the required core capability to support fielded systems in military depots. We found that, among other things, DOD’s method of compiling and internally reporting core requirements and associated workloads did not reveal specific shortfalls, and Congress lacked visibility of DOD’s core process because there was no requirement for DOD to provide its Biennial Core Report to Congress.
We recommended that the Under Secretary of Defense for Acquisition, Technology and Logistics take several actions related to improving the Biennial Core Report, including requiring DOD to compile and report the services’ core capability requirements, planned workloads, and any shortfalls by work breakdown structure category, and requiring DOD to provide this report to Congress. Table 6 details the three recommendations and one matter for congressional consideration we made in our 2009 report that are relevant to this review of the Biennial Core Report, and the respective actions taken by DOD. In addition to the contact named above, Carleen Bennett, Assistant Director; Gina Hoffman; Joanne Landesman; Jennifer Madison; Michael Silver; Jose Watkins; and Michael Willems made key contributions to this report. Defense Logistics: DOD Input Needed on Implementing Depot Maintenance Study Recommendations. GAO-11-568R. Washington, D.C.: June 30, 2011. Depot Maintenance: Actions Needed to Identify and Establish Core Capability at Military Depots. GAO-09-83. Washington, D.C.: May 14, 2009. DOD Civilian Personnel: Improved Strategic Planning Needed to Help Ensure Viability of DOD’s Civilian Industrial Workforce. GAO-03-472. Washington, D.C.: April 30, 2003.
DOD uses both military depots and contractors to maintain many complex weapon systems and equipment. Recognizing the key role of the depots and the risk of overreliance on contractors, Section 2464 of Title 10 of the U.S. Code requires DOD to maintain a core maintenance capability--a combination of personnel, facilities, equipment, processes, and technology (expressed in direct labor hours) that is government-owned and government-operated--needed to meet contingency and emergency requirements. Section 2464 directs DOD to provide a Biennial Core Report to Congress and include three elements: (1) core capability requirements, (2) planned workloads, and (3) explanations and mitigation plans for any shortfalls between core capability requirements and planned workloads. In response to a requirement in Section 2464, GAO assessed the extent to which the report complied with the statute and included supporting information from the services as required by DOD. GAO reviewed relevant legislation, DOD's 2012 Biennial Core Report, the services' submissions to support the report, and related DOD guidance. The Department of Defense's (DOD) 2012 Biennial Core Report complies with two of the three biennial reporting elements of Section 2464 by including information on core capability requirements and planned workloads available for maintaining these requirements. The Office of the Secretary of Defense (OSD) reported core capability requirements totaling about 70 million direct labor hours for the military services. Also, OSD reported a total of about 92 million direct labor hours for planned workloads with an estimated cost of about $12 billion. OSD reported complete information on core requirements and planned workload at the top-level categories, such as Sea Ships, of the work breakdown structure. The statute directs that this information be organized by work breakdown structure, which is a group of categories of equipment and technologies. 
The top-level category--an entire type of system or equipment--can be broken down into lower levels of detail or subcategories, such as Aircraft Carriers or Submarines, that make up the system or equipment. DOD's overall planned workloads exceed its core capability requirements, but the report shows shortfalls in certain categories for the Army and the Air Force. The report partially complies with the third biennial reporting element. DOD's report includes information on shortfalls at the top-level categories and plans to mitigate all shortfalls--where requirements exceed planned workload--identified in the report. However, the report does not include required information on the rationale for some of these shortfalls--reasons why the services do not have the workload to meet core requirements. The Navy and Marine Corps did not identify any shortfalls and were not required to provide explanations or mitigation plans. The report includes mitigation plans for shortfalls identified by the Army and the Air Force but does not always provide detailed explanations for why the Army and Air Force do not have sufficient planned workload to meet core requirements. These explanations are missing because the Army and Air Force did not always provide them to OSD. Without clear explanations for why the services have shortfalls, Congress does not have visibility into whether the services' plans to correct or mitigate the shortfalls will address the cause of the shortfalls. GAO recommends that DOD improve its Biennial Core Report by including detailed explanations of why the services do not have the workload to meet core maintenance requirements for each identified shortfall. In written comments on a draft of the report, DOD concurred with the recommendation.